Re: [Xen-devel] Xen virtual IOMMU high level design doc V2
Hi Andrew:
Thanks for your review.
On 2016-10-19 03:17, Andrew Cooper wrote:
> On 18/10/16 15:14, Lan Tianyu wrote:
>> Sorry, it's a typo.
> Also, grammatically speaking, I think you mean "each vcpu to separate
> pcpus".

Yes.

>> 255 vcpu support requires x2APIC, and Linux disables x2APIC mode if
>> there is no interrupt remapping function, which is provided by the
>> vIOMMU. The interrupt remapping function helps to deliver interrupts
>> when #vcpu > 255.
> This is only a requirement for xapic interrupt sources. x2apic
> interrupt sources already deliver correctly.

The key is the APIC ID. There is no modification to the existing PCI MSI
and IOAPIC message formats with the introduction of x2APIC. A PCI
MSI/IOAPIC message can only carry an 8-bit APIC ID, which cannot address
more than 255 cpus. Interrupt remapping supports a 32-bit APIC ID, so it
is necessary in order to enable more than 255 cpus in x2APIC mode. If the
LAPIC is in x2APIC mode while interrupt remapping is disabled, the IOAPIC
cannot deliver interrupts to all cpus in the system when #cpu > 255. (A
short standalone sketch of this 8-bit vs. 32-bit destination ID point
follows further below.)

>> 1.3 Support guest SVM (Shared Virtual Memory)
>>
>> It relies on the l1 translation table capability (GVA->GPA) of the
>> vIOMMU. The pIOMMU needs to enable both l1 and l2 translation in
>> nested mode (GVA->GPA->HPA) for the passthrough device. IGD
>> passthrough is the main usage today (to support the OpenCL 2.0 SVM
>> feature). In the future SVM might be used by other I/O devices too.
> As an aside, how is IGD intending to support SVM? Will it be with PCIe
> ATS/PASID, or something rather more magic, as IGD is on the same piece
> of silicon?

IGD on Skylake supports PCIe PASID.
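
Going back to the interrupt remapping point above, here is a minimal
standalone sketch (not Xen code) of why the destination ID width matters.
The field layout follows the MSI address format and the VT-d IRTE
definition, but the struct and helper names are made up for illustration:

#include <stdint.h>

/* Compatibility (xAPIC) format MSI address: the destination ID sits in
 * address bits 19:12, so only 8 bits of APIC ID can be expressed. */
static inline uint8_t compat_msi_dest(uint32_t msi_addr)
{
    return (msi_addr >> 12) & 0xff;
}

/* Remappable format MSI address: those bits instead carry an interrupt
 * remapping table handle.  The IRTE selected by the handle holds a full
 * 32-bit destination ID, which is what makes x2APIC targets above 255
 * reachable. */
struct irte {
    uint32_t dest_id;   /* 32-bit APIC ID in x2APIC mode */
    uint8_t  vector;
    /* other IRTE fields omitted */
};

static inline uint32_t remapped_msi_dest(const struct irte *irt,
                                         uint16_t handle)
{
    return irt[handle].dest_id;
}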
No, this is for traditional virtual PCI devices emulated by Qemu as well
as passthrough PCI devices.
Do you know what the status of dmop is now? I only found some discussions
about its design on the mailing list. Maybe we can use domctl first and
move to dmop when it's ready?
Ok. Will update.
The former one seems to move the DMA operation into the hypervisor, but
the Qemu vIOMMU framework just passes the IOVA to the dummy xen-vIOMMU,
without the input data and access length. I will dig more to figure out a
solution.
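
For reference, a purely illustrative sketch of the translate-only shape I
mean: at the point the dummy xen-vIOMMU is called, only the IOVA and the
access direction are known; the DMA data buffer and its length are not,
so the full DMA operation cannot simply be forwarded into the hypervisor
from here. The names below are assumptions, not Qemu's real vIOMMU API:

#include <stdint.h>
#include <stdbool.h>

struct viommu_xlat {
    uint64_t translated_addr;   /* resulting GPA */
    uint64_t addr_mask;         /* page mask of the mapping */
    bool     writable;
};

/* Hypothetical callback: translate a single IOVA for a given access
 * direction; no data pointer or length is available to pass along. */
typedef int (*viommu_translate_fn)(void *opaque, uint64_t iova,
                                   bool is_write, struct viommu_xlat *out);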
Ok. Will update.

>> Definition of VIOMMU subops:
>> #define XEN_SYSCTL_viommu_query_capability          0
>> #define XEN_SYSCTL_viommu_create                    1
>> #define XEN_SYSCTL_viommu_destroy                   2
>> #define XEN_SYSCTL_viommu_dma_translation_for_vpdev 3
>>
>> Definition of VIOMMU capabilities:
>> #define XEN_VIOMMU_CAPABILITY_l1_translation        (1 << 0)
>> #define XEN_VIOMMU_CAPABILITY_l2_translation        (1 << 1)
>> #define XEN_VIOMMU_CAPABILITY_interrupt_remapping   (1 << 2)
> How are vIOMMUs going to be modelled to guests? On real hardware, they
> all seem to end up associated with a PCI device of some sort, even if
> it is just the LPC bridge.

This design just assumes one vIOMMU covers all PCI devices under its
specified PCI segment, so the "INCLUDE_PCI_ALL" bit of the DRHD structure
is set for the vIOMMU. For multiple vIOMMUs, we would need to add a new
field in the struct iommu_op to designate the device scope of each vIOMMU
if they are under the same PCI segment. This also requires changes to the
DMAR table.
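
Purely as an illustration of how the subops above could hang off a single
sysctl argument structure (every field name here is an assumption for the
sake of the sketch, not the actual proposal):

#include <stdint.h>

struct xen_sysctl_viommu_op {
    uint32_t cmd;                    /* XEN_SYSCTL_viommu_* subop */
    uint32_t domid;                  /* target domain */
    union {
        struct {
            uint64_t capabilities;   /* out: OR of XEN_VIOMMU_CAPABILITY_* */
        } query_capability;
        struct {
            uint64_t base_address;   /* guest address of register block */
            uint64_t capabilities;   /* in: capabilities to enable */
            uint32_t viommu_id;      /* out: handle for later subops */
        } create;
        struct {
            uint32_t viommu_id;
        } destroy;
        struct {
            uint32_t viommu_id;
            uint32_t sbdf;           /* virtual device issuing the DMA */
            uint64_t addr;           /* in: IOVA; out: translated GPA */
        } dma_translation_for_vpdev;
    } u;
};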
No, we don't need to trap all writes to the IO page table. From VT-d spec
6.1: "Reporting the Caching Mode as Set for the virtual hardware requires
the guest software to explicitly issue invalidation operations on the
virtual hardware for any/all updates to the guest remapping structures.
The virtualizing software may trap these guest invalidation operations to
keep the shadow translation structures consistent to guest translation
structure modifications, without resorting to other less efficient
techniques."

So any update of the IO page table is followed by an invalidation
operation, and we use those invalidations to do the shadowing (a rough
sketch of this follows further below).

>> Now all PCI devices in the same hvm domain share one IO page table
>> (GPA->HPA) in the physical IOMMU driver of Xen. To support l2
>> translation of the vIOMMU, the IOMMU driver needs to support multiple
>> address spaces per device entry: use the existing IO page table
>> (GPA->HPA) by default, and switch to a shadow IO page table
>> (IOVA->HPA) when the l2 translation function is enabled. These
>> changes will not affect the current P2M logic.
>>
>> 3.3 Interrupt remapping
>>
>> Interrupts from virtual devices and physical devices will be
>> delivered to the vlapic from the vIOAPIC and vMSI. Interrupt
>> remapping hooks need to be added in vmsi_deliver() and
>> ioapic_deliver() to find the target vlapic according to the interrupt
>> remapping table.
>>
>> 3.4 l1 translation
>>
>> When nested translation is enabled, any address generated by l1
>> translation is used as the input address for nesting with l2
>> translation. The physical IOMMU needs to enable both l1 and l2
>> translation in nested translation mode (GVA->GPA->HPA) for the
>> passthrough device.
> All these l1 and l2 translations are getting confusing. Could we
> perhaps call them guest translation and host translation, or is that
> likely to cause other problems?

These are the definitions of l1 and l2 translation from the VT-d spec:
first-level translation remaps a virtual address to an intermediate
(guest) physical address, and second-level translation remaps an
intermediate physical address to a machine (host) physical address.
"Guest translation" and "host translation" may therefore not be suitable
names for them.

>> The VT-d context entry points to the guest l1 translation table,
>> which will be nest-translated by the l2 translation table, and so it
>> can be directly linked to the context entry of the physical IOMMU. To
>> enable l1 translation in a VM:
>> 1) The Xen IOMMU driver enables nested translation mode.
>> 2) Update the GPA root of the guest l1 translation table into the
>>    context entry of the physical IOMMU.
>> All of this is handled in the hypervisor; there is no interaction
>> with Qemu.
>>
>> 3.5 Implementation consideration
>>
>> The VT-d spec doesn't define a capability bit for the l2 translation.
>> Architecturally there is no way to tell the guest that the l2
>> translation capability is not available. The Linux Intel IOMMU driver
>> assumes l2 translation is always available when VT-d exists and fails
>> to load without l2 translation support, even if interrupt remapping
>> and l1 translation are available. So l2 translation needs to be
>> enabled before the other functions.
> What then is the purpose of the nested translation support bit in the
> extended capability register?

It's to translate the output GPA from the first-level translation
(IOVA->GPA) to HPA. For detail please see VT-d spec 3.8 "Nested
Translation":

"When Nesting Enable (NESTE) field is 1 in extended-context-entries,
requests-with-PASID translated through first-level translation are also
subjected to nested second-level translation. Such extended-context-
entries contain both the pointer to the PASID-table (which contains the
pointer to the first-level translation structures), and the pointer to
the second-level translation structures."
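
On the Caching Mode based shadowing mentioned at the top of this reply,
here is a rough standalone sketch of the idea; the callback types stand
in for the guest IO page table walk and the P2M lookup, and all names are
illustrative rather than the actual implementation:

#include <stdint.h>

/* Stand-ins for: walking the guest IO page table (IOVA->GPA), looking up
 * the P2M (GPA->HPA), and writing one shadow entry (IOVA->HPA). */
typedef int (*walk_guest_fn)(void *ctx, uint64_t iova, uint64_t *gpa);
typedef int (*p2m_lookup_fn)(void *ctx, uint64_t gpa, uint64_t *hpa);
typedef int (*shadow_set_fn)(void *ctx, uint64_t iova, uint64_t hpa);

/* Called when the guest issues an IOTLB invalidation covering
 * [iova, iova + npages * 4K).  Because Caching Mode is reported as set,
 * every guest IO page table update is followed by such an invalidation,
 * so re-shadowing here keeps the IOVA->HPA table consistent without
 * trapping individual page table writes. */
static int shadow_sync_on_invalidate(void *ctx, uint64_t iova,
                                     unsigned long npages,
                                     walk_guest_fn walk, p2m_lookup_fn p2m,
                                     shadow_set_fn set)
{
    unsigned long i;

    for ( i = 0; i < npages; i++, iova += 4096 )
    {
        uint64_t gpa, hpa;

        if ( walk(ctx, iova, &gpa) || p2m(ctx, gpa, &hpa) )
            continue;   /* no mapping: nothing to shadow for this page */
        if ( set(ctx, iova, hpa) )
            return -1;
    }

    return 0;
}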
>> 4.4 Report vIOMMU to hvmloader
>>
>> Hvmloader is in charge of building ACPI tables for the guest OS, and
>> the OS probes the IOMMU via the ACPI DMAR table. So hvmloader needs
>> to know whether the vIOMMU is enabled or not, and its capabilities,
>> in order to prepare the ACPI DMAR table for the guest OS.
>>
>> There are three ways to do that:
>> 1) Extend struct hvm_info_table and add variables in it to pass
>>    vIOMMU information to hvmloader. But this requires a new xc
>>    interface to use struct hvm_info_table in Qemu.
>> 2) Pass vIOMMU information to hvmloader via Xenstore.
>> 3) Build the ACPI DMAR table in Qemu and pass it to hvmloader via
>>    Xenstore. This solution is already present in the vNVDIMM design
>>    (4.3.1 Building Guest ACPI Tables,
>>    http://www.gossamer-threads.com/lists/xen/devel/439766).
>>
>> The third option seems cleaner: hvmloader doesn't need to deal with
>> vIOMMU details and just passes the DMAR table through to the guest
>> OS. All vIOMMU specific work is done in the dummy xen-vIOMMU driver.
> Part of ACPI table building has now moved into the toolstack. Unless
> the table needs creating dynamically (which doesn't appear to be the
> case), it can be done without any further communication.

The DMAR table needs to be created according to input parameters. E.g.
when interrupt remapping is enabled, the INTR_REMAP bit in the DMAR
structure needs to be set. So we do need to create the table dynamically
while creating the VM.
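
To illustrate the point about building the DMAR table dynamically, here
is a simplified sketch of setting the INTR_REMAP flag at build time. The
structure below is a cut-down version of the ACPI DMAR header (several
ACPI header fields are omitted) and the builder is only a sketch, not
hvmloader or Qemu code:

#include <stdint.h>
#include <string.h>

#define DMAR_FLAG_INTR_REMAP   (1u << 0)   /* INTR_REMAP bit in DMAR flags */

struct acpi_dmar_header {
    char     signature[4];          /* "DMAR" */
    uint32_t length;
    uint8_t  revision;
    uint8_t  checksum;
    /* ... OEM ID and other ACPI header fields omitted ... */
    uint8_t  host_address_width;    /* HAW - 1 */
    uint8_t  flags;
};

static void build_dmar_header(struct acpi_dmar_header *h,
                              unsigned int haw, int intr_remap_enabled)
{
    memset(h, 0, sizeof(*h));
    memcpy(h->signature, "DMAR", 4);
    h->revision = 1;
    h->host_address_width = haw - 1;
    if ( intr_remap_enabled )
        h->flags |= DMAR_FLAG_INTR_REMAP;
    /* DRHD sub-structures (with INCLUDE_PCI_ALL set, per the design)
     * would be appended here, then length and checksum fixed up. */
}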
--
Best regards
Tianyu Lan