[Xen-devel] [PATCH 0/6] IOMMU, vtd and iotlb flush rework (v5)
In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM. Allen
suggested that I go for Plan B, but after a discussion with Tim we agreed
that Plan B was far too disruptive in terms of code changes. This patch
series implements Plan A.

http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html

Changes between v4 and v5:
        - Fix hypercall continuation for add_to_physmap in compat mode.

Changes between v3 and v4:
        - Move the loop for gmfn_range from arch_memory_op to
          xenmem_add_to_physmap.
        - Add a comment to explain the purpose of iommu_dont_flush_iotlb.

Changes between v2 and v3:
        - Check for the presence of the iotlb_flush_all callback before
          calling it.

Changes between v1 and v2:
        - Move size in struct xen_add_to_physmap into the padding between
          .domid and .space.
        - Store iommu_dont_flush per cpu.
        - Change the code in hvmloader to relocate in batches of 64K;
          .size is now 16 bits.

Jean Guyader (6):
  vtd: Refactor iotlb flush code
  iommu: Introduce iommu_flush and iommu_flush_all.
  add_to_physmap: Move the code for XENMEM_add_to_physmap.
  mm: New XENMEM space, XENMAPSPACE_gmfn_range
  hvmloader: Change memory relocation loop when overlap with PCI hole.
  Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary
    iotlb flush

 tools/firmware/hvmloader/pci.c      |   20 +++++--
 xen/arch/x86/mm.c                   |   82 ++++++++++++++++++++------
 xen/arch/x86/x86_64/compat/mm.c     |   10 ++++
 xen/drivers/passthrough/iommu.c     |   25 +++++++++
 xen/drivers/passthrough/vtd/iommu.c |  100 ++++++++++++++++++++-----------
 xen/include/public/memory.h         |    4 ++
 xen/include/xen/iommu.h             |   17 ++++++
 7 files changed, 192 insertions(+), 66 deletions(-)
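To illustrate the idea behind Plan A, here is a minimal, simplified sketch
(not the actual patch) of how a XENMAPSPACE_gmfn_range request could be
handled with the per-cpu flag: per-page IOTLB flushes are suppressed while
the batch is remapped, and a single flush is issued at the end.
remap_one_gmfn() is a hypothetical stand-in for the per-page work that
xenmem_add_to_physmap() performs for each frame in the range.

/*
 * Simplified sketch only -- not the actual patch.  remap_one_gmfn() is a
 * hypothetical helper standing in for the per-page remapping work.
 */
static long handle_gmfn_range(struct domain *d,
                              struct xen_add_to_physmap *xatp)
{
    unsigned int i;
    long rc = 0;

    /* Suppress the per-page IOTLB flush for the duration of the batch. */
    this_cpu(iommu_dont_flush_iotlb) = 1;

    for ( i = 0; i < xatp->size && rc == 0; i++ )
        rc = remap_one_gmfn(d, xatp->idx + i, xatp->gpfn + i);

    this_cpu(iommu_dont_flush_iotlb) = 0;

    /* One flush for the whole range instead of one per page. */
    iommu_flush_all();

    return rc;
}

On the guest side, hvmloader can then relocate RAM that overlaps the PCI
hole in batches (bounded by the 16-bit .size field), issuing one
XENMAPSPACE_gmfn_range hypercall per batch instead of one call and one
IOTLB flush per page.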