Re: [PATCH] x86/CPUID: suppress IOMMU related hypervisor leaf data
On 04/01/2021 14:56, Roger Pau Monné wrote:
> On Mon, Jan 04, 2021 at 02:43:52PM +0100, Jan Beulich wrote:
>> On 28.12.2020 11:54, Roger Pau Monné wrote:
>>> On Mon, Nov 09, 2020 at 11:54:09AM +0100, Jan Beulich wrote:
>>>> Now that the IOMMU for guests can't be enabled "on demand" anymore,
>>>> there's also no reason to expose the related CPUID bit "just in case".
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>> I'm not sure this is helpful from a guest PoV.
>>>
>>> How does the guest know whether it has pass through devices, and thus
>>> whether it needs to check if this flag is present or not in order to
>>> safely pass foreign mapped pages (or grants) to the underlying devices?
>>>
>>> Ie: prior to this change I would just check whether the flag is
>>> present in CPUID to know whether FreeBSD needs to use a bounce buffer
>>> in blkback and netback when running as a domU. If this is now
>>> conditionally set only when the IOMMU is enabled for the guest I
>>> also need to figure a way to know whether the domU has any passed
>>> through device or not, which doesn't seem trivial.
>> I'm afraid I don't understand your concern and/or description of
>> the scenario. Prior to the change, the bit was set unconditionally.
>> To me, _that_ was making the bit useless - no point in checking
>> something which is always set anyway (leaving aside old Xen
>> versions).
> This bit was used to differentiate between versions of Xen that don't
> create IOMMU mappings for grants/foreign maps (and so are broken) vs
> versions of Xen that do create such mappings. If the bit is not set
> HVM domains need a bounce buffer in blkback/netback in order to avoid
> sending grants to devices.

And remember that "send to devices" != "has an IOMMU".  An HVM backend
would need to bounce buffer to use an emulated NVMe device, because
there is no distinction between emulated and real DMA from the guest
kernel's point of view.

The bit actually means "Xen doesn't screw up grant maps which requested
a DMA mapping any more".

~Andrew

P.S. If you wonder why I picked the NVMe example, it's because it turns
out that NVMe emulated by QEMU has better performance than the PV blk
protocol.
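[Editor's note: for context, a minimal sketch of how a domU backend might
probe the bit discussed above. It assumes the layout from Xen's public
arch-x86/cpuid.h (XEN_HVM_CPUID_IOMMU_MAPPINGS is bit 2 of EAX in
hypervisor leaf base+4, sub-leaf 0); the helper name is illustrative and
not code from this thread.]

/*
 * Sketch: probe the Xen hypervisor CPUID leaves for the "IOMMU mappings"
 * flag.  Assumptions (from Xen's public arch-x86/cpuid.h, not from this
 * thread):
 *   - the base leaf is found by scanning 0x40000000, 0x40000100, ... for
 *     the "XenVMMXenVMM" signature;
 *   - leaf base+4, sub-leaf 0, EAX bit 2 (XEN_HVM_CPUID_IOMMU_MAPPINGS)
 *     indicates that memory mapped from other domains has valid IOMMU
 *     entries.
 */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <cpuid.h>

#define XEN_HVM_CPUID_IOMMU_MAPPINGS (1u << 2)

static bool xen_iommu_mappings_present(void)
{
    uint32_t eax, ebx, ecx, edx;

    /* Xen's base leaf may be shifted in 0x100 steps (e.g. with viridian). */
    for (uint32_t base = 0x40000000; base < 0x40010000; base += 0x100) {
        char sig[13];

        __cpuid(base, eax, ebx, ecx, edx);
        memcpy(sig + 0, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        sig[12] = '\0';

        if (strcmp(sig, "XenVMMXenVMM") || eax < base + 4)
            continue;

        /* HVM-specific feature leaf: base + 4, sub-leaf 0. */
        __cpuid_count(base + 4, 0, eax, ebx, ecx, edx);
        return eax & XEN_HVM_CPUID_IOMMU_MAPPINGS;
    }

    /* Not running on Xen, or leaf absent: assume no IOMMU mappings. */
    return false;
}

A backend such as blkback/netback could call this once at start-up and
fall back to bounce buffering when it returns false, which covers both
the old-Xen case and the emulated-DMA case described above.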