Re: [PATCH] x86/CPUID: suppress IOMMU related hypervisor leaf data


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 4 Jan 2021 15:56:40 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Mon, 04 Jan 2021 14:56:57 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Jan 04, 2021 at 02:43:52PM +0100, Jan Beulich wrote:
> On 28.12.2020 11:54, Roger Pau Monné wrote:
> > On Mon, Nov 09, 2020 at 11:54:09AM +0100, Jan Beulich wrote:
> >> Now that the IOMMU for guests can't be enabled "on demand" anymore,
> >> there's also no reason to expose the related CPUID bit "just in case".
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> > 
> > I'm not sure this is helpful from a guest PoV.
> > 
> > How does the guest know whether it has pass through devices, and thus
> > whether it needs to check if this flag is present or not in order to
> > safely pass foreign mapped pages (or grants) to the underlying devices?
> > 
> > Ie: prior to this change I would just check whether the flag is
> > present in CPUID to know whether FreeBSD needs to use a bounce buffer
> > in blkback and netback when running as a domU. If this is now
> > conditionally set only when the IOMMU is enabled for the guest I
> > also need to figure a way to know whether the domU has any passed
> > through device or not, which doesn't seem trivial.
> 
> I'm afraid I don't understand your concern and/or description of
> the scenario. Prior to the change, the bit was set unconditionally.
> To me, _that_ was making the bit useless - no point in checking
> something which is always set anyway (leaving aside old Xen
> versions).

This bit was used to differentiate between Xen versions that don't
create IOMMU mappings for grants/foreign maps (and are therefore
broken) and versions that do create such mappings. If the bit is not
set, HVM domains need a bounce buffer in blkback/netback in order to
avoid handing granted pages directly to devices.
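
For reference, a minimal sketch (not the actual FreeBSD code) of the
check described above. It assumes the flag in question is
XEN_HVM_CPUID_IOMMU_MAPPINGS, i.e. bit 2 of EAX in the HVM-specific
hypervisor leaf (base + 4), with the Xen base leaf located by scanning
for the "XenVMMXenVMM" signature:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <cpuid.h>

static bool xen_iommu_mappings_present(void)
{
    uint32_t eax, ebx, ecx, edx;
    char sig[13];

    /* Scan the hypervisor leaf range for the Xen signature. */
    for (uint32_t base = 0x40000000; base < 0x40010000; base += 0x100) {
        __cpuid(base, eax, ebx, ecx, edx);
        memcpy(sig + 0, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        sig[12] = '\0';
        if (strcmp(sig, "XenVMMXenVMM") == 0 && eax >= base + 4) {
            /* HVM-specific leaf: EAX bit 2 advertises IOMMU mappings
             * for grants/foreign maps. */
            __cpuid(base + 4, eax, ebx, ecx, edx);
            return eax & (1u << 2);
        }
    }
    return false;
}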

Now it's my understanding that with this change the bit might
sometimes be clear not because we are running on an unfixed Xen
version, but because no IOMMU is assigned to the domain, so the guest
will fall back to using a bounce buffer.
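
To illustrate the concern, a hypothetical blkback/netback-style
decision (names are illustrative, not FreeBSD's actual API): with this
change a clear flag covers two cases that the guest cannot tell apart,
and both end up on the bounce-buffer path.

static bool backend_needs_bounce_buffer(void)
{
    /*
     * Flag set: Xen guarantees grant/foreign mappings are also
     * reflected in the IOMMU, so granted pages can be handed to
     * devices directly.
     */
    if (xen_iommu_mappings_present())
        return false;

    /*
     * Flag clear: either an old Xen without the fix, or (after this
     * patch) a domU without an IOMMU context.  The guest cannot
     * distinguish the two, so it must conservatively bounce.
     */
    return true;
}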

Thanks, Roger.
