
Re: [RFC PATCH] xen/amd-iommu: Add interrupt remapping quirk for ath11k


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • From: Jason Andryuk <jason.andryuk@xxxxxxx>
  • Date: Thu, 13 Mar 2025 14:03:57 -0400
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, "Julien Grall" <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "Xenia Ragiadakou" <xenia.ragiadakou@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 13 Mar 2025 18:04:15 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 2025-03-13 12:24, Andrew Cooper wrote:
> On 13/03/2025 4:07 pm, Jan Beulich wrote:
>> On 13.03.2025 17:02, Andrew Cooper wrote:
>>> On 13/03/2025 3:47 pm, Roger Pau Monné wrote:
>>>> On Thu, Mar 13, 2025 at 11:30:28AM -0400, Jason Andryuk wrote:
>>>>> On 2025-02-27 05:23, Roger Pau Monné wrote:
>>>>>> On Wed, Feb 26, 2025 at 04:11:25PM -0500, Jason Andryuk wrote:
>>>>>>> The ath11k device supports and tries to enable 32 MSIs.  Linux in PVH
>>>>>>> dom0 and HVM domU fails enabling 32 and falls back to just 1, so that is
>>>>>>> all that has been tested.
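
For reference, the "32" is what the device advertises in the Multiple Message
Capable field of its MSI Message Control register, which encodes the vector
count as a power of two.  A minimal standalone sketch of decoding that word
(field layout per the PCI spec; the register value below is hypothetical):

    #include <stdio.h>
    #include <stdint.h>

    /* MSI Message Control register fields (PCI spec):
     * bits 3:1 = Multiple Message Capable (log2 of vectors the device requests)
     * bits 6:4 = Multiple Message Enable  (log2 of vectors actually granted)
     */
    static unsigned int msi_vectors_capable(uint16_t msg_ctrl)
    {
        return 1u << ((msg_ctrl >> 1) & 0x7);
    }

    static unsigned int msi_vectors_enabled(uint16_t msg_ctrl)
    {
        return 1u << ((msg_ctrl >> 4) & 0x7);
    }

    int main(void)
    {
        uint16_t msg_ctrl = 0x008a; /* hypothetical: 64-bit capable, capable = 0b101 (32 vectors) */

        printf("capable %u, enabled %u\n",
               msi_vectors_capable(msg_ctrl), msi_vectors_enabled(msg_ctrl));
        return 0;
    }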
>>>>>> DYK why it fails to enable 32?
>>>>> In Linux msi_capability_init()
>>>>>
>>>>>          /* Reject multi-MSI early on irq domain enabled architectures */
>>>>>          if (nvec > 1 && !pci_msi_domain_supports(dev, MSI_FLAG_MULTI_PCI_MSI,
>>>>>                                                   ALLOW_LEGACY))
>>>>>                  return 1;
>>>>>
>>>>> MSI_FLAG_MULTI_PCI_MSI is only set for AMD and Intel interrupt remapping,
>>>>> and Xen PVH and HVM don't have either of those.  They are using "VECTOR", so
>>>>> this check fails.
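
In other words, under PVH/HVM the PCI/MSI parent is the plain x86 "VECTOR"
domain, which does not advertise MSI_FLAG_MULTI_PCI_MSI, so the request is
clamped to one vector before any allocation is attempted.  A rough standalone
model of that gate (the structures and flag below are simplified stand-ins,
not the kernel's actual definitions):

    #include <stdio.h>

    /* Simplified stand-in for the kernel's irq-domain feature flag. */
    #define MSI_FLAG_MULTI_PCI_MSI  (1u << 0)

    struct msi_parent {
        const char *name;
        unsigned int flags;
    };

    /* Models the msi_capability_init() gate: without multi-MSI support in
     * the parent irq domain, a request for nvec > 1 is clamped to 1. */
    static int msi_vectors_granted(const struct msi_parent *p, int nvec)
    {
        if (nvec > 1 && !(p->flags & MSI_FLAG_MULTI_PCI_MSI))
            return 1;
        return nvec;
    }

    int main(void)
    {
        struct msi_parent vector = { "VECTOR", 0 };                      /* PVH dom0 / HVM domU */
        struct msi_parent amd_ir = { "AMD-IR", MSI_FLAG_MULTI_PCI_MSI }; /* native with remapping */

        printf("%s: %d of 32\n", vector.name, msi_vectors_granted(&vector, 32));
        printf("%s: %d of 32\n", amd_ir.name, msi_vectors_granted(&amd_ir, 32));
        return 0;
    }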
>>>> Oh, interesting.  So classic PV MSI domain supports
>>>> MSI_FLAG_MULTI_PCI_MSI, even when no IOMMU is exposed there either.

I was told PV dom0 used 32 MSIs, but I don't readily see MSI_FLAG_MULTI_PCI_MSI
set anywhere.  I guess it gets to xen_initdom_setup_msi_irqs(), which supports
multiple nvecs.
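
Conceptually (a standalone toy model, not the actual arch/x86/pci/xen.c code;
all helpers below are invented for illustration), the PV dom0 path can honour
nvec > 1 because each MSI vector is just another Xen pirq mapped and bound in
a loop:

    #include <stdio.h>

    /* Hypothetical stand-ins: in this toy model a "pirq" is just a number
     * handed out sequentially, and binding returns a matching irq. */
    static int next_pirq = 100;
    static int map_msi_pirq(void)         { return next_pirq++; }
    static int bind_pirq_to_irq(int pirq) { return pirq + 512; }

    static int setup_msi_vectors(int nvec)
    {
        for (int i = 0; i < nvec; i++) {
            int pirq = map_msi_pirq();
            int irq  = bind_pirq_to_irq(pirq);
            printf("vector %2d -> pirq %d -> irq %d\n", i, pirq, irq);
        }
        return 0;
    }

    int main(void)
    {
        return setup_msi_vectors(32);  /* nvec > 1 is just more loop iterations */
    }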


>>>> Thanks, so it's nothing specific to Xen, just how Linux works.
>>> This is something which TGLX and I have discussed in the past.  It is a
>>> mistake for any x86 system to do MSI multi-message without an IOMMU.
>> Well, with PVH there always will be an IOMMU, just that Linux can't see
>> it. Even with PV it should be the hypervisor to determine whether multi-
>> message MSI is possible. Hence how the classic (non-pvops) kernel had
>> worked in this regard.

> Xen should hide (and instruct Qemu to hide) multi-message on non-IOMMU
> hardware.  The result of "supporting" them on non-IOMMU hardware is
> worse than making the driver run in single MSI mode.
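
Hiding multi-message would essentially mean clearing the Multiple Message
Capable field in the Message Control word the guest reads for a passed-through
device.  A minimal sketch of that masking (field mask per the PCI spec; the
function and the remapping check are hypothetical, not existing Xen or QEMU
code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PCI_MSI_FLAGS_QMASK  0x000e  /* Multiple Message Capable, bits 3:1 */

    /* Hypothetical filter on guest reads of a passed-through device's MSI
     * Message Control register: without interrupt remapping, advertise
     * support for a single vector only. */
    static uint16_t filter_msi_msg_ctrl(uint16_t hw_value, bool intremap_enabled)
    {
        if (!intremap_enabled)
            hw_value &= ~PCI_MSI_FLAGS_QMASK;  /* Multiple Message Capable = 1 vector */
        return hw_value;
    }

    int main(void)
    {
        uint16_t hw = 0x008a;  /* hypothetical: device advertises 32 vectors */
        printf("guest sees 0x%04x\n", (unsigned)filter_msi_msg_ctrl(hw, false));
        return 0;
    }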

FWIW, QEMU MSI support hardcodes 1 MSI.

Regards,
Jason
