
Re: PCI pass-through problem for SN570 NVME SSD


  • To: "G.R." <firemeteor@xxxxxxxxxxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 4 Jul 2022 17:33:15 +0200
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Mon, 04 Jul 2022 15:33:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Jul 04, 2022 at 10:51:53PM +0800, G.R. wrote:
> On Mon, Jul 4, 2022 at 9:09 PM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> > >
> > > 05:00.0 Non-Volatile memory controller: Sandisk Corp Device 501a (prog-if 
> > > 02 [NVM Express])
> > >       Subsystem: Sandisk Corp Device 501a
> > >       Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
> > > Stepping- SERR- FastB2B- DisINTx+
> > >       Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- 
> > > <TAbort- <MAbort- >SERR- <PERR- INTx-
> > >       Latency: 0, Cache Line Size: 64 bytes
> > >       Interrupt: pin A routed to IRQ 16
> > >       NUMA node: 0
> > >       IOMMU group: 13
> > >       Region 0: Memory at a2600000 (64-bit, non-prefetchable) [size=16K]
> > >       Region 4: Memory at a2604000 (64-bit, non-prefetchable) [size=256]
> >
> > I think I'm slightly confused, the overlapping happens at:
> >
> > (XEN) d1: GFN 0xf3078 (0xa2616,0,5,7) -> (0xa2504,0,5,7) not permitted
> >
> > So it's MFNs 0xa2616 and 0xa2504, yet none of those are in the BAR
> > ranges of this device.
> >
> > Can you paste the lspci -vvv output for any other device you are also
> > passing through to this guest?
> >
> 
> I just realized that the address may change in different environments.
> In previous email chains, I used a cached dump from a Linux
> environment running outside the hypervisor.
> Sorry for the confusion. Refreshing with a XEN dom0 dump.
> 
> The other device I used is a SATA controller. I think I can get what
> you are looking for now.
> Both a2616 and a2504 are found!
> 
> 00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI
> Controller (rev 10) (prog-if 01 [AHCI 1.0])
>         DeviceName: Onboard - SATA
>         Subsystem: Gigabyte Technology Co., Ltd Cannon Lake PCH SATA
> AHCI Controller
>         Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop-
> ParErr- Stepping- SERR- FastB2B- DisINTx-
>         Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium
> >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
>         Interrupt: pin A routed to IRQ 16
>         Region 0: Memory at a2610000 (32-bit, non-prefetchable) [size=8K]
>         Region 1: Memory at a2616000 (32-bit, non-prefetchable) [size=256]
>         Region 2: I/O ports at 4090 [size=8]
>         Region 3: I/O ports at 4080 [size=4]
>         Region 4: I/O ports at 4060 [size=32]
> 
> 05:00.0 Non-Volatile memory controller: Sandisk Corp Device 501a
> (prog-if 02 [NVM Express])
>         Subsystem: Sandisk Corp Device 501a
>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
> ParErr- Stepping- SERR- FastB2B- DisINTx-
>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
> <TAbort- <MAbort- >SERR- <PERR- INTx-
>         Latency: 0, Cache Line Size: 64 bytes
>         Interrupt: pin A routed to IRQ 11
>         Region 0: Memory at a2500000 (64-bit, non-prefetchable) [size=16K]
>         Region 4: Memory at a2504000 (64-bit, non-prefetchable) [size=256]

Right, so hvmloader attempts to place a BAR from 05:00.0 and a BAR
from 00:17.0 into the same page, which is not good behavior.  It
might be sensible to share a page when both BARs belong to the
same device, but not when they belong to different devices.
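The arithmetic behind the collision can be sketched as follows (this is not hvmloader code, just an illustration of the page math): Xen maps guest memory at 4 KiB page granularity, so even a 256-byte BAR occupies a whole machine page frame (MFN). A single guest page (here GFN 0xf3078) can only map one MFN, so packing the SATA controller's small BAR and the NVMe controller's small BAR into the same guest page cannot work.

```python
# Illustrative sketch only, not hvmloader's actual logic: shows why the
# two sub-page BARs from the lspci dumps above collide at page granularity.

PAGE_SHIFT = 12  # Xen maps guest memory in 4 KiB pages

def page_frame(addr):
    """Return the page frame number (MFN/GFN) containing an address."""
    return addr >> PAGE_SHIFT

# BARs from the dom0 lspci output above: (label, base address, size in bytes)
bars = [
    ("00:17.0 SATA Region 0", 0xa2610000, 8 * 1024),
    ("00:17.0 SATA Region 1", 0xa2616000, 256),
    ("05:00.0 NVMe Region 0", 0xa2500000, 16 * 1024),
    ("05:00.0 NVMe Region 4", 0xa2504000, 256),
]

for label, base, size in bars:
    first = page_frame(base)
    last = page_frame(base + size - 1)
    print(f"{label}: MFN {first:#x}..{last:#x}")

# The two MFNs from the Xen error message are exactly the pages holding
# the two 256-byte BARs of *different* devices:
assert page_frame(0xa2616000) == 0xa2616  # SATA Region 1
assert page_frame(0xa2504000) == 0xa2504  # NVMe Region 4
# A guest page frame can map only one machine frame, so mapping both
# BARs at the same GFN (0xf3078) is rejected by Xen.
```

This also makes clear why sharing a page would be fine for two sub-page BARs of the same device: they could be backed by the same MFN if the hardware placed them that way, whereas BARs of different devices necessarily live in different MFNs.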

I think the following patch:

https://lore.kernel.org/xen-devel/20200117110811.43321-1-roger.pau@xxxxxxxxxx/

might help with this.

Thanks, Roger.
