
Re: [PATCH v5 5/5] vPCI: re-init extended-capabilities when MMCFG availability changed


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 5 Mar 2026 10:19:38 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Delivery-date: Thu, 05 Mar 2026 09:20:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Mar 05, 2026 at 09:40:32AM +0100, Jan Beulich wrote:
> On 04.03.2026 17:53, Roger Pau Monné wrote:
> > On Wed, Mar 04, 2026 at 04:39:00PM +0100, Jan Beulich wrote:
> >> On 04.03.2026 16:06, Roger Pau Monné wrote:
> >>> On Wed, Feb 25, 2026 at 12:44:44PM +0100, Jan Beulich wrote:
> >>>> @@ -349,22 +352,23 @@ int vpci_init_capabilities(struct pci_de
> >>>>      return 0;
> >>>>  }
> >>>>  
> >>>> -void vpci_cleanup_capabilities(struct pci_dev *pdev)
> >>>> +void vpci_cleanup_capabilities(struct pci_dev *pdev, bool ext_only)
> >>>>  {
> >>>
> >>> You could short-circuit the function here, ie:
> >>>
> >>> if ( ext_only && !is_hardware_domain(pdev->domain) )
> >>>     return;
> >>>
> >>> But I'm not sure that would simplify the code of the function much?
> >>> Likewise for vpci_init_capabilities().
> >>
> >> Such a short-circuit would need replacing / dropping once DomU support is
> >> added. I was hoping the chosen arrangement would make for a little less
> >> churn at that time. I'll listen to your advice, though, just that the
> >> question gives the impression you're not quite sure either.
> > 
> > Yeah, I wasn't fully sure.  It would be nice if we could add those
> > short-circuits now, and then once domU support is in place we just
> > remove them and it works for domU also.  But I fear more changes will
> > be needed anyway, at which point the short-circuit is not that
> > attractive to use.
> 
> As per your other request (calling ->cleanup() even for DomU-s) the use of
> is_hardware_domain() would go away anyway, and the function would be ready
> for use for DomU-s as well.

OK, so that one is (possibly) sorted then.

> >>>> +
> >>>> +    vpci_cleanup_capabilities(pdev, true);
> >>>> +
> >>>> +    if ( vpci_remove_registers(pdev->vpci, PCI_CFG_SPACE_SIZE,
> >>>> +                               PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) )
> >>>> +        ASSERT_UNREACHABLE();
> >>>
> >>> Ideally this would be better done the other way around: first remove
> >>> the handlers, and then clean up the capabilities.  Just to ensure no
> >>> stray handler could end up having cached references to data that's
> >>> been freed by vpci_cleanup_capabilities().
> >>
> >> And maybe not just that: For the hwdom case cleanup_rebar() adds new
> >> handlers, which we'd wrongly purge again right away. (Because we pass
> >> "false" for "hide", this isn't an active issue right now.)
> >>
> >>> And we should take the write_lock(&pdev->domain->pci_lock).
> >>
> >> Now this is a request that I'm struggling with somewhat. I can see that
> >> callers of vpci_{init,cleanup}_capabilities() assert that the lock is
> >> being held, yet it's not quite clear to me why that's needed. Shouldn't
> >> vPCI internals all synchronize on the vPCI lock of the domain?
> > 
> > Right, the callers of the handlers already hold the locks, and the
> > removal of the handlers should also hold the locks.  The point of
> > taking the d->pci_lock is to prevent the device from being removed
> > while there are vPCI accesses against it in progress.  The vPCI lock
> > is fine for vPCI internals, but functions that deal with addition or
> > removal of devices need the d->pci_lock to avoid races with possibly
> > freeing pdev->vpci while in use.
> > 
> > I think you are right, and for the usage here (that doesn't add or
> > remove pdev->vpci itself), the internal vPCI lock should be enough.
> 
> Well, we could take two positions: Either we say that as we're being called
> from a context where the PCI device is being operated on anyway, we can
> assume it can't go away. Then no further locking would be needed here. Or
> we want to explicitly guard against that, in which case (seeing that
> nothing is added / removed), d->pci_lock may want read-locking?

In this context the caller is already holding the pcidevs lock, so
there's no risk of the device being de-assigned.
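
(If we wanted that assumption to be self-documenting, the function could
gain an

    ASSERT(pcidevs_locked());

at its top - just a thought, not something I'd insist on.)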

My reasoning for taking the d->pci_lock in write mode was to prevent
any concurrent access to vPCI while changes to the emulated config
space are taking place, but you are right that the vPCI lock, if used
properly, should prevent races.
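
IOW the ordering I was asking for would roughly look like the sketch
below (sequence only, not meant as the exact code; locking is elided
here since, as said, the vPCI lock taken by the register helpers plus
the pcidevs lock held by the caller should cover us):

    /* Caller holds the pcidevs lock, so pdev cannot be de-assigned. */

    /*
     * Drop all handlers covering the extended config space first, so no
     * stray handler can keep a reference to per-capability state...
     */
    if ( vpci_remove_registers(pdev->vpci, PCI_CFG_SPACE_SIZE,
                               PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) )
        ASSERT_UNREACHABLE();

    /*
     * ...and only then tear down that state, which for the hwdom may
     * also add new handlers (cleanup_rebar()) that we then no longer
     * purge by mistake.
     */
    vpci_cleanup_capabilities(pdev, true);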

Thanks, Roger.



 

