Re: [PATCH v8 11/13] vpci: add initial support for virtual PCI bus topology
 
Hi Jan,
Jan Beulich <jbeulich@xxxxxxxx> writes:
> On 20.07.2023 02:32, Volodymyr Babchuk wrote:
>> --- a/xen/drivers/vpci/vpci.c
>> +++ b/xen/drivers/vpci/vpci.c
>> @@ -46,6 +46,16 @@ void vpci_remove_device(struct pci_dev *pdev)
>>          return;
>>  
>>      spin_lock(&pdev->vpci->lock);
>> +
>> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
>> +    if ( pdev->vpci->guest_sbdf.sbdf != ~0 )
>> +    {
>> +        __clear_bit(pdev->vpci->guest_sbdf.dev,
>> +                    &pdev->domain->vpci_dev_assigned_map);
>> +        pdev->vpci->guest_sbdf.sbdf = ~0;
>> +    }
>> +#endif
>
> The lock acquired above is not ...
vpci_remove_device() is called when d->pci_lock is already held.
But I'll move this hunk before the spin_lock(&pdev->vpci->lock) call; we
don't need to hold that lock while cleaning vpci_dev_assigned_map.
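I.e. something along these lines (just a sketch; the early return and
the rest of the function are elided):

void vpci_remove_device(struct pci_dev *pdev)
{
    ASSERT(rw_is_write_locked(&pdev->domain->pci_lock));

    ...

#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
    /* Protected by d->pci_lock, not by the per-device vpci lock. */
    if ( pdev->vpci->guest_sbdf.sbdf != ~0 )
    {
        __clear_bit(pdev->vpci->guest_sbdf.dev,
                    &pdev->domain->vpci_dev_assigned_map);
        pdev->vpci->guest_sbdf.sbdf = ~0;
    }
#endif

    spin_lock(&pdev->vpci->lock);
    ...
}
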
>> @@ -115,6 +129,54 @@ int vpci_add_handlers(struct pci_dev *pdev)
>>  }
>>  
>>  #ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
>> +static int add_virtual_device(struct pci_dev *pdev)
>> +{
>> +    struct domain *d = pdev->domain;
>> +    pci_sbdf_t sbdf = { 0 };
>> +    unsigned long new_dev_number;
>> +
>> +    if ( is_hardware_domain(d) )
>> +        return 0;
>> +
>> +    ASSERT(pcidevs_locked());
>> +
>> +    /*
>> +     * Each PCI bus supports 32 devices/slots at max or up to 256 when
>> +     * there are multi-function ones which are not yet supported.
>> +     */
>> +    if ( pdev->info.is_extfn )
>> +    {
>> +        gdprintk(XENLOG_ERR, "%pp: only function 0 passthrough supported\n",
>> +                 &pdev->sbdf);
>> +        return -EOPNOTSUPP;
>> +    }
>> +
>> +    write_lock(&pdev->domain->pci_lock);
>> +    new_dev_number = find_first_zero_bit(d->vpci_dev_assigned_map,
>> +                                         VPCI_MAX_VIRT_DEV);
>> +    if ( new_dev_number >= VPCI_MAX_VIRT_DEV )
>> +    {
>> +        write_unlock(&pdev->domain->pci_lock);
>> +        return -ENOSPC;
>> +    }
>> +
>> +    __set_bit(new_dev_number, &d->vpci_dev_assigned_map);
>
> ... the same as the one held here, so the bitmap still isn't properly
> protected afaics, unless the intention is to continue to rely on
> the global PCI lock (assuming that one's held in both cases, which I
> didn't check it is). Conversely it looks like the vPCI lock isn't
> held here. Both aspects may be intentional, but the locks being
> acquired differing requires suitable code comments imo.
As I stated above, vpci_remove_device() is called when d->pci_lock is
already held.
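As for the suitable code comment, I can add something like this next to
the bitmap manipulation (wording is just a suggestion):

/*
 * vpci_dev_assigned_map is protected by the per-domain pci_lock, held
 * in write mode by all callers, not by the per-device vpci->lock.
 */
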
> I've also briefly looked at patch 1, and I'm afraid that still lacks
> commentary about intended lock nesting. That might be relevant here
> in case locking visible from patch / patch context isn't providing
> the full picture.
>
There is
    ASSERT(rw_is_write_locked(&pdev->domain->pci_lock));
at the beginning of vpci_remove_device(), which is added by
"vpci: use per-domain PCI lock to protect vpci structure".
I believe it would be more beneficial to review the series from the
beginning.
>> +    /*
>> +     * Both segment and bus number are 0:
>> +     *  - we emulate a single host bridge for the guest, e.g. segment 0
>> +     *  - with bus 0 the virtual devices are seen as embedded
>> +     *    endpoints behind the root complex
>> +     *
>> +     * TODO: add support for multi-function devices.
>> +     */
>> +    sbdf.devfn = PCI_DEVFN(new_dev_number, 0);
>> +    pdev->vpci->guest_sbdf = sbdf;
>> +    write_unlock(&pdev->domain->pci_lock);
>
> With the above I also wonder whether this lock can't (and hence
> should) be dropped a little earlier (right after fiddling with the
> bitmap).
This is a good observation, thanks.
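If I got the idea right, the tail of add_virtual_device() would then
look something like this (a sketch; it assumes guest_sbdf itself does
not need d->pci_lock once the slot is reserved in the bitmap):

    __set_bit(new_dev_number, &d->vpci_dev_assigned_map);
    write_unlock(&pdev->domain->pci_lock);

    /*
     * Both segment and bus number are 0:
     *  - we emulate a single host bridge for the guest, e.g. segment 0
     *  - with bus 0 the virtual devices are seen as embedded
     *    endpoints behind the root complex
     */
    sbdf.devfn = PCI_DEVFN(new_dev_number, 0);
    pdev->vpci->guest_sbdf = sbdf;
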
-- 
WBR, Volodymyr
 