Re: [PATCH v9 8/8] vpci/msix: Free MSIX resources when init_msix() fails
On 2025/7/30 00:36, Roger Pau Monné wrote:
> On Mon, Jul 28, 2025 at 01:04:01PM +0800, Jiqian Chen wrote:
>> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
>> index 4b8e6b28bd07..258356019535 100644
>> --- a/xen/drivers/vpci/vpci.c
>> +++ b/xen/drivers/vpci/vpci.c
>> @@ -321,6 +321,14 @@ void vpci_deassign_device(struct pci_dev *pdev)
>> &pdev->domain->vpci_dev_assigned_map);
>> #endif
>>
>> + if ( pdev->vpci->msix )
>> + {
>> + int rc = cleanup_msix(pdev);
>> + if ( rc )
>> + printk(XENLOG_ERR "%pd %pp: failed to cleanup MSIX data rc=%d\n",
>> + pdev->domain, &pdev->sbdf, rc);
>> + }
>> +
>> spin_lock(&pdev->vpci->lock);
>> while ( !list_empty(&pdev->vpci->handlers) )
>> {
>> @@ -332,18 +340,10 @@ void vpci_deassign_device(struct pci_dev *pdev)
>> xfree(r);
>> }
>> spin_unlock(&pdev->vpci->lock);
>> - if ( pdev->vpci->msix )
>> - {
>> - list_del(&pdev->vpci->msix->next);
>> - for ( i = 0; i < ARRAY_SIZE(pdev->vpci->msix->table); i++ )
>> - if ( pdev->vpci->msix->table[i] )
>> - iounmap(pdev->vpci->msix->table[i]);
>> - }
>>
>> for ( i = 0; i < ARRAY_SIZE(pdev->vpci->header.bars); i++ )
>> rangeset_destroy(pdev->vpci->header.bars[i].mem);
>>
>> - xfree(pdev->vpci->msix);
>
> Oh, I'm afraid this is not what I was expecting. You should call all
> the cleanup hooks here, so that you can also remove the vpci->msi
> xfree() (and any future ones). You want a loop over the array of
> registered vpci_capability_t and call all the defined cleanup()
> methods against the deassigned device IMO.
Oh, sorry, I misunderstood.
Will change.
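Something like the sketch below is what I have in mind (rough and untested; names such as __start_vpci_array, NUM_VPCI_CAPS and the cleanup member are only my assumption of how the capability registration from this series ends up being used):

/*
 * Rough sketch (hypothetical names): walk every registered
 * vpci_capability_t and invoke its cleanup() hook, so that
 * vpci_deassign_device() no longer references MSI/MSI-X directly.
 */
static void vpci_cleanup_capabilities(struct pci_dev *pdev)
{
    unsigned int i;

    for ( i = 0; i < NUM_VPCI_CAPS; i++ )
    {
        const vpci_capability_t *cap = &__start_vpci_array[i];

        if ( cap->cleanup )
        {
            int rc = cap->cleanup(pdev);

            if ( rc )
                printk(XENLOG_ERR "%pd %pp: capability cleanup failed rc=%d\n",
                       pdev->domain, &pdev->sbdf, rc);
        }
    }
}

vpci_deassign_device() would then call this helper instead of the MSIX-specific block, the MSI xfree() could move into an MSI cleanup hook as well, and cleanup_msix() can stay static.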
>
> That avoids having to reference any specific capability here, and new
> capabilities will only need to implement a cleanup handler without
> having to modify vpci_deassign_device(). You won't need to export
> cleanup_msix() either, which is ugly.
>
> Thanks, Roger.
--
Best regards,
Jiqian Chen.