Re: [PATCH v11 4/5] vpci/msi: Implement cleanup function for MSI
On Fri, Aug 08, 2025 at 04:03:36PM +0800, Jiqian Chen wrote:
> When MSI initialization fails, vPCI hides the capability, but
> removing handlers and data won't be performed until the device is
> deassigned. So, implement an MSI cleanup hook that will be called to
> clean up MSI related handlers and free its associated data when
> initialization fails.
>
> Since the cleanup function of MSI is implemented, delete the open-coded
> version in vpci_deassign_device().
>
> Signed-off-by: Jiqian Chen <Jiqian.Chen@xxxxxxx>
> ---
> cc: "Roger Pau Monné" <roger.pau@xxxxxxxxxx>
> ---
> v10->v11 changes:
> * Add hide parameter to cleanup_msi().
> * Check hide; if false, return directly instead of making ctrl RO.
> * Delete xfree(pdev->vpci->msi); in vpci_deassign_device().
> * Remove Roger's Reviewed-by since the patch changed.
>
> v9->v10 changes:
> No.
>
> v8->v9 changes:
> * Add Roger's Reviewed-by.
>
> v7->v8 changes:
> * Add a comment to describe why "-2" in cleanup_msi().
> * Given the code in vpci_remove_registers(), an error in the removal of
>   registers would likely imply memory corruption, at which point it's
>   best to fully disable the device. So, roll back the last two
>   modifications of v7.
>
> v6->v7 changes:
> * Change the pointer parameter of cleanup_msi() to be const.
> * When vpci_remove_registers() in cleanup_msi() fails, don't return
>   directly; instead try to free msi and re-add the ctrl handler.
> * Pass pdev->vpci into vpci_add_register() instead of pdev->vpci->msi in
>   init_msi(), since every handler needs to realize that msi is NULL
>   when msi is freed but the handlers are still in place.
>
> v5->v6 changes:
> No.
>
> v4->v5 changes:
> * Change the definition "static void cleanup_msi" to "static int cf_check
>   cleanup_msi" since the cleanup hook was changed to return int.
> * Add a read-only register for the MSI Control Register at the end of
>   cleanup_msi().
>
> v3->v4 changes:
> * Change the function name from fini_msi() to cleanup_msi().
> * Remove an unnecessary comment.
> * Change to use XFREE to free vpci->msi.
>
> v2->v3 changes:
> * Remove all failure paths, and use the fini_msi() hook instead.
> * Change the method of calculating the size of the MSI registers.
>
> v1->v2 changes:
> * Add a new function fini_msi() to free all MSI resources instead of
>   using an array to record registered registers.
>
> Best regards,
> Jiqian Chen.
> ---
>  xen/drivers/vpci/msi.c  | 49 ++++++++++++++++++++++++++++++++++++++++-
>  xen/drivers/vpci/vpci.c |  1 -
>  2 files changed, 48 insertions(+), 2 deletions(-)
>
> diff --git a/xen/drivers/vpci/msi.c b/xen/drivers/vpci/msi.c
> index c3eba4e14870..6ab45b9ba655 100644
> --- a/xen/drivers/vpci/msi.c
> +++ b/xen/drivers/vpci/msi.c
> @@ -193,6 +193,53 @@ static void cf_check mask_write(
>      msi->mask = val;
>  }
>
> +static int cf_check cleanup_msi(const struct pci_dev *pdev, bool hide)
> +{
> +    int rc;
> +    unsigned int end;
> +    struct vpci *vpci = pdev->vpci;
> +    const unsigned int msi_pos = pdev->msi_pos;
> +    const unsigned int ctrl = msi_control_reg(msi_pos);
> +
> +    if ( !msi_pos || !vpci->msi )
> +        return 0;
> +
> +    if ( vpci->msi->masking )
> +        end = msi_pending_bits_reg(msi_pos, vpci->msi->address64);
> +    else
> +        /*
> +         * "-2" here is to cut the reserved 2 bytes of Message Data when
> +         * there is no masking support.
> +         */
> +        end = msi_mask_bits_reg(msi_pos, vpci->msi->address64) - 2;
> +
> +    rc = vpci_remove_registers(vpci, ctrl, end - ctrl);
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "%pd %pp: fail to remove MSI handlers rc=%d\n",
> +               pdev->domain, &pdev->sbdf, rc);
> +        ASSERT_UNREACHABLE();
> +        return rc;
> +    }
> +
> +    XFREE(vpci->msi);
> +
> +    if ( !hide )
> +        return 0;

Kind of the same comment as in the previous patch: for the non-hide case
there's likely no reason to perform the removal of the registers, as the
caller will take care of that.  The function could be short-circuited
earlier as:

    if ( !hide )
    {
        XFREE(vpci->msi);
        return 0;
    }

Thanks, Roger.
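For illustration only, below is a minimal sketch of how cleanup_msi() could
look with the suggested short-circuit folded in. It is not the committed
code; it simply rearranges the hunk quoted above and assumes the same
helpers visible there (msi_control_reg(), msi_pending_bits_reg(),
msi_mask_bits_reg(), vpci_remove_registers() and XFREE()).

    static int cf_check cleanup_msi(const struct pci_dev *pdev, bool hide)
    {
        int rc;
        unsigned int end;
        struct vpci *vpci = pdev->vpci;
        const unsigned int msi_pos = pdev->msi_pos;
        const unsigned int ctrl = msi_control_reg(msi_pos);

        if ( !msi_pos || !vpci->msi )
            return 0;

        /*
         * Non-hide (deassign) case: the caller removes all registers
         * anyway, so only the vMSI state needs freeing here.
         */
        if ( !hide )
        {
            XFREE(vpci->msi);
            return 0;
        }

        if ( vpci->msi->masking )
            end = msi_pending_bits_reg(msi_pos, vpci->msi->address64);
        else
            /*
             * "-2" cuts the reserved 2 bytes after Message Data when
             * there is no per-vector masking support.
             */
            end = msi_mask_bits_reg(msi_pos, vpci->msi->address64) - 2;

        rc = vpci_remove_registers(vpci, ctrl, end - ctrl);
        if ( rc )
        {
            printk(XENLOG_ERR "%pd %pp: fail to remove MSI handlers rc=%d\n",
                   pdev->domain, &pdev->sbdf, rc);
            ASSERT_UNREACHABLE();
            return rc;
        }

        XFREE(vpci->msi);

        return 0;
    }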