Re: [PATCH v6 7/8] vpci/msi: Free MSI resources when init_msi() fails
On 25.06.2025 09:16, Chen, Jiqian wrote:
> On 2025/6/24 18:17, Jan Beulich wrote:
>> On 24.06.2025 11:49, Chen, Jiqian wrote:
>>> On 2025/6/18 22:45, Jan Beulich wrote:
>>>> On 12.06.2025 11:29, Jiqian Chen wrote:
>>>>> --- a/xen/drivers/vpci/msi.c
>>>>> +++ b/xen/drivers/vpci/msi.c
>>>>> @@ -193,6 +193,33 @@ static void cf_check mask_write(
>>>>>      msi->mask = val;
>>>>>  }
>>>>>
>>>>> +static int cf_check cleanup_msi(struct pci_dev *pdev)
>>>>> +{
>>>>> +    int rc;
>>>>> +    unsigned int end, size;
>>>>> +    struct vpci *vpci = pdev->vpci;
>>>>> +    const unsigned int msi_pos = pdev->msi_pos;
>>>>> +    const unsigned int ctrl = msi_control_reg(msi_pos);
>>>>> +
>>>>> +    if ( !msi_pos || !vpci->msi )
>>>>> +        return 0;
>>>>> +
>>>>> +    if ( vpci->msi->masking )
>>>>> +        end = msi_pending_bits_reg(msi_pos, vpci->msi->address64);
>>>>> +    else
>>>>> +        end = msi_mask_bits_reg(msi_pos, vpci->msi->address64) - 2;
>>>>> +
>>>>> +    size = end - ctrl;
>>>>> +
>>>>> +    rc = vpci_remove_registers(vpci, ctrl, size);
>>>>> +    if ( rc )
>>>>> +        return rc;
>>>>
>>>> This is a difficult one: It's not a good idea to simply return here, yet
>>>> at the same time the handling of the register we're unable to remove may
>>>> still require e.g. ...
>>>>
>>>>> +    XFREE(vpci->msi);
>>>>
>>>> ... this. There may therefore be more work required, such that in the
>>>> end we're able to ...
>>>>
>>>>> +    return vpci_add_register(pdev->vpci, vpci_hw_read16, NULL, ctrl, 2,
>>>>> +                             NULL);
>>>>
>>>> ... try this at least on a best-effort basis.
>>>>
>>>> More generally: I don't think failure here (or in other .cleanup hook
>>>> functions) can go entirely silent.
>>>
>>> Does the below meet your expectations?
>>
>> Not sure, sorry. By "more" I really meant "more" (which may just be code
>> auditing, the results of which would need writing down, but which may also
>> involve further code changes; see below).
>>
>>> rc = vpci_remove_registers(vpci, ctrl, size);
>>> if ( rc )
>>>     printk(XENLOG_ERR "%pd %pp: remove msi handlers fail rc=%d\n",
>>>            pdev->domain, &pdev->sbdf, rc);
>>>
>>> XFREE(vpci->msi);
>>
>> As I tried to indicate in my earlier reply, freeing this struct is safe
>> only if the failure above does not leave any register handlers in place
>> which still (without appropriate checking) use this struct.
>
> Hmm, but all handlers added in init_msi() use this struct.
> So there is no case where the above fails to remove all handlers and it
> is still safe to xfree this struct.

Well, in the end you say in different words what I did say, if I understand
correctly. There are several options for how to deal with that. One might be
to have those handlers recognize the lack of that pointer, and behave like ...

>>> /*
>>>  * A driver may not traverse the capability list and may assume the
>>>  * device supports MSI by default. Making the MSI control register
>>>  * read-only here ensures MSI stays disabled.
>>>  */
>>> rc = vpci_add_register(vpci, vpci_hw_read16, NULL, ctrl, 2, NULL);

... what is being put in place here (and like "no handler installed" for
other registers).

>> You're losing the earlier error here, if there was one. If this one
>> succeeds, ...
>
> But if we return the earlier error to the caller, this device will be
> unusable, so still adding this handler when the above fails to remove
> handlers is useless.

True, yet that's the case also with your code if removing the ctrl handler
failed, as then the attempt above to add another handler would also fail.
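As a rough sketch only (not the actual xen/drivers/vpci/msi.c code), such a
NULL-tolerant handler could look like the below, assuming the usual
vpci_read_t-style signature and a hypothetical emulate_control_read() helper
standing in for the existing emulation logic:

static uint32_t cf_check control_read(const struct pci_dev *pdev,
                                      unsigned int reg, void *data)
{
    const struct vpci_msi *msi = pdev->vpci->msi;

    /*
     * If cleanup freed the MSI state but this handler could not be
     * removed, behave as if no handler were installed: forward the
     * access to the hardware register instead of dereferencing a
     * stale pointer.
     */
    if ( !msi )
        return vpci_hw_read16(pdev, reg, data);

    /* Placeholder for the existing emulation path. */
    return emulate_control_read(pdev, msi);
}

Every handler registered by init_msi() would need a similar check for this
option to make the unconditional XFREE() safe.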
I don't know what the best approach is (I did suggest one above, albeit
that's not quite complete yet as to the behavior here); I merely observed
that the behavior as you have it doesn't look quite right / consistent.

Jan

>>> if ( rc )
>>>     printk(XENLOG_ERR "%pd %pp: add dummy msi ctrl handler fail rc=%d\n",
>>>            pdev->domain, &pdev->sbdf, rc);
>>>
>>> return rc;
>>
>> ... the caller would (wrongly) get success back.
>>
>> Jan
>
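One way to reconcile the two concerns raised above — not losing the first
error, yet still installing the read-only control register on a best-effort
basis — is sketched below. It assumes the NULL-tolerant handlers from the
previous sketch and is illustrative only, not the code that was eventually
committed:

static int cf_check cleanup_msi(struct pci_dev *pdev)
{
    int rc, rc2;
    unsigned int end, size;
    struct vpci *vpci = pdev->vpci;
    const unsigned int msi_pos = pdev->msi_pos;
    const unsigned int ctrl = msi_control_reg(msi_pos);

    if ( !msi_pos || !vpci->msi )
        return 0;

    if ( vpci->msi->masking )
        end = msi_pending_bits_reg(msi_pos, vpci->msi->address64);
    else
        end = msi_mask_bits_reg(msi_pos, vpci->msi->address64) - 2;

    size = end - ctrl;

    /* Don't fail silently, but don't bail out early either. */
    rc = vpci_remove_registers(vpci, ctrl, size);
    if ( rc )
        printk(XENLOG_ERR "%pd %pp: removing MSI handlers failed: %d\n",
               pdev->domain, &pdev->sbdf, rc);

    /*
     * Freeing is safe only on the assumption that any handler left behind
     * by a failed removal checks for a NULL vpci->msi (previous sketch).
     */
    XFREE(vpci->msi);

    /*
     * Best effort: make the MSI control register read-only so a driver
     * that finds the capability cannot enable MSI.
     */
    rc2 = vpci_add_register(vpci, vpci_hw_read16, NULL, ctrl, 2, NULL);
    if ( rc2 )
        printk(XENLOG_ERR "%pd %pp: adding dummy MSI ctrl handler failed: %d\n",
               pdev->domain, &pdev->sbdf, rc2);

    /* Report the first error, so the caller doesn't wrongly see success. */
    return rc ?: rc2;
}

Whether the caller should then treat a non-zero return as fatal for the
device is a separate question the thread leaves open.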