[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v1 3/3] vpci/msi: Remove registers when init_msi() fails


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Date: Mon, 31 Mar 2025 09:43:11 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Huang, Ray" <Ray.Huang@xxxxxxx>, "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Delivery-date: Mon, 31 Mar 2025 09:43:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v1 3/3] vpci/msi: Remove registers when init_msi() fails

On 2025/3/31 16:53, Roger Pau Monné wrote:
> On Mon, Mar 31, 2025 at 08:13:50AM +0000, Chen, Jiqian wrote:
>> On 2025/3/27 20:44, Roger Pau Monné wrote:
>>> On Thu, Mar 27, 2025 at 03:32:14PM +0800, Jiqian Chen wrote:
>>>> When init_msi() fails, the new code will try to hide the MSI
>>>> capability, so it can no longer rely on vpci_deassign_device()
>>>> to remove all MSI-related registers; those registers must be
>>>> cleaned up in the failure path of init_msi().
>>>>
>>>> To do that, use a vpci_register array to record all MSI
>>>> registers and call vpci_remove_register() to remove them.
>>>
>>> As I'm just seeing 3 patches on the series, isn't there one missing
>>> for MSI-X at least?
>> No, because init_msix() only calls vpci_add_register() once, so there is no
>> need to remove registers when it fails.
> 
> Seems a bit fragile, what about if there's further code added to
> init_msix() that could return an error after the vpci_add_register()
> call?  It would be safer to have a cleanup function that removes the
> config space handlers, plus the MMIO one (see the call to
> register_mmio_handler()), and the addition to the
> d->arch.hvm.msix_tables list.
I was only talking about the current implementation of init_msix(), which does 
not need a cleanup function.
But if you want such a function, even though it is not needed now, I will add 
it in the next version.

> 
>>>
>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@xxxxxxx>
>>>>  
>>>>      if ( pdev->vpci->msi->masking )
>>>>      {
>>>> +        offset = msi_mask_bits_reg(pos, pdev->vpci->msi->address64);
>>>>          ret = vpci_add_register(pdev->vpci, mask_read, mask_write,
>>>> -                                msi_mask_bits_reg(pos,
>>>> -                                                  pdev->vpci->msi->address64),
>>>> -                                4, pdev->vpci->msi);
>>>> +                                offset, 4, pdev->vpci->msi);
>>>>          if ( ret )
>>>> -            return ret;
>>>> +            goto fail;
>>>> +        registers[reg_index].offset = offset;
>>>> +        registers[reg_index++].size = 4;
>>>
>>> As commented on the previous patch, I don't like much the usage of
>>> this registers array to store which handlers have been added.  It
>>> would be best if you just removed every possible handler that could be
>>> added, without keeping track of them.
>> Make sense, it will indeed be simpler if it is fine to remove all possible 
>> registers.
>>
>>>
>>> Thinking about it, do we maybe need a helper vcpi function that wipes
>>> all handlers from a specific range?  I think it could be helpful here,
>>> as the size of the capabilities is well-known, so on error it would be
>>> easier to just call such a generic handler to wipe all hooks added to
>>> the region covered by the failing capability.
>> But I am not sure that helper function is suitable for all capabilities.
>> Take ReBar: its structure can range from 12 bytes long (for a single BAR) to
>> 52 bytes long (for all six BARs).
>> If a device supports ReBar and only has a single resizable BAR, does the
>> hardware still reserve the range from byte 13 to byte 52 for that device?
> 
> No, we would need to fetch the size of the capability in the cleanup
> function, or figure it otherwise.  Note the same applies to MSI
> capability, which has a variable size depending on whether 64bit
> addresses and masking is supported.
Got it, I originally thought you wanted a generic helper function.
But it seems instead that each capability would have its own separate cleanup 
function.

> 
>> I mean, if I remove the registers (in the range 13 to 52), is it possible
>> that I delete registers belonging to other capabilities?
> 
> It is, indeed.  You need to know or calculate the size of the
> capability to be removed, but that's likely easier and more robust
> than keeping an array with the list of added registers?
Right.

> 
> Thanks, Roger.

-- 
Best regards,
Jiqian Chen.
