
Re: [PATCH 2/7] xen/arm: Wrap shared memory mapping code in one function


  • To: Luca Fancellu <Luca.Fancellu@xxxxxxx>
  • From: Michal Orzel <michal.orzel@xxxxxxx>
  • Date: Tue, 7 May 2024 16:08:15 +0200
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Tue, 07 May 2024 14:08:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 07/05/2024 15:57, Luca Fancellu wrote:
> 
> 
> Hi Michal,
> 
>>>
>>> +static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase,
>>> +                                         bool owner_dom_io,
>>> +                                         const char *role_str,
>>> +                                         const struct membank *shm_bank)
>>> +{
>>> +    paddr_t pbase, psize;
>>> +    int ret;
>>> +
>>> +    BUG_ON(!shm_bank);
>> not needed
>>
>>> +
>>> +    pbase = shm_bank->start;
>>> +    psize = shm_bank->size;
>> please add empty line here
> 
> Will do
>>>
>>> int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>>>                        const struct dt_device_node *node)
>>> {
>>> @@ -249,32 +290,10 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>>>         if ( dt_property_read_string(shm_node, "role", &role_str) == 0 )
>>>             owner_dom_io = false;
>> Looking at owner_dom_io, why don't you move parsing the role and setting
>> owner_dom_io accordingly into handle_shared_mem_bank()?
> 
> I think I wanted to keep all dt_* functions on the same level inside
> process_shm; otherwise yes, I could pass down shm_node and read role_str in
> handle_shared_mem_bank, or I could derive owner_dom_io from whether role_str
> was passed, something like:
> 
> role_str = NULL;
> dt_property_read_string(shm_node, "role", &role_str);
> 
> [inside handle_shared_mem_bank]:
> if ( role_str )
>     owner_dom_io = false;
> 
> and pass only role_str to handle_shared_mem_bank.
> 
> Is this comment meant to reduce the number of parameters passed? I guess
> it's not for where we call
In this series, as well as the previous one, you limit the number of arguments
passed to quite a few functions, so naturally I would expect the same to be
done here. owner_dom_io is used only by handle_shared_mem_bank, so it makes
more sense to move the parsing into that function so that it is self-contained.

~Michal



 

