
Re: [PATCH v2 2/7] xen/arm: Wrap shared memory mapping code in one function


  • To: Luca Fancellu <luca.fancellu@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Michal Orzel <michal.orzel@xxxxxxx>
  • Date: Thu, 16 May 2024 15:19:58 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, "Volodymyr Babchuk" <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 16 May 2024 13:20:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Luca,

On 15/05/2024 16:26, Luca Fancellu wrote:
> 
> 
> Wrap the code and logic that calls assign_shared_memory and
> map_regions_p2mt into a new function, 'handle_shared_mem_bank';
> it will become useful later when the code allows the user to
> omit the host physical address.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@xxxxxxx>
> ---
> v2 changes:
>  - add blank line, move owner_dom_io computation inside
>    handle_shared_mem_bank in order to reduce the argument count,
>    remove unneeded BUG_ON(). (Michal)
> ---
>  xen/arch/arm/static-shmem.c | 87 ++++++++++++++++++++++---------------
>  1 file changed, 53 insertions(+), 34 deletions(-)
> 
> diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
> index 0afc86c43f85..8a14d120690c 100644
> --- a/xen/arch/arm/static-shmem.c
> +++ b/xen/arch/arm/static-shmem.c
> @@ -181,6 +181,53 @@ append_shm_bank_to_domain(struct kernel_info *kinfo, paddr_t start,
>      return 0;
>  }
> 
> +static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase,
> +                                         const char *role_str,
> +                                         const struct membank *shm_bank)
> +{
> +    bool owner_dom_io = true;
> +    paddr_t pbase, psize;
> +    int ret;
> +
> +    pbase = shm_bank->start;
> +    psize = shm_bank->size;
> +
> +    /*
> +     * "role" property is optional and if it is defined explicitly,
> +     * then the owner domain is not the default "dom_io" domain.
> +     */
> +    if ( role_str != NULL )
> +        owner_dom_io = false;
> +
> +    /*
> +     * DOMID_IO is a fake domain and is not described in the Device-Tree.
> +     * Therefore when the owner of the shared region is DOMID_IO, we will
> +     * only find the borrowers.
> +     */
> +    if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) ||
> +         (!owner_dom_io && strcmp(role_str, "owner") == 0) )
> +    {
> +        /*
> +         * We found the first borrower of the region; the owner was not
> +         * specified, so the region should be assigned to dom_io.
> +         */
> +        ret = assign_shared_memory(owner_dom_io ? dom_io : d, gbase, shm_bank);
> +        if ( ret )
> +            return ret;
> +    }
> +
> +    if ( owner_dom_io || (strcmp(role_str, "borrower") == 0) )
> +    {
> +        /* Set up P2M foreign mapping for borrower domain. */
> +        ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize),
> +                               _mfn(PFN_UP(pbase)), p2m_map_foreign_rw);
> +        if ( ret )
> +            return ret;
> +    }
> +
> +    return 0;
> +}
> +
>  int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>                         const struct dt_device_node *node)
>  {
> @@ -195,9 +242,8 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>          paddr_t gbase, pbase, psize;
>          int ret = 0;
>          unsigned int i;
> -        const char *role_str;
> +        const char *role_str = NULL;
>          const char *shm_id;
> -        bool owner_dom_io = true;
> 
>          if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") )
>              continue;
> @@ -238,39 +284,12 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>                  return -EINVAL;
>              }
> 
> -        /*
> -         * "role" property is optional and if it is defined explicitly,
> -         * then the owner domain is not the default "dom_io" domain.
> -         */
> -        if ( dt_property_read_string(shm_node, "role", &role_str) == 0 )
> -            owner_dom_io = false;
> +        /* "role" property is optional */
> +        dt_property_read_string(shm_node, "role", &role_str);
This now violates a MISRA rule saying that if a function returns a value, this
value needs to be checked.
I think you should check whether the return value is non-zero and, if so,
assign NULL to role_str (thus removing the initialization from the definition).
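
Something along these lines (just a sketch on my side, assuming
dt_property_read_string() keeps returning a non-zero error code when the
property is absent):

        const char *role_str;

        ...

        /*
         * "role" property is optional; treat a missing property as
         * "no role specified".
         */
        if ( dt_property_read_string(shm_node, "role", &role_str) != 0 )
            role_str = NULL;

That way the return value is checked and the default stays explicit at the
point of use.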

Other than that:
Reviewed-by: Michal Orzel <michal.orzel@xxxxxxx>

~Michal



 

