
Re: [PATCH v4 1/2] xen/arm: exclude xen,reg from direct-map domU extended regions


  • To: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: "Orzel, Michal" <michal.orzel@xxxxxxx>
  • Date: Wed, 11 Jun 2025 15:53:04 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Wed, 11 Jun 2025 13:53:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 11/06/2025 15:49, Stewart Hildebrand wrote:
> On 6/10/25 03:27, Orzel, Michal wrote:
>> On 09/06/2025 20:34, Stewart Hildebrand wrote:
>>> Similarly to fba1b0974dd8, when a device is passed through to a
>>> direct-map dom0less domU, the xen,reg ranges may overlap with the
>>> extended regions. Remove xen,reg from direct-map domU extended regions.
>>>
>>> Take the opportunity to update the comment ahead of find_memory_holes().
>>>
>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
>>> ---
>>> v3->v4:
>>> * conditionally allocate xen_reg
>>> * use rangeset_report_ranges()
>>> * make find_unallocated_memory() cope with NULL entry
>>>
>>> v2->v3:
>>> * new patch
>>> ---
>>>  xen/arch/arm/domain_build.c           | 77 +++++++++++++++++++++++++--
>>>  xen/common/device-tree/domain-build.c |  5 ++
>>>  2 files changed, 77 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index 590f38e52053..6632191cf602 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -792,15 +792,17 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
>>>  }
>>>  
>>>  /*
>>> - * Find the holes in the Host DT which can be exposed to Dom0 as extended
>>> - * regions for the special memory mappings. In order to calculate regions
>>> - * we exclude every addressable memory region described by "reg" and "ranges"
>>> - * properties from the maximum possible addressable physical memory range:
>>> + * Find the holes in the Host DT which can be exposed to Dom0 or a direct-map
>>> + * domU as extended regions for the special memory mappings. In order to
>>> + * calculate regions we exclude every addressable memory region described by
>>> + * "reg" and "ranges" properties from the maximum possible addressable physical
>>> + * memory range:
>>>   * - MMIO
>>>   * - Host RAM
>>>   * - PCI aperture
>>>   * - Static shared memory regions, which are described by special property
>>>   *   "xen,shared-mem"
>>> + * - xen,reg mappings
>>>   */
>>>  static int __init find_memory_holes(const struct kernel_info *kinfo,
>>>                                      struct membanks *ext_regions)
>>> @@ -882,6 +884,13 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
>>>          }
>>>      }
>>>  
>>> +    if ( kinfo->xen_reg_assigned )
>>> +    {
>>> +        res = rangeset_subtract(mem_holes, kinfo->xen_reg_assigned);
>>> +        if ( res )
>>> +            goto out;
>>> +    }
>>> +
>>>      start = 0;
>>>      end = (1ULL << p2m_ipa_bits) - 1;
>>>      res = rangeset_report_ranges(mem_holes, PFN_DOWN(start), PFN_DOWN(end),
>>> @@ -962,11 +971,48 @@ static int __init find_domU_holes(const struct kernel_info *kinfo,
>>>      return res;
>>>  }
>>>  
>>> +static int __init count(unsigned long s, unsigned long e, void *data)
>>> +{
>>> +    unsigned int *cnt = data;
>>> +
>>> +    (*cnt)++;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static int __init rangeset_to_membank(unsigned long s_gfn, unsigned long e_gfn,
>>> +                                      void *data)
>>> +{
>>> +    struct membanks *membank = data;
>>> +    paddr_t s = pfn_to_paddr(s_gfn);
>>> +    paddr_t e = pfn_to_paddr(e_gfn + 1) - 1;
>> Why do you subtract 1 here if ...
>>
>>> +
>>> +    if ( membank->nr_banks >= membank->max_banks )
>>> +        return 0;
>>> +
>>> +    membank->bank[membank->nr_banks].start = s;
>>> +    membank->bank[membank->nr_banks].size = e - s + 1;
>> you add it again here.
> 
> To be consistent with add_ext_regions() and add_hwdom_free_regions(),
> but I suppose there's no need for that. I'll drop the extraneous -1 and
> +1.
> 
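FWIW, with the extraneous -1/+1 dropped, I'd expect the helper to end up
looking something like this (untested sketch):

static int __init rangeset_to_membank(unsigned long s_gfn, unsigned long e_gfn,
                                      void *data)
{
    struct membanks *membank = data;
    paddr_t s = pfn_to_paddr(s_gfn);
    paddr_t e = pfn_to_paddr(e_gfn + 1); /* exclusive end of [s_gfn, e_gfn] */

    if ( membank->nr_banks >= membank->max_banks )
        return 0;

    membank->bank[membank->nr_banks].start = s;
    membank->bank[membank->nr_banks].size = e - s;
    membank->nr_banks++;

    return 0;
}
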
>>> +    membank->nr_banks++;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>>  static int __init find_host_extended_regions(const struct kernel_info *kinfo,
>>>                                               struct membanks *ext_regions)
>>>  {
>>>      int res;
>>>      struct membanks *gnttab = membanks_xzalloc(1, MEMORY);
>>> +    struct membanks *xen_reg =
>>> +        kinfo->xen_reg_assigned
>>> +        ? ({
>>> +            unsigned int xen_reg_cnt = 0;
>>> +
>>> +            rangeset_report_ranges(kinfo->xen_reg_assigned, 0,
>>> +                                   PFN_DOWN((1ULL << p2m_ipa_bits) - 1), count,
>>> +                                   &xen_reg_cnt);
>> This does not look really nice with ({. Why can't we create a helper
>> function to report the count for xen_reg_assigned and call it here? You
>> could then also open code your 'count' function that is not used by
>> anything else and is quite ambiguous.
> 
> If I'm reading this right, I think you're suggesting something like this
> (in domain_build.c):
> 
> static unsigned int __init count_ranges(struct rangeset *r)
> {
>     unsigned int xen_reg_cnt = 0;
> 
>     rangeset_report_ranges(r,
>                            0,
>                            PFN_DOWN((1ULL << p2m_ipa_bits) - 1),
>                            ({
>                                int count(unsigned long s, unsigned long e, void *data)
>                                {
>                                    unsigned int *cnt = data;
> 
>                                    (*cnt)++;
> 
>                                    return 0;
>                                }
>                                count;
>                            }),
>                            &xen_reg_cnt);
> 
>     return xen_reg_cnt;
> }
> 
> ...
> 
>     struct membanks *xen_reg =
>         kinfo->xen_reg_assigned
>         ? membanks_xzalloc(count_ranges(kinfo->xen_reg_assigned), MEMORY)
>         : NULL;
> 
> 
> However, the open-coded/anonymous count function, aside from being a
> compiler extension, doesn't seem to play well with __init. As written,
> this doesn't link:
Sorry, I don't know why I wrote to open-code count(). In conclusion, my remark
was to place this code in a separate function so as to avoid ({ in
find_host_extended_regions(). So there will be a count() helper and
count_ranges().
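I.e. something along these lines (untested sketch, reusing the names from
your example above):

static int __init count(unsigned long s, unsigned long e, void *data)
{
    unsigned int *cnt = data;

    (*cnt)++;

    return 0;
}

static unsigned int __init count_ranges(struct rangeset *r)
{
    unsigned int cnt = 0;

    /*
     * rangeset_report_ranges() invokes count() once per contiguous range,
     * so cnt ends up holding the number of ranges in r.
     */
    rangeset_report_ranges(r, 0, PFN_DOWN((1ULL << p2m_ipa_bits) - 1),
                           count, &cnt);

    return cnt;
}

and then, in find_host_extended_regions():

    struct membanks *xen_reg =
        kinfo->xen_reg_assigned
        ? membanks_xzalloc(count_ranges(kinfo->xen_reg_assigned), MEMORY)
        : NULL;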

~Michal