
Re: [PATCH v3 1/2] xen/arm: exclude xen,reg from direct-map domU extended regions


  • To: "Orzel, Michal" <michal.orzel@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Wed, 4 Jun 2025 15:51:53 -0400
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, "Volodymyr Babchuk" <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, "Jan Beulich" <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Wed, 04 Jun 2025 19:52:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 6/4/25 03:00, Orzel, Michal wrote:
> On 03/06/2025 23:15, Stewart Hildebrand wrote:
>> On 5/14/25 03:31, Orzel, Michal wrote:
>>> On 13/05/2025 21:54, Stewart Hildebrand wrote:
>>>> Similarly to fba1b0974dd8, when a device is passed through to a
>>>> direct-map dom0less domU, the xen,reg ranges may overlap with the
>>>> extended regions. Remove xen,reg from direct-map domU extended regions.
>>>>
>>>> Introduce rangeset_count_ranges().
>>>>
>>>> Take the opportunity to update the comment ahead of find_memory_holes().
>>>>
>>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
>>>> ---
>>>> v2->v3:
>>>> * new patch
>>>> ---
>>>>  xen/arch/arm/domain_build.c | 57 +++++++++++++++++++++++++++++++++----
>>>>  xen/common/rangeset.c       | 14 +++++++++
>>>>  xen/include/xen/rangeset.h  |  2 ++
>>>>  3 files changed, 68 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>> index b189a7cfae9f..3cdf5839bc98 100644
>>>> --- a/xen/arch/arm/domain_build.c
>>>> +++ b/xen/arch/arm/domain_build.c
>>>> @@ -824,15 +824,17 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
>>>>  }
>>>>  
>>>>  /*
>>>> - * Find the holes in the Host DT which can be exposed to Dom0 as extended
>>>> - * regions for the special memory mappings. In order to calculate regions
>>>> - * we exclude every addressable memory region described by "reg" and "ranges"
>>>> - * properties from the maximum possible addressable physical memory range:
>>>> + * Find the holes in the Host DT which can be exposed to Dom0 or a direct-map
>>>> + * domU as extended regions for the special memory mappings. In order to
>>>> + * calculate regions we exclude every addressable memory region described by
>>>> + * "reg" and "ranges" properties from the maximum possible addressable physical
>>>> + * memory range:
>>>>   * - MMIO
>>>>   * - Host RAM
>>>>   * - PCI aperture
>>>>   * - Static shared memory regions, which are described by special property
>>>>   *   "xen,shared-mem"
>>>> + * - xen,reg mappings
>>>>   */
>>>>  static int __init find_memory_holes(const struct kernel_info *kinfo,
>>>>                                      struct membanks *ext_regions)
>>>> @@ -914,6 +916,13 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
>>>>          }
>>>>      }
>>>>  
>>>> +    if ( kinfo->xen_reg_assigned )
>>>> +    {
>>>> +        res = rangeset_subtract(mem_holes, kinfo->xen_reg_assigned);
>>>> +        if ( res )
>>>> +            goto out;
>>>> +    }
>>>> +
>>>>      start = 0;
>>>>      end = (1ULL << p2m_ipa_bits) - 1;
>>>>      res = rangeset_report_ranges(mem_holes, PFN_DOWN(start), PFN_DOWN(end),
>>>> @@ -994,11 +1003,30 @@ static int __init find_domU_holes(const struct kernel_info *kinfo,
>>>>      return res;
>>>>  }
>>>>  
>>>> +static int __init rangeset_to_membank(unsigned long s_gfn, unsigned long e_gfn,
>>>> +                                      void *data)
>>>> +{
>>>> +    struct membanks *membank = data;
>>>> +    paddr_t s = pfn_to_paddr(s_gfn);
>>>> +    paddr_t e = pfn_to_paddr(e_gfn + 1) - 1;
>>>> +
>>>> +    if ( membank->nr_banks >= membank->max_banks )
>>>> +        return 0;
>>>> +
>>>> +    membank->bank[membank->nr_banks].start = s;
>>>> +    membank->bank[membank->nr_banks].size = e - s + 1;
>>>> +    membank->nr_banks++;
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>>  static int __init find_host_extended_regions(const struct kernel_info *kinfo,
>>>>                                               struct membanks *ext_regions)
>>>>  {
>>>>      int res;
>>>>      struct membanks *gnttab = membanks_xzalloc(1, MEMORY);
>>>> +    struct membanks *xen_reg = membanks_xzalloc(
>>>> +        max(1, rangeset_count_ranges(kinfo->xen_reg_assigned)), MEMORY);
>>> You allocate at least 1 membank even though xen_reg_assigned may be empty because:
>>>  - this function is called for hwdom - no xen,reg
>>>  - there may be no xen,reg i.e. no passthrough
>>
>> Ah, sorry, there's no need to allocate at least 1. This can just be:
>>
>>     struct membanks *xen_reg = membanks_xzalloc(
>>         rangeset_count_ranges(kinfo->arch.xen_reg_assigned), MEMORY);
> No, it cannot. membanks_xzalloc() calls xzalloc_flex_struct(). If you pass 0
> as size, the latter will calculate offset to FAM[0]. In other words, the
> allocation will succeed but only for members up to FAM[0] (i.e. only for struct
> membanks_hdr).

If we pass 0 as the size, these members (and their ->common.*
counterparts) will be allocated:
xen_reg->nr_banks
xen_reg->max_banks
xen_reg->type

but no space will be allocated for the flexible array member:
xen_reg->bank[]

Since ->max_banks will be set to 0, and ->nr_banks shouldn't exceed
->max_banks, it should work. In particular, the (inner) loop in
find_unallocated_memory() won't dereference ->bank[] when ->nr_banks
is 0. FWIW, I also tested this with UBSAN enabled.
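
To spell out the pattern I have in mind, here's a stripped-down,
standalone sketch (plain calloc() instead of xzalloc_flex_struct();
struct membanks_sketch and struct bank_sketch are made-up names for
illustration, not the actual Xen definitions):

#include <stdio.h>
#include <stdlib.h>

struct bank_sketch {
    unsigned long start, size;
};

struct membanks_sketch {
    unsigned int nr_banks;      /* entries currently in use */
    unsigned int max_banks;     /* entries there is room for */
    struct bank_sketch bank[];  /* flexible array member */
};

int main(void)
{
    /* Analogous to asking for 0 flexible array elements: only the
     * header is allocated, no space for bank[]. */
    struct membanks_sketch *m = calloc(1, sizeof(*m));

    if ( !m )
        return 1;

    m->max_banks = 0;   /* ->nr_banks may never grow past this */

    /* Consumers iterate up to ->nr_banks, so bank[] is never touched
     * while the count is zero. */
    for ( unsigned int i = 0; i < m->nr_banks; i++ )
        printf("bank %u: start %#lx size %#lx\n",
               i, m->bank[i].start, m->bank[i].size);

    free(m);

    return 0;
}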

I admit it feels a bit odd not to allocate any space for a struct
member, but it's a C-standard flexible array member whose size would
simply be 0. We deviated the relevant MISRA rule 18.7 in b87697fc1a6f
("automation/eclair: fully deviate MISRA C:2012 Rules 5.7 and 18.7").

With that said, I'd be happy either way (i.e. allocating either exactly
what rangeset_count_ranges() returns, or max(1,
rangeset_count_ranges())); I just want to make sure we have the same
understanding of the technicalities.

> Also, even if you conditionally allocate for xen_reg_assigned or set NULL, in
> latter case you will end up with mem_banks containing NULL member. AFAICT that's
> not something expected by the users of mem_banks (+ it gives unneeded iteration).

Agreed, it would be a bad idea to set xen_reg = NULL (leading to a NULL
member in mem_banks), because then find_unallocated_memory() would not
be happy.
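
Just to illustrate that concern, a hypothetical consumer like the one
below (reusing struct membanks_sketch from the sketch above; this is
not the actual find_unallocated_memory() code) would blow up on a NULL
entry:

/* Walks an array of membanks pointers and trusts every entry to be
 * non-NULL, the way the current mem_banks users do. */
static unsigned int total_banks(struct membanks_sketch *mem_banks[],
                                unsigned int nr)
{
    unsigned int total = 0;

    for ( unsigned int i = 0; i < nr; i++ )
        total += mem_banks[i]->nr_banks;  /* NULL entry => crash here */

    return total;
}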
