
Re: [PATCH v3 1/2] xen/arm: exclude xen,reg from direct-map domU extended regions


  • To: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: "Orzel, Michal" <michal.orzel@xxxxxxx>
  • Date: Thu, 5 Jun 2025 08:45:40 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 05 Jun 2025 06:45:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 04/06/2025 21:51, Stewart Hildebrand wrote:
> On 6/4/25 03:00, Orzel, Michal wrote:
>> On 03/06/2025 23:15, Stewart Hildebrand wrote:
>>> On 5/14/25 03:31, Orzel, Michal wrote:
>>>> On 13/05/2025 21:54, Stewart Hildebrand wrote:
>>>>> Similarly to fba1b0974dd8, when a device is passed through to a
>>>>> direct-map dom0less domU, the xen,reg ranges may overlap with the
>>>>> extended regions. Remove xen,reg from direct-map domU extended regions.
>>>>>
>>>>> Introduce rangeset_count_ranges().
>>>>>
>>>>> Take the opportunity to update the comment ahead of find_memory_holes().
>>>>>
>>>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
>>>>> ---
>>>>> v2->v3:
>>>>> * new patch
>>>>> ---
>>>>>  xen/arch/arm/domain_build.c | 57 +++++++++++++++++++++++++++++++++----
>>>>>  xen/common/rangeset.c       | 14 +++++++++
>>>>>  xen/include/xen/rangeset.h  |  2 ++
>>>>>  3 files changed, 68 insertions(+), 5 deletions(-)
>>>>>
>>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>>> index b189a7cfae9f..3cdf5839bc98 100644
>>>>> --- a/xen/arch/arm/domain_build.c
>>>>> +++ b/xen/arch/arm/domain_build.c
>>>>> @@ -824,15 +824,17 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
>>>>>  }
>>>>>  
>>>>>  /*
>>>>> - * Find the holes in the Host DT which can be exposed to Dom0 as extended
>>>>> - * regions for the special memory mappings. In order to calculate regions
>>>>> - * we exclude every addressable memory region described by "reg" and "ranges"
>>>>> - * properties from the maximum possible addressable physical memory range:
>>>>> + * Find the holes in the Host DT which can be exposed to Dom0 or a direct-map
>>>>> + * domU as extended regions for the special memory mappings. In order to
>>>>> + * calculate regions we exclude every addressable memory region described by
>>>>> + * "reg" and "ranges" properties from the maximum possible addressable physical
>>>>> + * memory range:
>>>>>   * - MMIO
>>>>>   * - Host RAM
>>>>>   * - PCI aperture
>>>>>  * - Static shared memory regions, which are described by special property
>>>>>   *   "xen,shared-mem"
>>>>> + * - xen,reg mappings
>>>>>   */
>>>>>  static int __init find_memory_holes(const struct kernel_info *kinfo,
>>>>>                                      struct membanks *ext_regions)
>>>>> @@ -914,6 +916,13 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
>>>>>          }
>>>>>      }
>>>>>  
>>>>> +    if ( kinfo->xen_reg_assigned )
>>>>> +    {
>>>>> +        res = rangeset_subtract(mem_holes, kinfo->xen_reg_assigned);
>>>>> +        if ( res )
>>>>> +            goto out;
>>>>> +    }
>>>>> +
>>>>>      start = 0;
>>>>>      end = (1ULL << p2m_ipa_bits) - 1;
>>>>>      res = rangeset_report_ranges(mem_holes, PFN_DOWN(start), PFN_DOWN(end),
>>>>> @@ -994,11 +1003,30 @@ static int __init find_domU_holes(const struct kernel_info *kinfo,
>>>>>      return res;
>>>>>  }
>>>>>  
>>>>> +static int __init rangeset_to_membank(unsigned long s_gfn, unsigned long e_gfn,
>>>>> +                                      void *data)
>>>>> +{
>>>>> +    struct membanks *membank = data;
>>>>> +    paddr_t s = pfn_to_paddr(s_gfn);
>>>>> +    paddr_t e = pfn_to_paddr(e_gfn + 1) - 1;
>>>>> +
>>>>> +    if ( membank->nr_banks >= membank->max_banks )
>>>>> +        return 0;
>>>>> +
>>>>> +    membank->bank[membank->nr_banks].start = s;
>>>>> +    membank->bank[membank->nr_banks].size = e - s + 1;
>>>>> +    membank->nr_banks++;
>>>>> +
>>>>> +    return 0;
>>>>> +}
>>>>> +
>>>>>  static int __init find_host_extended_regions(const struct kernel_info *kinfo,
>>>>>                                               struct membanks *ext_regions)
>>>>>  {
>>>>>      int res;
>>>>>      struct membanks *gnttab = membanks_xzalloc(1, MEMORY);
>>>>> +    struct membanks *xen_reg = membanks_xzalloc(
>>>>> +        max(1, rangeset_count_ranges(kinfo->xen_reg_assigned)), MEMORY);
>>>> You allocate at least 1 membank even though xen_reg_assigned may be empty 
>>>> because:
>>>>  - this function is called for hwdom - no xen,reg
>>>>  - there may be no xen,reg i.e. no passthrough
>>>
>>> Ah, sorry, there's no need to allocate at least 1. This can just be:
>>>
>>>     struct membanks *xen_reg = membanks_xzalloc(
>>>         rangeset_count_ranges(kinfo->arch.xen_reg_assigned), MEMORY);
>> No, it cannot. membanks_xzalloc() calls xzalloc_flex_struct(). If you pass 0
>> as the size, the latter will calculate the offset to FAM[0]. In other words,
>> the allocation will succeed, but only for the members up to FAM[0] (i.e. only
>> for struct membanks_hdr).
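
(To illustrate what I meant above with a standalone example, using simplified
types rather than the Xen structures: with a count of 0, only the header in
front of the flexible array member gets storage; bank[] itself gets none.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <stddef.h>

    struct bank { unsigned long start, size; };

    struct membanks_demo {
        unsigned int nr_banks;
        unsigned int max_banks;
        struct bank bank[];              /* flexible array member */
    };

    int main(void)
    {
        /* offsetof(..., bank[0]) is the size of the header alone. */
        struct membanks_demo *m =
            calloc(1, offsetof(struct membanks_demo, bank[0]));

        if ( !m )
            return 1;

        m->max_banks = 0;                /* nr_banks must never exceed this */
        printf("allocated %zu bytes, room for %u banks\n",
               offsetof(struct membanks_demo, bank[0]), m->max_banks);

        free(m);
        return 0;
    }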
> 
> If we pass 0 as the size, these members (and their ->common.*
> counterparts) will be allocated:
> xen_reg->nr_banks
> xen_reg->max_banks
> xen_reg->type
> 
> but no space will be allocated for the flexible array member:
> xen_reg->bank[]
> 
> Since ->max_banks will be set to 0, and ->nr_banks shouldn't exceed
> ->max_banks, it should work. At least for the (inner) loop in
> find_unallocated_memory(), when ->nr_banks is 0, it won't dereference
> ->bank[]. FWIW, I also tested this with UBSAN enabled.
> 
> I admit it does give me a weird feeling not allocating any space for a
> member in a struct, but it's a C standard flexible array member, and the
> array's size would be 0. We deviated the relevant MISRA rule 18.7 in
> b87697fc1a6f ("automation/eclair: fully deviate MISRA C:2012 Rules 5.7
> and 18.7").
> 
> With that said, I'd be happy either way (i.e. either allocating exactly
> what's returned by rangeset_count_ranges(), or max(1,
> rangeset_count_ranges())), but I just want to ensure we have the same
> understanding of the technicalities.
Hmm, why do you want to allocate memory in the first place? If xen_reg_assigned
is NULL, we should not allocate anything. Instead, you suggest allocating either
the full structure or only part of it. That's where I disagree.
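
Roughly, this is the direction I have in mind (untested sketch, reusing the
names from the quoted patch; error handling and the callers are elided, and
xen_reg would then be handed to the unallocated-memory search only when it
actually exists):

    struct membanks *xen_reg = NULL;

    if ( kinfo->xen_reg_assigned )
    {
        /* Allocate only when there are xen,reg ranges to describe. */
        xen_reg = membanks_xzalloc(rangeset_count_ranges(kinfo->xen_reg_assigned),
                                   MEMORY);
        if ( !xen_reg )
            return -ENOMEM;
    }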

> 
>> Also, even if you conditionally allocate for xen_reg_assigned or set NULL, in
>> the latter case you will end up with mem_banks containing a NULL member.
>> AFAICT that's not something expected by the users of mem_banks (+ it gives an
>> unneeded iteration).
> 
> Agreed, it would be a bad idea to set xen_reg = NULL (leading to a NULL
> member in mem_banks), because then find_unallocated_memory() would not
> be happy.

~Michal
