Re: [PATCH v3 1/2] xen/arm: exclude xen,reg from direct-map domU extended regions
On 6/5/25 02:45, Orzel, Michal wrote:
> On 04/06/2025 21:51, Stewart Hildebrand wrote:
>> On 6/4/25 03:00, Orzel, Michal wrote:
>>> On 03/06/2025 23:15, Stewart Hildebrand wrote:
>>>> On 5/14/25 03:31, Orzel, Michal wrote:
>>>>> On 13/05/2025 21:54, Stewart Hildebrand wrote:
>>>>>> Similarly to fba1b0974dd8, when a device is passed through to a
>>>>>> direct-map dom0less domU, the xen,reg ranges may overlap with the
>>>>>> extended regions. Remove xen,reg from direct-map domU extended regions.
>>>>>>
>>>>>> Introduce rangeset_count_ranges().
>>>>>>
>>>>>> Take the opportunity to update the comment ahead of find_memory_holes().
>>>>>>
>>>>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
>>>>>> ---
>>>>>> v2->v3:
>>>>>> * new patch
>>>>>> ---
>>>>>> xen/arch/arm/domain_build.c | 57 +++++++++++++++++++++++++++++++++----
>>>>>> xen/common/rangeset.c | 14 +++++++++
>>>>>> xen/include/xen/rangeset.h | 2 ++
>>>>>> 3 files changed, 68 insertions(+), 5 deletions(-)
>>>>>>
>>>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>>>> index b189a7cfae9f..3cdf5839bc98 100644
>>>>>> --- a/xen/arch/arm/domain_build.c
>>>>>> +++ b/xen/arch/arm/domain_build.c
>>>>>> @@ -824,15 +824,17 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
>>>>>> }
>>>>>>
>>>>>> /*
>>>>>> - * Find the holes in the Host DT which can be exposed to Dom0 as extended
>>>>>> - * regions for the special memory mappings. In order to calculate regions
>>>>>> - * we exclude every addressable memory region described by "reg" and "ranges"
>>>>>> - * properties from the maximum possible addressable physical memory range:
>>>>>> + * Find the holes in the Host DT which can be exposed to Dom0 or a direct-map
>>>>>> + * domU as extended regions for the special memory mappings. In order to
>>>>>> + * calculate regions we exclude every addressable memory region described by
>>>>>> + * "reg" and "ranges" properties from the maximum possible addressable physical
>>>>>> + * memory range:
>>>>>> * - MMIO
>>>>>> * - Host RAM
>>>>>> * - PCI aperture
>>>>>> * - Static shared memory regions, which are described by special property
>>>>>> * "xen,shared-mem"
>>>>>> + * - xen,reg mappings
>>>>>> */
>>>>>> static int __init find_memory_holes(const struct kernel_info *kinfo,
>>>>>> struct membanks *ext_regions)
>>>>>> @@ -914,6 +916,13 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
>>>>>> }
>>>>>> }
>>>>>>
>>>>>> + if ( kinfo->xen_reg_assigned )
>>>>>> + {
>>>>>> + res = rangeset_subtract(mem_holes, kinfo->xen_reg_assigned);
>>>>>> + if ( res )
>>>>>> + goto out;
>>>>>> + }
>>>>>> +
>>>>>> start = 0;
>>>>>> end = (1ULL << p2m_ipa_bits) - 1;
>>>>>> res = rangeset_report_ranges(mem_holes, PFN_DOWN(start), PFN_DOWN(end),
>>>>>> @@ -994,11 +1003,30 @@ static int __init find_domU_holes(const struct kernel_info *kinfo,
>>>>>> return res;
>>>>>> }
>>>>>>
>>>>>> +static int __init rangeset_to_membank(unsigned long s_gfn, unsigned long e_gfn,
>>>>>> + void *data)
>>>>>> +{
>>>>>> + struct membanks *membank = data;
>>>>>> + paddr_t s = pfn_to_paddr(s_gfn);
>>>>>> + paddr_t e = pfn_to_paddr(e_gfn + 1) - 1;
>>>>>> +
>>>>>> + if ( membank->nr_banks >= membank->max_banks )
>>>>>> + return 0;
>>>>>> +
>>>>>> + membank->bank[membank->nr_banks].start = s;
>>>>>> + membank->bank[membank->nr_banks].size = e - s + 1;
>>>>>> + membank->nr_banks++;
>>>>>> +
>>>>>> + return 0;
>>>>>> +}
>>>>>> +
>>>>>> static int __init find_host_extended_regions(const struct kernel_info *kinfo,
>>>>>> struct membanks *ext_regions)
>>>>>> {
>>>>>> int res;
>>>>>> struct membanks *gnttab = membanks_xzalloc(1, MEMORY);
>>>>>> + struct membanks *xen_reg = membanks_xzalloc(
>>>>>> + max(1, rangeset_count_ranges(kinfo->xen_reg_assigned)), MEMORY);
>>>>> You allocate at least 1 membank even though xen_reg_assigned may be empty
>>>>> because:
>>>>> - this function is called for hwdom - no xen,reg
>>>>> - there may be no xen,reg i.e. no passthrough
>>>>
>>>> Ah, sorry, there's no need to allocate at least 1. This can just be:
>>>>
>>>> struct membanks *xen_reg = membanks_xzalloc(
>>>> rangeset_count_ranges(kinfo->arch.xen_reg_assigned), MEMORY);
>>> No, it cannot. membanks_xzalloc() calls xzalloc_flex_struct(). If you pass 0
>>> as size, the latter will calculate offset to FAM[0]. In other words, the
>>> allocation will succeed but only for members up to FAM[0] (i.e. only for
>>> struct membanks_hdr).
>>
>> If we pass 0 as the size, these members (and their ->common.*
>> counterparts) will be allocated:
>> xen_reg->nr_banks
>> xen_reg->max_banks
>> xen_reg->type
>>
>> but there will not be allocated any space for the flexible array member:
>> xen_reg->bank[]
>>
>> Since ->max_banks will be set to 0, and ->nr_banks shouldn't exceed
>> ->max_banks, it should work. At least for the (inner) loop in
>> find_unallocated_memory(), when ->nr_banks is 0, it won't dereference
>> ->bank[]. FWIW, I also tested this with UBSAN enabled.
>>
>> I admit it does give me a weird feeling not allocating any space for a
>> member in a struct, but it's a C standard flexible array member, and the
>> array's size would be 0. We deviated relevant MISRA rule 18.7 in
>> b87697fc1a6f ("automation/eclair: fully deviate MISRA C:2012 Rules 5.7
>> and 18.7").
>>
>> With that said, I'd be happy either way (i.e. either allocating exactly
>> what's returned by rangeset_count_ranges(), or max(1,
>> rangeset_count_ranges())), but I just want to ensure we have the same
>> understanding of the technicalities.
> Hmm, why do you want to allocate memory in the first place? If
> xen_reg_assigned is NULL, we should not allocate anything. Instead you
> suggest to allocate either full structure or part of it. That's where I
> disagree.
Ah, alright, I see now. I'll conditionally allocate xen_reg, and ...
>>
>>> Also, even if you conditionally allocate for xen_reg_assigned or set NULL,
>>> in latter case you will end up with mem_banks containing NULL member.
>>> AFAICT that's not something expected by the users of mem_banks (+ it gives
>>> unneeded iteration).
>>
>> Agreed, it would be a bad idea to set xen_reg = NULL (leading to a NULL
>> member in mem_banks), because then find_unallocated_memory() would not
>> be happy.
... I'll adjust find_unallocated_memory() to skip over NULL entries.
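
For concreteness, roughly what I have in mind for v4 (untested sketch; error
handling is abbreviated, and "nr_mem_banks" below is just a stand-in for
whatever bound find_unallocated_memory() actually iterates over):

    struct membanks *xen_reg = NULL;

    if ( kinfo->xen_reg_assigned )
    {
        /* Allocate exactly as many banks as there are xen,reg ranges. */
        xen_reg = membanks_xzalloc(
            rangeset_count_ranges(kinfo->xen_reg_assigned), MEMORY);
        if ( !xen_reg )
        {
            res = -ENOMEM;
            goto out;
        }

        /* Fill the banks from the rangeset. */
        res = rangeset_report_ranges(kinfo->xen_reg_assigned, 0,
                                     PFN_DOWN((1ULL << p2m_ipa_bits) - 1),
                                     rangeset_to_membank, xen_reg);
        if ( res )
            goto out;
    }

and in find_unallocated_memory(), tolerate a NULL entry:

    for ( i = 0; i < nr_mem_banks; i++ )
    {
        if ( !mem_banks[i] )
            continue;
        /* existing per-bank handling */
    }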