Re: [PATCH v2 2/2] xen/arm: skip holes in physical address space when setting up frametable
On 01-May-26 17:00, Luca Fancellu wrote:
> Hi Michal,
>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index faef0efb327c..7297cca01551 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -63,7 +63,7 @@ void __init setup_mm(void)
>>
>> setup_mm_helper();
>>
>> - setup_frametable_mappings(ram_start, ram_end);
>> + init_frametable(ram_start);
>
> I think that now ram_end and bank_end can be removed
Right, will do.
>
>>
>> init_staticmem_pages();
>> init_sharedmem_pages();
>> diff --git a/xen/arch/arm/mmu/mm.c b/xen/arch/arm/mmu/mm.c
>> index 6604f3bf4e6a..dfc888c8ee0e 100644
>> --- a/xen/arch/arm/mmu/mm.c
>> +++ b/xen/arch/arm/mmu/mm.c
>> @@ -6,18 +6,45 @@
>> #include <xen/mm.h>
>> #include <xen/mm-frame.h>
>> #include <xen/pdx.h>
>> +#include <xen/sizes.h>
>> #include <xen/string.h>
>>
>> -/* Map a frame table to cover physical addresses ps through pe */
>> -void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>> +static void __init init_frametable_chunk(unsigned long pdx_s,
>> + unsigned long pdx_e)
>> {
>> - unsigned long nr_pdxs = mfn_to_pdx(mfn_add(maddr_to_mfn(pe), -1)) -
>> - mfn_to_pdx(maddr_to_mfn(ps)) + 1;
>> - unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
>> - mfn_t base_mfn;
>> - const unsigned long mapping_size = frametable_size < MB(32) ? MB(2)
>> - : MB(32);
>> + unsigned long nr_pdxs = pdx_e - pdx_s;
>> + unsigned long chunk_size = nr_pdxs * sizeof(struct page_info);
>> + unsigned long virt;
>> int rc;
>> + mfn_t base_mfn;
>> +
>> + /*
>> + * In-loop chunks span whole PDX groups, which are always page-size
>> + * aligned. The last chunk ending at max_pdx may not be, so round up.
>> + */
>> + chunk_size = ROUNDUP(chunk_size, PAGE_SIZE);
>> +
>> + /*
>> + * Align the allocation to the contiguous mapping size so that
>> + * map_pages_to_xen() can use the contiguous bit.
>> + */
>> + base_mfn = alloc_boot_pages(chunk_size >> PAGE_SHIFT,
>> + MB(32) >> PAGE_SHIFT);
>
> This fixed 32MB alignment feels like a bit more than we need. What if,
> for example, the chunk is less than 32MB? If we used a smaller alignment
> for chunks below 32MB, we might make alloc_boot_pages' job easier; after
> all, if the chunk is less than 32MB it won't get the contiguous bit anyway.
Good point. On Arm64 this affects any chunk spanning fewer than 3 valid PDX
groups (~14MB per group). I'll use 32MB if chunk size >= 32MB, 2MB otherwise.
>
> But I’m fine also if you leave it as it is.
>
> With the above fixed:
>
> Reviewed-by: Luca Fancellu <luca.fancellu@xxxxxxx>
I can take this one but ...
> Tested-by: Luca Fancellu <luca.fancellu@xxxxxxx>
not this one given the change.
~Michal