Re: [PATCH v3 18/19] xen/arm: mm: Rework setup_xenheap_mappings()
On Mon, 21 Feb 2022, Julien Grall wrote:
> From: Julien Grall <julien.grall@xxxxxxx>
>
> The current implementation of setup_xenheap_mappings() is using 1GB
> mappings. This can lead to unexpected result because the mapping
> may alias a non-cachable region (such as device or reserved regions).
> For more details see B2.8 in ARM DDI 0487H.a.
>
> map_pages_to_xen() was recently reworked to allow superpage mappings,
> support contiguous mapping and deal with the use of pagge-tables before
pagetables
> they are mapped.
>
> Most of the code in setup_xenheap_mappings() is now replaced with a
> single call to map_pages_to_xen().
>
> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
> Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
>
> ---
> Changes in v3:
> - Don't use 1GB mapping
> - Re-order code in setup_mm() in a separate patch
>
> Changes in v2:
> - New patch
> ---
> xen/arch/arm/mm.c | 87 ++++++++++-------------------------------------
> 1 file changed, 18 insertions(+), 69 deletions(-)
Very good!
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 11b6b60a2bc1..4af59375d998 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -138,17 +138,6 @@ static DEFINE_PAGE_TABLE(cpu0_pgtable);
> static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
> #endif
>
> -#ifdef CONFIG_ARM_64
> -/* The first page of the first level mapping of the xenheap. The
> - * subsequent xenheap first level pages are dynamically allocated, but
> - * we need this one to bootstrap ourselves. */
> -static DEFINE_PAGE_TABLE(xenheap_first_first);
> -/* The zeroeth level slot which uses xenheap_first_first. Used because
> - * setup_xenheap_mappings otherwise relies on mfn_to_virt which isn't
> - * valid for a non-xenheap mapping. */
> -static __initdata int xenheap_first_first_slot = -1;
> -#endif
> -
> /* Common pagetable leaves */
> /* Second level page tables.
> *
> @@ -815,77 +804,37 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
>  void __init setup_xenheap_mappings(unsigned long base_mfn,
>                                     unsigned long nr_mfns)
>  {
> -    lpae_t *first, pte;
> -    unsigned long mfn, end_mfn;
> -    vaddr_t vaddr;
> -
> -    /* Align to previous 1GB boundary */
> -    mfn = base_mfn & ~((FIRST_SIZE>>PAGE_SHIFT)-1);
> +    int rc;
> 
>      /* First call sets the xenheap physical and virtual offset. */
>      if ( mfn_eq(xenheap_mfn_start, INVALID_MFN) )
>      {
> +        unsigned long mfn_gb = base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) - 1);
> +
>          xenheap_mfn_start = _mfn(base_mfn);
>          xenheap_base_pdx = mfn_to_pdx(_mfn(base_mfn));
> +        /*
> +         * The base address may not be aligned to the first level
> +         * size (e.g. 1GB when using 4KB pages). This would prevent
> +         * superpage mappings for all the regions because the virtual
> +         * address and machine address should both be suitably aligned.
> +         *
> +         * Prevent that by offsetting the start of the xenheap virtual
> +         * address.
> +         */
>          xenheap_virt_start = DIRECTMAP_VIRT_START +
> -                             (base_mfn - mfn) * PAGE_SIZE;
> +                             (base_mfn - mfn_gb) * PAGE_SIZE;
>      }
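As a side note, here is a minimal standalone sketch of the offset arithmetic in the hunk above. It only demonstrates that, after rounding base_mfn down to the previous 1GB boundary and offsetting the virtual start by the remainder, the virtual and machine addresses share the same offset within a 1GB region. The page size, first-level size, directmap base and example base_mfn are made-up values for the demo, not taken from the Xen headers:

/*
 * Illustrative only: shows why offsetting the xenheap virtual start keeps
 * virtual and machine addresses congruent modulo the first-level size,
 * which is what allows block (superpage) mappings later on.
 */
#include <stdio.h>

#define PAGE_SHIFT            12UL
#define PAGE_SIZE             (1UL << PAGE_SHIFT)
#define FIRST_SIZE            (1UL << 30)               /* 1GB with 4KB pages */
#define DIRECTMAP_VIRT_START  0x0000008000000000UL      /* made up for the demo */

int main(void)
{
    unsigned long base_mfn = 0x80200;                   /* not 1GB aligned */
    /* Same computation as the patch: round down to the previous 1GB
       boundary, then offset the virtual start by the difference. */
    unsigned long mfn_gb = base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) - 1);
    unsigned long virt_start = DIRECTMAP_VIRT_START +
                               (base_mfn - mfn_gb) * PAGE_SIZE;
    unsigned long phys_start = base_mfn * PAGE_SIZE;

    /* Both offsets print as 0x200000: virt and phys are congruent mod 1GB,
       which is the alignment property map_pages_to_xen() needs in order to
       pick superpage mappings. */
    printf("virt %% 1GB = %#lx, phys %% 1GB = %#lx\n",
           virt_start & (FIRST_SIZE - 1), phys_start & (FIRST_SIZE - 1));

    return 0;
}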
[...]
> +    rc = map_pages_to_xen((vaddr_t)__mfn_to_virt(base_mfn),
> +                          _mfn(base_mfn), nr_mfns,
> +                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
> +    if ( rc )
> +        panic("Unable to setup the xenheap mappings.\n");
I understand the intent of the code and I like it. maddr_to_virt is
implemented as:
    return (void *)(XENHEAP_VIRT_START -
                    (xenheap_base_pdx << PAGE_SHIFT) +
                    ((ma & ma_va_bottom_mask) |
                     ((ma & ma_top_mask) >> pfn_pdx_hole_shift)));
The PDX stuff is always difficult to follow and I cannot claim that I
traced through exactly what the resulting virtual address in the mapping
would be for a given base_mfn, but the patch looks correct compared to
the previous code.
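For what it is worth, assuming no PDX compression is in effect (i.e. pfn_pdx_hole_shift is 0 and the two masks act as identities) and that XENHEAP_VIRT_START resolves to the xenheap_virt_start set in the hunk above, the quoted expression collapses to a plain offset from the 1GB boundary below the first bank. A rough standalone sketch with made-up numbers (the directmap base and example base_mfn are not real values):

#include <stdio.h>

#define PAGE_SHIFT            12UL
#define PAGE_SIZE             (1UL << PAGE_SHIFT)
#define FIRST_SIZE            (1UL << 30)
#define DIRECTMAP_VIRT_START  0x0000008000000000UL      /* made up for the demo */

/* Assumed "no PDX hole" case: identity masks and a zero shift. */
static const unsigned long ma_va_bottom_mask = ~0UL;
static const unsigned long ma_top_mask = 0UL;
static const unsigned int pfn_pdx_hole_shift = 0;

int main(void)
{
    unsigned long base_mfn = 0x80200;                   /* first bank, not 1GB aligned */
    unsigned long mfn_gb = base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) - 1);
    unsigned long xenheap_base_pdx = base_mfn;          /* mfn_to_pdx() is identity w/o hole */
    unsigned long xenheap_virt_start = DIRECTMAP_VIRT_START +
                                       (base_mfn - mfn_gb) * PAGE_SIZE;
    unsigned long ma = base_mfn << PAGE_SHIFT;

    /* The quoted maddr_to_virt() expression. */
    unsigned long va = xenheap_virt_start -
                       (xenheap_base_pdx << PAGE_SHIFT) +
                       ((ma & ma_va_bottom_mask) |
                        ((ma & ma_top_mask) >> pfn_pdx_hole_shift));

    /* With identity masks this is DIRECTMAP_VIRT_START + (ma - mfn_gb * PAGE_SIZE),
       so the virtual address keeps the machine address' offset within its
       1GB region. */
    printf("va = %#lx, expected = %#lx\n",
           va, DIRECTMAP_VIRT_START + (ma - mfn_gb * PAGE_SIZE));

    return 0;
}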
Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>