Re: [PATCH v2 4/6] xen/riscv: introduce identity mapping
On 19.06.2023 15:34, Oleksii Kurochko wrote:
> Since it is not easy to keep track of where the identity map was
> mapped, we will look for the top-level entry exclusive to the identity
> map and remove it.
I think you mean "top-most" or some such, as it's not necessarily the
top-level entry you zap.
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -1,3 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> #ifndef __RISCV_CONFIG_H__
> #define __RISCV_CONFIG_H__
>
Unrelated change?
> --- a/xen/arch/riscv/mm.c
> +++ b/xen/arch/riscv/mm.c
> @@ -25,6 +25,12 @@ unsigned long __ro_after_init phys_offset;
> #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
> #define LINK_TO_LOAD(addr) ((unsigned long)(addr) + phys_offset)
>
> +/*
> + * Should be removed as soon as enough headers have been merged for
> + * inclusion of <xen/lib.h>.
> + */
> +#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
> +
> /*
> * It is expected that Xen won't be more than 2 MB.
> * The check in xen.lds.S guarantees that.
> @@ -35,8 +41,10 @@ unsigned long __ro_after_init phys_offset;
> *
> * One more page table might be needed in case the Xen load address
> * isn't 2 MB aligned.
> + *
> + * (CONFIG_PAGING_LEVELS - 1) page tables are needed for identity mapping.
> */
> -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
> +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)
How come the extra page (see the comment sentence in context) isn't
needed for the identity-mapping case?
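Just to spell out my counting here (mine, not something the patch
states): the identity mapping covers the same [load_addr, load_end)
range, so when the load address isn't 2 MB aligned it can cross a 2 MB
boundary for exactly the same reason, which would rather suggest e.g.

    #define PGTBL_INITIAL_COUNT (((CONFIG_PAGING_LEVELS - 1) + 1) * 2)

i.e. one spare page per mapping, unless there's a reason the identity
map can never span two leaf tables.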
> @@ -255,25 +262,30 @@ void __init noreturn noinline enable_mmu()
> csr_write(CSR_SATP,
> PFN_DOWN((unsigned long)stage1_pgtbl_root) |
> RV_STAGE1_MODE << SATP_MODE_SHIFT);
> +}
>
> - asm volatile ( ".p2align 2" );
> - mmu_is_enabled:
> - /*
> - * Stack should be re-inited as:
> - * 1. Right now an address of the stack is relative to load time
> - * addresses what will cause an issue in case of load start address
> - * isn't equal to linker start address.
> - * 2. Addresses in stack are all load time relative which can be an
> - * issue in case when load start address isn't equal to linker
> - * start address.
> - *
> - * We can't return to the caller because the stack was reseted
> - * and it may have stash some variable on the stack.
> - * Jump to a brand new function as the stack was reseted
> - */
> +void __init remove_identity_mapping(void)
> +{
> + unsigned int i;
> + pte_t *pgtbl;
> + unsigned int index, xen_index;
> + unsigned long load_addr = LINK_TO_LOAD(_start);
>
> - switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
> - cont_after_mmu_is_enabled);
> + for ( pgtbl = stage1_pgtbl_root, i = 0;
> + i <= (CONFIG_PAGING_LEVELS - 1);
i < CONFIG_PAGING_LEVELS? But maybe it would be easier for i to start
at CONFIG_PAGING_LEVELS and be decremented, simplifying ...
> + i++ )
> + {
> + index = pt_index(CONFIG_PAGING_LEVELS - 1 - i, load_addr);
> + xen_index = pt_index(CONFIG_PAGING_LEVELS - 1 - i, XEN_VIRT_START);
... these two expressions?
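Something along these lines is what I have in mind (an untested sketch
only, reusing the patch's local variables):

    for ( pgtbl = stage1_pgtbl_root, i = CONFIG_PAGING_LEVELS; i-- > 0; )
    {
        /* i now directly is the level, so no re-computation is needed. */
        index = pt_index(i, load_addr);
        xen_index = pt_index(i, XEN_VIRT_START);
        ...
    }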
> + if ( index != xen_index )
> + {
> + pgtbl[index].pte = 0;
> + break;
> + }
Is this enough? When the load and link addresses are pretty close (but
not overlapping), can't they share a leaf table, in which case you need
to clear more than just a single entry? The present overlap check looks
to be 4k-granular, not 2M (were it 2M-granular, this couldn't happen;
IOW adjusting the overlap check may also be a way out).
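To illustrate the first alternative (a rough sketch only, assuming the
XEN_PT_LEVEL_SIZE() helper introduced earlier in this series; handling
of a range crossing into a second leaf table omitted):

    if ( index != xen_index )
    {
        unsigned long load_end = LINK_TO_LOAD(_end);
        unsigned int level = CONFIG_PAGING_LEVELS - 1 - i;

        /*
         * Zap every entry at this level covering [load_addr, load_end),
         * not just the first one.
         */
        while ( load_addr < load_end )
        {
            pgtbl[index++].pte = 0;
            load_addr += XEN_PT_LEVEL_SIZE(level);
        }
        break;
    }

The other way out, as said, would be to tighten the overlap check to 2M
granularity, at which point the single-entry zap would be sufficient.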
Jan