Re: [Xen-devel] [PATCH 14/17] xen/arm64: head: Remove ID map as soon as it is not used
On Mon, 10 Jun 2019, Julien Grall wrote:
> The ID map may clash with other parts of the Xen virtual memory layout.
> At the moment, Xen is handling the clash by only creating a mapping to
> the runtime virtual address before enabling the MMU.
>
> The rest of the mappings (such as the fixmap) will be mapped after the
> MMU is enabled. However, the code doing the mapping is not safe as it
> replaces mappings without using the Break-Before-Make sequence.
>
> As the ID map can be anywhere in the memory, it is easier to remove all
> the entries added as soon as the ID map is not used rather than adding
> the Break-Before-Make sequence everywhere.

I think it is a good idea, but I would ask you to mention "1:1 map"
instead of "ID map" in comments and commit messages, because that is the
wording we used in all comments so far (see the one in
create_page_tables and mm.c). It is easier to grep and refer to if we
use the same nomenclature. Note that I don't care which nomenclature we
decide to use, I am only asking for consistency. Otherwise, it would
also work if you say it both ways at least once:

  The ID map (1:1 map) may clash [...]

> It is difficult to track where exactly the ID map was created without a
> full rework of create_page_tables(). Instead, introduce a new function
> remove_id_map() which will look for the top-level entry used by the ID
> map and remove it.

Do you think it would be worth simplifying the code below by preserving
where/how the ID map was created? We could repurpose x25 for that,
carrying for instance the address of the ID map section slot or a code
number to specify which case we are dealing with. We should be able to
turn remove_id_map into only ~5 lines (see the rough sketch at the end
of this mail).

> The new function is only called for the boot CPU. Secondary CPUs will
> switch directly to the runtime page-tables, so there is no need to
> remove the ID mapping. Note that this still doesn't make the secondary
> CPUs path safe, but it is not making it worse.
>
> ---
> Note that the comment refers to the patch "xen/arm: tlbflush: Rework
> TLB helpers" under review (see [1]).
>
> Furthermore, it is very likely we will need to re-introduce the ID
> map to cater for secondary CPU boot and suspend/resume. For now, the
> aim is to make the boot CPU path fully Arm Arm compliant.
>
> [1] https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg01134.html
> ---
>  xen/arch/arm/arm64/head.S | 86 ++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 71 insertions(+), 15 deletions(-)
>
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 192af3e8a2..96e85f8834 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -300,6 +300,13 @@ real_start_efi:
>          ldr   x0, =primary_switched
>          br    x0
>  primary_switched:
> +        /*
> +         * The ID map may clash with other parts of the Xen virtual memory
> +         * layout. As it is not used anymore, remove it completely to
> +         * avoid having to worry about replacing existing mapping
> +         * afterwards.
> +         */
> +        bl    remove_id_map
>          bl    setup_fixmap
>  #ifdef CONFIG_EARLY_PRINTK
>          /* Use a virtual address to access the UART. */
> @@ -632,10 +639,68 @@ enable_mmu:
>          ret
>  ENDPROC(enable_mmu)
>
> +/*
> + * Remove the ID map for the page-tables. It is not easy to keep track
> + * where the ID map was mapped, so we will look for the top-level entry
> + * exclusive to the ID Map and remove it.
> + *
> + * Inputs:
> + *   x19: paddr(start)
> + *
> + * Clobbers x0 - x1
> + */
> +remove_id_map:
> +        /*
> +         * Find the zeroeth slot used. Remove the entry from zeroeth
> +         * table if the slot is not 0. For slot 0, the ID map was either
> +         * done in first or second table.
> +         */
> +        lsr   x1, x19, #ZEROETH_SHIFT   /* x1 := zeroeth slot */
> +        cbz   x1, 1f
> +        /* It is not in slot 0, remove the entry */
> +        ldr   x0, =boot_pgtable         /* x0 := root table */
> +        str   xzr, [x0, x1, lsl #3]
> +        b     id_map_removed
> +
> +1:
> +        /*
> +         * Find the first slot used. Remove the entry for the first
> +         * table if the slot is not 0. For slot 0, the ID map was done
> +         * in the second table.
> +         */
> +        lsr   x1, x19, #FIRST_SHIFT
> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := first slot */
> +        cbz   x1, 1f
> +        /* It is not in slot 0, remove the entry */
> +        ldr   x0, =boot_first           /* x0 := first table */
> +        str   xzr, [x0, x1, lsl #3]
> +        b     id_map_removed
> +
> +1:
> +        /*
> +         * Find the second slot used. Remove the entry for the first
> +         * table if the slot is not 1 (runtime Xen mapping is 2M - 4M).
> +         * For slot 1, it means the ID map was not created.
> +         */
> +        lsr   x1, x19, #SECOND_SHIFT
> +        and   x1, x1, #LPAE_ENTRY_MASK  /* x1 := first slot */
> +        cmp   x1, #1
> +        beq   id_map_removed
> +        /* It is not in slot 1, remove the entry */
> +        ldr   x0, =boot_second          /* x0 := second table */
> +        str   xzr, [x0, x1, lsl #3]
> +
> +id_map_removed:
> +        /* See asm-arm/arm64/flushtlb.h for the explanation of the sequence. */
> +        dsb   nshst
> +        tlbi  alle2
> +        dsb   nsh
> +        isb
> +
> +        ret
> +ENDPROC(remove_id_map)
> +
>  setup_fixmap:
> -        /* Now we can install the fixmap and dtb mappings, since we
> -         * don't need the 1:1 map any more */
> -        dsb   sy
>  #if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
>          /* Add UART to the fixmap table */
>          ldr   x1, =xen_fixmap        /* x1 := vaddr (xen_fixmap) */
> @@ -653,19 +718,10 @@ setup_fixmap:
>          ldr   x1, =FIXMAP_ADDR(0)
>          lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
>          str   x2, [x4, x1]           /* Map it in the fixmap's slot */
> -#endif
>
> -        /*
> -         * Flush the TLB in case the 1:1 mapping happens to clash with
> -         * the virtual addresses used by the fixmap or DTB.
> -         */
> -        dsb   sy                     /* Ensure any page table updates made above
> -                                      * have occurred. */
> -
> -        isb
> -        tlbi  alle2
> -        dsb   sy                     /* Ensure completion of TLB flush */
> -        isb
> +        /* Ensure any page table updates made above have occurred */
> +        dsb   nshst
> +#endif
>          ret
>  ENDPROC(setup_fixmap)
>
> --
> 2.11.0
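
For concreteness, here is a rough sketch of the shorter remove_id_map
mentioned above. It assumes create_page_tables() is reworked to leave in
x25 the address of the page-table slot it used for the 1:1 map
(converted to its runtime virtual address once the MMU is on), or zero
when no dedicated entry was needed. This is only an illustration of the
idea, not a tested implementation; the x25 convention is an assumption:

    remove_id_map:
            /*
             * x25: runtime virtual address of the slot holding the 1:1 map
             *      entry, or zero if create_page_tables() did not add a
             *      dedicated entry (assumed convention, see above).
             */
            cbz   x25, 1f
            str   xzr, [x25]             /* Clear the slot used by the 1:1 map */
    1:
            /* Same TLB flush sequence as in the patch */
            dsb   nshst
            tlbi  alle2
            dsb   nsh
            isb
            ret
    ENDPROC(remove_id_map)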
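
For reference, the Break-Before-Make sequence the commit message refers
to boils down to something like the following when replacing a live
entry (sketch only; x0/x1 are placeholders and the blunt alle2
invalidation mirrors what head.S already uses):

            /*
             * x0: address of the live page-table entry to replace (placeholder)
             * x1: new entry to install (placeholder)
             */
            str   xzr, [x0]              /* Break: invalidate the existing entry */
            dsb   nshst                  /* Make the store visible to the walker */
            tlbi  alle2                  /* Invalidate any cached translation */
            dsb   nsh                    /* Wait for the invalidation to complete */
            str   x1, [x0]               /* Make: install the new entry */
            dsb   nshst
            isb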