Re: [PATCH v2 4/6] xen/riscv: introduce identity mapping
On Fri, 2023-07-07 at 12:51 +0200, Jan Beulich wrote:
> On 07.07.2023 12:37, Oleksii wrote:
> > On Thu, 2023-07-06 at 13:35 +0200, Jan Beulich wrote:
> > > On 19.06.2023 15:34, Oleksii Kurochko wrote:
> > > > --- a/xen/arch/riscv/include/asm/config.h
> > > > +++ b/xen/arch/riscv/include/asm/config.h
> > > > @@ -1,3 +1,5 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0-only */
> > > > +
> > > > #ifndef __RISCV_CONFIG_H__
> > > > #define __RISCV_CONFIG_H__
> > > >
> > >
> > > Unrelated change?
> > It should be part of [PATCH v2 5/6] xen/riscv: introduce identity
> > mapping.
>
> Hmm, here we're discussing "[PATCH v2 4/6] xen/riscv: introduce
> identity mapping". I'm confused, I guess.
Sorry for the confusion. I meant the patch "[PATCH v2 5/6] xen/riscv: add
SPDX tags".
>
> > > > --- a/xen/arch/riscv/mm.c
> > > > +++ b/xen/arch/riscv/mm.c
> > > > @@ -25,6 +25,12 @@ unsigned long __ro_after_init phys_offset;
> > > > #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
> > > > #define LINK_TO_LOAD(addr) ((unsigned long)(addr) + phys_offset)
> > > >
> > > > +/*
> > > > + * Should be removed as soon as enough headers are merged for
> > > > + * inclusion of <xen/lib.h>.
> > > > + */
> > > > +#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
> > > > +
> > > > /*
> > > > * It is expected that Xen won't be more than 2 MB.
> > > > * The check in xen.lds.S guarantees that.
> > > > @@ -35,8 +41,10 @@ unsigned long __ro_after_init phys_offset;
> > > > *
> > > > * One more page table might be needed if the Xen load address
> > > > * isn't 2 MB aligned.
> > > > *
> > > > * (CONFIG_PAGING_LEVELS - 1) page tables are needed for the
> > > > * identity mapping.
> > > > */
> > > > -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
> > > > +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)
> > >
> > > How come the extra page (see the comment sentence in context) isn't
> > > needed for the identity-mapping case?
> > Only the 'nonroot' page tables (L0 and L1 in the case of Sv39) need to
> > be allocated, as the 'root' page table (L2 in the case of Sv39) is
> > always re-used.
> >
> > The same holds for any MMU mode: only 'nonroot' page tables might need
> > to be allocated.
>
> Of course, but if you cross a 2Mb boundary you'll need 2 L0 tables.
Yes, crossing a 2Mb boundary will require 2 L0 tables.
The number of required page tables therefore depends on the Xen binary
size and on the load address alignment, because each 2Mb of the mapped
range needs its own L0 table.
This isn't strictly needed now (xen.lds.S has a check on the Xen size),
but if someone grows the Xen binary to 4Mb, the number of page tables
will have to be updated too.
Should we take that into account?
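
To make that dependency explicit, the count could be derived from the
image size bound instead of being hard-coded. A rough, untested sketch
(XEN_MAX_SIZE_MB is a made-up placeholder for the limit xen.lds.S
currently enforces; the root table is assumed to keep its separate
static allocation, as stage1_pgtbl_root has today):

    /* Placeholder for the image size limit checked in xen.lds.S. */
    #define XEN_MAX_SIZE_MB     2

    /*
     * L0 tables needed for a 2MB-aligned mapping of that size.  The
     * identity mapping may be only 4k-aligned and can thus touch one
     * extra 2MB slot, hence the additional "+ 1" below.
     */
    #define L0_TABLE_COUNT      ((XEN_MAX_SIZE_MB + 1) / 2)

    /*
     * Intermediate (non-root, non-L0) levels twice, L0 tables twice,
     * plus one extra L0 for the possibly unaligned identity mapping.
     * For Sv39 and a 2MB image this still evaluates to 5, matching
     * (CONFIG_PAGING_LEVELS - 1) * 2 + 1.
     */
    #define PGTBL_INITIAL_COUNT \
        ((CONFIG_PAGING_LEVELS - 2) * 2 + L0_TABLE_COUNT * 2 + 1)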
>
> > > > @@ -255,25 +262,30 @@ void __init noreturn noinline enable_mmu()
> > > >     csr_write(CSR_SATP,
> > > >               PFN_DOWN((unsigned long)stage1_pgtbl_root) |
> > > >               RV_STAGE1_MODE << SATP_MODE_SHIFT);
> > > > +}
> > > >
> > > > -    asm volatile ( ".p2align 2" );
> > > > - mmu_is_enabled:
> > > > -    /*
> > > > -     * The stack should be re-inited as:
> > > > -     * 1. Right now the address of the stack is relative to load
> > > > -     *    time addresses, which will cause an issue if the load
> > > > -     *    start address isn't equal to the linker start address.
> > > > -     * 2. Addresses in the stack are all load time relative, which
> > > > -     *    can be an issue if the load start address isn't equal
> > > > -     *    to the linker start address.
> > > > -     *
> > > > -     * We can't return to the caller because the stack was reset
> > > > -     * and it may have stashed some variables on the stack.
> > > > -     * Jump to a brand new function as the stack was reset.
> > > > -     */
> > > > +void __init remove_identity_mapping(void)
> > > > +{
> > > > +    unsigned int i;
> > > > +    pte_t *pgtbl;
> > > > +    unsigned int index, xen_index;
> > > > +    unsigned long load_addr = LINK_TO_LOAD(_start);
> > > >
> > > > -    switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
> > > > -                          cont_after_mmu_is_enabled);
> > > > +    for ( pgtbl = stage1_pgtbl_root, i = 0;
> > > > +          i <= (CONFIG_PAGING_LEVELS - 1);
> > >
> > > i < CONFIG_PAGING_LEVELS ? But maybe it would be easier for i to
> > > start at CONFIG_PAGING_LEVELS and be decremented, simplifying ...
> > >
> > > > +          i++ )
> > > > +    {
> > > > +        index = pt_index(CONFIG_PAGING_LEVELS - 1 - i, load_addr);
> > > > +        xen_index = pt_index(CONFIG_PAGING_LEVELS - 1 - i, XEN_VIRT_START);
> > >
> > > ... these two expressions?
> > It makes sense. I'll update this part of the code.
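
Something along these lines, perhaps (untested; the descent to the
next-level table is left out here, just as in the quoted hunk):

    for ( pgtbl = stage1_pgtbl_root, i = CONFIG_PAGING_LEVELS; i != 0; i-- )
    {
        index = pt_index(i - 1, load_addr);
        xen_index = pt_index(i - 1, XEN_VIRT_START);

        if ( index != xen_index )
        {
            pgtbl[index].pte = 0;
            break;
        }

        /* ... descend to the table referenced by pgtbl[index] ... */
    }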
> >
> > >
> > > > +        if ( index != xen_index )
> > > > +        {
> > > > +            pgtbl[index].pte = 0;
> > > > +            break;
> > > > +        }
> > >
> > > Is this enough? When load and link address are pretty close (but
> > > not overlapping), can't they share a leaf table, in which case you
> > > need to clear more than just a single entry? The present overlap
> > > check looks to be 4k-granular, not 2M (in which case this couldn't
> > > happen; IOW adjusting the overlap check may also be a way out).
> > At the start of setup_initial_pagetables() there is code which checks
> > that the load and link address ranges don't overlap:
> >
> >     if ( (linker_start != load_start) &&
> >          (linker_start <= load_end) && (load_start <= linker_end) )
> >     {
> >         early_printk("(XEN) linker and load address ranges overlap\n");
> >         die();
> >     }
> >
> > So the smallest possible difference between load and link address is
> > 4kb. Otherwise the load and link address ranges are equal (as we can't
> > map less than 4kb).
> >
> > Let's take a concrete example:
> > Load address range is 0x8020_0000 - 0x8020_0FFF
> > Linker address range is 0x8020_1000 - 0x8020_1FFF
> > MMU mode: Sv39 ( so we have 3 page table levels )
> >
> > So we have:
> >  * L2 index = 2, L1 index = 1, L0 index = 0 for the load address
> >  * L2 index = 2, L1 index = 1, L0 index = 1 for the linker address
> > Thereby we have two different L0 entries for the load and linker
> > address ranges.
> > And it looks like it is pretty safe to clear only the one L0 entry
> > that was used for the identity mapping.
> >
> > Is it possible that I missed something?
>
> Looks as if you are thinking of only a Xen which fits in 4k. The code
> here wants to cope with Xen getting quite a bit bigger.
Yeah, I missed that when I tried to come up with an example.
So it will probably be more robust to recursively walk the whole
identity mapping and unmap each PTE individually.
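
A rough, untested sketch of that approach (pte_is_valid(),
pte_is_table(), pte_to_table() and PT_LEVEL_SIZE() are assumed helpers
here: validity check, "references a next-level table" check, PTE to
table-pointer conversion, and bytes covered by one entry at the given
level):

    static void __init zap_identity_ptes(pte_t *pgtbl, unsigned int level,
                                         unsigned long start,
                                         unsigned long end)
    {
        unsigned long va = start;

        while ( va < end )
        {
            unsigned int index = pt_index(level, va);
            /* First address no longer covered by this entry. */
            unsigned long next = (va & ~(PT_LEVEL_SIZE(level) - 1)) +
                                 PT_LEVEL_SIZE(level);

            if ( pte_is_valid(pgtbl[index]) )
            {
                if ( level && pte_is_table(pgtbl[index]) )
                    /* Recurse into the clamped sub-range. */
                    zap_identity_ptes(pte_to_table(pgtbl[index]), level - 1,
                                      va, next < end ? next : end);
                else
                    /* Leaf entry of the identity mapping: drop it. */
                    pgtbl[index].pte = 0;
            }

            va = next;
        }
    }

The caller would then be something like
zap_identity_ptes(stage1_pgtbl_root, CONFIG_PAGING_LEVELS - 1, load_addr,
load_addr + xen_size), with xen_size standing for the actual image size;
returning the emptied tables to the pool is left out of the sketch.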