
Re: [PATCH v1 2/3] xen/riscv: setup initial pagetables



Hi,

On 27/02/2023 17:17, Oleksii wrote:
On Sat, 2023-02-25 at 18:05 +0000, Julien Grall wrote:
Hi,

On 24/02/2023 15:06, Oleksii Kurochko wrote:
Calculate load and linker image addresses and
setup initial pagetables.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
---
   xen/arch/riscv/setup.c | 11 +++++++++++
   1 file changed, 11 insertions(+)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index b7cd438a1d..f69bc278bb 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,9 +1,11 @@
   #include <xen/bug.h>
   #include <xen/compile.h>
   #include <xen/init.h>
+#include <xen/kernel.h>
  #include <asm/csr.h>
   #include <asm/early_printk.h>
+#include <asm/mm.h>
   #include <asm/traps.h>
  /* Xen stack for bringing up the first CPU. */
@@ -43,6 +45,11 @@ static void __init disable_fpu(void)
  void __init noreturn start_xen(void)
   {
+    unsigned long load_start    = (unsigned long)start;
+    unsigned long load_end      = load_start + (unsigned long)(_end - _start);

I am a bit puzzled: on top of load_addr() and linker_addr(), you wrote
that they can't use global variables/functions. But here... you are using
them. So how is this different?

I don't use the load_addr() and linker_addr() macros here.

I understand that. But my comment was related to:

/*
 * WARNING: load_addr() and linker_addr() are to be called only when the MMU is
 * disabled and only when executed by the primary CPU. They cannot refer to
 * any global variable or functions.
 */

_start and _end are global variables. So why can you use them here but not there?

If you could use them in load_addr() then you could simplify a lot your logic.


+    unsigned long linker_start  = (unsigned long)_start;
+    unsigned long linker_end    = (unsigned long)_end;

I am a bit confused with how you define the start/end for both the
linker and load. In one you use _start and in the other _end.

Both are fixed at compile time, so I assume the values will be a linked
address rather than the load address. So how is this meant to work?

_start and _end are labels from the linker script, so I use them to
define the linker_start and linker_end addresses.

load_start is defined as the address of the start() function from head.S,
and load_end is load_start + the image size (_end - _start).

I think you misunderstood my comment. I understand what the variables are for. But I don't understand the computation because Xen could be loaded at a different address than the runtime address.
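[Editorial note: the distinction being argued about here can be sketched in plain C. This is a minimal user-space model, not Xen code; the constants are hypothetical stand-ins for the linker-script symbols and the runtime address of start() in head.S.]

```c
/*
 * Toy model of the load-vs-link address computation discussed above.
 * In the real code, _start/_end come from the linker script and
 * load_start is the runtime address of start(); here they are
 * hypothetical constants so the arithmetic can be checked standalone.
 */
#define LINKER_START  0x00200000UL   /* link-time _start (XEN_VIRT_START) */
#define LINKER_END    0x00300000UL   /* link-time _end: a 1 MiB image     */
#define LOAD_START    0x80200000UL   /* where the bootloader put Xen      */

/* Image size is the same at link time and at load time. */
static unsigned long image_size(void)
{
    return LINKER_END - LINKER_START;   /* _end - _start */
}

/* load_end = load_start + (_end - _start), as in the patch. */
static unsigned long load_end(void)
{
    return LOAD_START + image_size();
}

/* Translate a link-time (virtual) address to its load (physical) one. */
static unsigned long linker_to_load(unsigned long vaddr)
{
    return vaddr - LINKER_START + LOAD_START;
}
```

The point of the model: taking the address of a compile-time symbol such as _start yields the link-time value, which only equals the load address by accident when the two happen to coincide.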


Furthermore, I would expect linker_start and load_start to point to the
same symbol (the only difference being that one stores the virtual
address whereas the other stores the physical address). But here you are
technically using two different symbols. Can you explain why?

It is used to make an identity mapping for the ranges [load_addr, load_end]
and [linker_addr, linker_end]. It was done this way because in Bobby's
patches the linker script defines XEN_VIRT_START as
_AT(vaddr_t, 0x00200000), but the bootloader loads Xen at 0x80200000, so
in that case load_addr != linker_addr.
But I have changed XEN_VIRT_START to 0x8020...00, so they are equal now.

So this will be broken as soon as this code is tested on hardware where there is no RAM at 0x8020...00. I would strongly recommend that you test your code with XEN_VIRT_START != load address.
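[Editorial note: the failure mode Julien describes can be made concrete with a tiny model, illustrative only and not a Xen API: the early page tables need both an identity mapping of the load range, so execution survives the instant the MMU is switched on, and the runtime mapping of the linker range. When XEN_VIRT_START happens to equal the load address, the two ranges coincide and a bug in either one stays hidden.]

```c
#include <stdbool.h>

/*
 * Hypothetical model, not Xen code: setup_initial_pagetables() must
 * install (1) an identity mapping of [load_start, load_end) and
 * (2) the runtime mapping of [linker_start, linker_end). If the two
 * start addresses are equal, the mappings collapse into one, which is
 * why testing only that configuration proves nothing.
 */
static bool ranges_coincide(unsigned long load_start,
                            unsigned long linker_start)
{
    return load_start == linker_start;
}

/* Number of distinct mappings the early page tables must contain. */
static int mappings_needed(unsigned long load_start,
                           unsigned long linker_start)
{
    return ranges_coincide(load_start, linker_start) ? 1 : 2;
}
```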


+
       /*
        * The following things are passed by bootloader:
        *   a0 -> hart_id
@@ -65,6 +72,10 @@ void __init noreturn start_xen(void)
       test_macros_from_bug_h();

+    setup_initial_pagetables(load_start, load_end, linker_start, linker_end);

Shouldn't this happen earlier in start_xen()?
It can. To be honest, I don't know if it should. I added it at the end
only because it was the last thing I worked on...

I think we should enable the MMU and switch to the runtime mapping as early as possible.

Cheers,

--
Julien Grall
