
Re: [Xen-devel] arm64: Approach for DT based NUMA and issues



Hi Vijay,

On 28/11/16 15:05, Vijay Kilari wrote:
On Mon, Nov 28, 2016 at 7:20 PM, Andre Przywara <andre.przywara@xxxxxxx> wrote:
On 26/11/16 06:59, Vijay Kilari wrote:

3) Parsing Memory nodes:
--------------------------------------
For every DT memory node in the flattened DT, the start address, size
and numa-node-id value are extracted and stored in node_memblk_range[],
which is an array of struct node.
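
As an illustration, the recording step could look roughly like the sketch
below. struct node, node_memblk_range[] and memblk_nodeid[] follow the
existing x86 code in xen/arch/x86/numa.c; the dt_numa_add_memblk() helper
and its arguments are made up for the example:

    /* Per-range table, mirroring the x86 layout. */
    struct node {
        paddr_t start, end;
    };

    static struct node node_memblk_range[NR_NODE_MEMBLKS];
    static nodeid_t memblk_nodeid[NR_NODE_MEMBLKS];
    static int num_node_memblks;

    /* Hypothetical helper: record one DT memory range and its numa-node-id. */
    static int __init dt_numa_add_memblk(nodeid_t nid, paddr_t start, paddr_t size)
    {
        if ( num_node_memblks >= NR_NODE_MEMBLKS )
            return -ENOMEM;

        node_memblk_range[num_node_memblks].start = start;
        node_memblk_range[num_node_memblks].end = start + size;
        memblk_nodeid[num_node_memblks] = nid;
        num_node_memblks++;

        return 0;
    }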

Each bootinfo.mem entry from UEFI is then verified against node_memblk_range[],
and NODE_DATA is populated with the start PFN, end PFN and node id.
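
Per verified bank, that boils down to extending the per-node PFN range,
something like the sketch below (NODE_DATA() and its node_start_pfn /
node_spanned_pages fields are the ones used on x86; the helper name and
the exact merging logic are only assumptions):

    /* Sketch: fold one verified bootinfo.mem bank into NODE_DATA(nid). */
    static void __init numa_add_bank(nodeid_t nid, paddr_t start, paddr_t size)
    {
        unsigned long spfn = paddr_to_pfn(start);
        unsigned long epfn = paddr_to_pfn(start + size);
        struct node_data *nd = NODE_DATA(nid);

        if ( !node_isset(nid, node_online_map) )
        {
            nd->node_start_pfn = spfn;
            nd->node_spanned_pages = epfn - spfn;
            node_set_online(nid);
        }
        else
        {
            unsigned long end = max(nd->node_start_pfn + nd->node_spanned_pages,
                                    epfn);

            nd->node_start_pfn = min(nd->node_start_pfn, spfn);
            nd->node_spanned_pages = end - nd->node_start_pfn;
        }
    }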

Populating memnodemap:

memnodemap[] is allocated from the heap and, using the NODE_DATA structure,
is populated with the node id for each page index.

The memory allocator then uses this map to fetch the node id of a given
page by calling phys_to_nid().
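
Once populated, the lookup is just an array index. The x86 phys_to_nid()
looks roughly like the sketch below (memnode_shift and paddr_to_pdx() are
the x86 names; the arm64 variant may differ):

    extern nodeid_t *memnodemap;        /* allocated from the heap */
    extern unsigned int memnode_shift;  /* granularity of the map */

    /* x86-style lookup the arm64 code would mirror (bounds checks omitted). */
    static inline nodeid_t phys_to_nid(paddr_t addr)
    {
        return memnodemap[paddr_to_pdx(addr) >> memnode_shift];
    }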

ISSUE: phys_to_nid() is called by the memory allocator before memnodemap[]
is initialized.

Because memnodemap[] is allocated from the heap, the boot allocator has to
be initialized first. But the boot allocator itself needs phys_to_nid(),
which is not usable until memnodemap[] is initialized, so there is a
circular dependency during initialization. To break it, phys_to_nid()
should fall back to node_memblk_range[] to look up the node id until
memnodemap[] is initialized.
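
Concretely, the interim lookup would just be a linear scan of the ranges
recorded while parsing the DT memory nodes, e.g. (sketch only, reusing the
node_memblk_range[] / memblk_nodeid[] names from above):

    /* Fallback used only until memnodemap[] is set up. */
    static nodeid_t early_phys_to_nid(paddr_t addr)
    {
        int i;

        for ( i = 0; i < num_node_memblks; i++ )
            if ( addr >= node_memblk_range[i].start &&
                 addr < node_memblk_range[i].end )
                return memblk_nodeid[i];

        return 0; /* default to node 0 if nothing matches */
    }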

What about having an early boot fallback, like:

nodeid_t phys_to_nid(paddr_t addr)
{
        if (!memnodemap)
                return 0;
        ....
}

The memory allocator has all the nodes' memory from bootinfo.mem, so it
fails when phys_to_nid() returns 0 for node 1 memory.

Why don't you allocate memory using the early boot allocator (see alloc_boot_pages) as it is done on x86 (see xen/arch/x86/numa.c)?
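
For reference, a simplified sketch of what x86 does in
allocate_cachealigned_memnodemap() (error handling omitted; alloc_boot_pages()
returns an MFN in this tree, and the mfn_to_virt() step would need its arm64
equivalent):

    /* Take the memnodemap pages from the boot allocator, which is already
     * up at this point, instead of from the heap. */
    static int __init allocate_memnodemap_from_bootmem(void)
    {
        unsigned long size = PFN_UP(memnodemapsize * sizeof(*memnodemap));
        unsigned long mfn = alloc_boot_pages(size, 1);

        memnodemap = mfn_to_virt(mfn);
        memnodemapsize = (size << PAGE_SHIFT) / sizeof(*memnodemap);

        return 0;
    }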

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

