
Re: [Xen-devel] [PATCH] xen/arm: setup_xenheap_mappings: BUG when alloc_boot_pages



Hello,

On Sat, Feb 16, 2019 at 2:36 PM Peng Fan <peng.fan@xxxxxxx> wrote:
>
> On ARM64, bootmem is initialized after setup_xenheap_mappings,
> so we should not call alloc_boot_pages in setup_xenheap_mappings.
>
> We could not simply move setup_xenheap_mappings after init_boot_pages,
> because bootmem_region_add, when bootmem_region_list is NULL, assigns
> the virtual address of the first page of the first bootmem region to
> bootmem_region_list. If the bootmem is not mapped yet, writing to the
> bootmem_region_list[] area will trigger a data abort.
>
> Currently we have not hit this issue, because FIRST_SIZE is 1GB on
> ARM64 and the xenheap_first_first table can cover up to 512GB of
> virtual memory. No SoC we support has such large DRAM today, but we
> might have one in the future.

"We" is a bit vague. Do you mean NXP or Arm ecosystem? If the latter,
there are definitely platform that can support more than 512GB.
Xen supports some of them, but I don't think someone ever tried to
boot Xen with that much memory.

>
> Add a BUG() to make people aware of this issue.

While I understand the problem, I don't think the BUG() is the correct approach.

Firstly, alloc_boot_pages can only return if pages were actually
allocated; otherwise Xen will crash (see the various BUG() in it). So
adding another BUG() is a bit pointless.
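For reference, alloc_boot_pages already looks roughly like this
(simplified from xen/common/page_alloc.c; details may differ):

    mfn_t __init alloc_boot_pages(unsigned long nr_pfns,
                                  unsigned long pfn_align)
    {
        unsigned int i = nr_bootmem_regions;

        /* Fires when called before init_boot_pages, as in this case. */
        BUG_ON(!nr_bootmem_regions);

        while ( i-- )
        {
            /* ... try to carve nr_pfns pages out of region i ... */
        }

        /* No region could satisfy the request. */
        BUG();
    }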
Secondly, we are meant to support up to 5TB of RAM (see [1]), and there
are platforms out there supporting more than 512GB.

So this is a bug in the code that should be fixed. One solution I can
see is to rework setup_xenheap_mappings to call init_boot_pages itself.
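A rough, untested sketch of the idea (need_new_first_table and
first_mfn are placeholders, not real identifiers; init_boot_pages and
alloc_boot_pages are the existing helpers in xen/common/page_alloc.c):

    void __init setup_xenheap_mappings(unsigned long base_mfn,
                                       unsigned long nr_mfns)
    {
        /* ... map as much as possible with the tables we already have ... */

        if ( need_new_first_table )
        {
            mfn_t first_mfn;

            /*
             * Hand one page of the region we have just mapped to the
             * boot allocator, so the alloc_boot_pages below has an
             * initialized bootmem list to allocate from.
             */
            init_boot_pages(pfn_to_paddr(base_mfn),
                            pfn_to_paddr(base_mfn + 1));

            first_mfn = alloc_boot_pages(1, 1);
            /* ... install first_mfn as the new first-level table ... */
        }
    }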

Cheers,

[1] https://lists.xenproject.org/archives/html/xen-devel/2018-12/msg00881.html

-- 
Julien Grall
