Re: [Xen-devel] [PATCH v3 7/8] tools: arm: support up to (almost) 1TB of guest RAM
Ian Campbell writes:
> This creates a second bank of RAM starting at 8GB and potentially extending to
> the 1TB boundary, which is the limit imposed by our current use of a 3 level
> p2m with 2 pages at level 0 (2^40 bits).
>
> I've deliberately left a gap between the two banks just to exercise those code
> paths.
>
> The second bank is 1016GB in size which plus the 3GB below 4GB is 1019GB
> maximum guest RAM. At the point where the fact that this is slightly less than
> a full TB starts to become an issue for people then we can switch to a 4 level
> p2m.
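For the arithmetic, here is a minimal sketch of the layout described above.
The concrete base/size values are assumptions reconstructed from the commit
message, not quoted from the patch:

    /* Sketch of the guest RAM layout described above; the values are
     * assumed from the commit message, not taken from the headers. */
    #include <stdint.h>

    #define GB(x) ((uint64_t)(x) << 30)

    #define GUEST_RAM0_BASE  GB(1)   /* assumed: bank 0 sits below 4GB */
    #define GUEST_RAM0_SIZE  GB(3)   /* 3GB below 4GB */
    #define GUEST_RAM1_BASE  GB(8)   /* bank 1 starts at 8GB */
    #define GUEST_RAM1_SIZE  (GB(1024) - GUEST_RAM1_BASE) /* 1016GB, up to 1TB */

    /* 3GB + 1016GB = 1019GB, slightly less than a full TB. */
    _Static_assert(GUEST_RAM0_SIZE + GUEST_RAM1_SIZE == GB(1019),
                   "maximum guest RAM is 1019GB");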
Your commit messages have wrap damage when quoted. Can you make your
editor shrink them a bit ?
> -    const uint64_t ram0size = ramsize;
> +    const uint64_t ram0size =
> +        ramsize > GUEST_RAM0_SIZE ? GUEST_RAM0_SIZE : ramsize;
>      const uint64_t ram0end = GUEST_RAM0_BASE + ram0size;
> +    const uint64_t ram1size =
> +        ramsize > ram0size ? ramsize - ram0size : 0;
> +    const uint64_t ram1end = GUEST_RAM1_BASE + ram1size;
Why ram0 and ram1 rather than ram[0] and ram[1] ? If the latter,
> +    if ((rc = populate_guest_memory(dom,
> +                                    GUEST_RAM1_BASE >> XC_PAGE_SHIFT,
> +                                    ram1size >> XC_PAGE_SHIFT)))
> +        return rc;
this could perhaps be a loop.  (You would need

    const uint64_t guest_ram_size[2] = { GUEST_RAM_SIZES };

or, if you must,

    const uint64_t guest_ram_size[2] = { GUEST_RAM0_SIZE, GUEST_RAM1_SIZE };

or something.)
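For concreteness, a minimal sketch of such a loop, assuming a matching
guest_ram_base[] array and the ram0size/ram1size values computed in the
hunk quoted above:

    /* Sketch: populate both banks in one loop instead of two calls. */
    const uint64_t guest_ram_base[2] = { GUEST_RAM0_BASE, GUEST_RAM1_BASE };
    const uint64_t guest_ram_size[2] = { ram0size, ram1size };
    int i;

    for (i = 0; i < 2; i++)
    {
        if (guest_ram_size[i] == 0)
            continue;   /* nothing to populate in this bank */
        if ((rc = populate_guest_memory(dom,
                                        guest_ram_base[i] >> XC_PAGE_SHIFT,
                                        guest_ram_size[i] >> XC_PAGE_SHIFT)))
            return rc;
    }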
> +static int make_memory_node(libxl__gc *gc, void *fdt, uint64_t size)
> +{
> +    int res;
> +    /* This had better match libxc's arch_setup_meminit... */
> +    const uint64_t size0 = size > GUEST_RAM0_SIZE ? GUEST_RAM0_SIZE : size;
> +    const uint64_t size1 = size > GUEST_RAM0_SIZE ? size - size0 : 0;
Glrgk. Is it necessary to have this in two places ?
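One way to keep it in a single place might be a small shared helper along
these lines (hypothetical name and placement; it would need to live
somewhere visible to both libxc and libxl):

    /* Hypothetical helper: compute the split of ramsize across the two
     * banks once, so libxc's arch_setup_meminit and libxl's
     * make_memory_node cannot drift apart. */
    static void guest_ram_split(uint64_t ramsize,
                                uint64_t *ram0size, uint64_t *ram1size)
    {
        *ram0size = ramsize > GUEST_RAM0_SIZE ? GUEST_RAM0_SIZE : ramsize;
        *ram1size = ramsize - *ram0size;
    }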
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel