
Re: [Xen-devel] initial ballooning amount on HVM+PoD



On Fri, 2014-01-17 at 16:26 +0000, Jan Beulich wrote:
> >>> On 17.01.14 at 17:08, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> > On Fri, 2014-01-17 at 16:03 +0000, Jan Beulich wrote:
> >> max_pfn/num_physpages isn't that far off for guests with less than
> >> 4Gb; the number calculated from the PoD data is a little worse.
> > 
> > On ARM, RAM may not start at 0, so using max_pfn can be very
> > misleading and in practice causes ARM to balloon down to 0 as fast
> > as it can.
> 
> Ugly. Is that only due to the temporary workaround for there not
> being an IOMMU?

It's not to do with IOMMUs, no, and it isn't temporary.

Architecturally, on ARM it's not required for RAM to be at address 0,
and it is not uncommon for it to start at 1, 2 or 3GB (as a property of
the SoC design).

If you have 128M of RAM at 0x80000000-0x88000000 then max_pfn is 0x88000
but the target page count is just 0x8000; if current_pages is
initialised to max_pfn then the kernel immediately thinks it has to get
rid of 0x80000 pages.
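
To make the arithmetic concrete, here's a sketch (assuming 4K pages;
the variable names are illustrative, not the actual balloon driver
code):

    /* 128M of RAM at 0x80000000-0x88000000, 4K pages */
    unsigned long max_pfn      = 0x88000000UL >> PAGE_SHIFT;  /* 0x88000 */
    unsigned long target_pages = (128UL << 20) >> PAGE_SHIFT; /* 0x8000  */

    /* Initialising from max_pfn counts the hole below the start of
     * RAM as if it were populated memory... */
    unsigned long current_pages = max_pfn;                    /* 0x88000 */

    /* ...so the driver tries to balloon out
     * current_pages - target_pages == 0x80000 pages it never had. */

    /* Counting only pages actually backed by RAM avoids this: */
    current_pages = memblock_phys_mem_size() >> PAGE_SHIFT;   /* 0x8000  */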

> And short of the initial value needing to be architecture specific -
> can you see a calculation that would yield a decent result on ARM
> that would also be suitable on x86?

I previously had a patch to use memblock_phys_mem_size(), but when I saw
Boris switch to get_num_physpages() I thought that would be OK; I didn't
look into it very hard, though. Without checking, I suspect they return
pretty much the same thing, and so memblock_phys_mem_size() will have
the same issue you observed (which I confess I haven't yet gone back and
understood).
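
For reference, a rough sketch of why I'd expect them to agree (my
reading of the generic code, not verified on every architecture):

    /* get_num_physpages() sums node_present_pages over the online
     * nodes; memblock_phys_mem_size() sums the sizes of all
     * memblock.memory regions. Both count only pages actually backed
     * by RAM, skipping the hole below the base of RAM, so for the
     * 128M-at-0x80000000 example above: */
    unsigned long a = get_num_physpages();                    /* ~0x8000 */
    unsigned long b = memblock_phys_mem_size() >> PAGE_SHIFT; /*  0x8000 */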

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

