
Re: [Xen-devel] initial ballooning amount on HVM+PoD



On Mon, 2014-01-20 at 08:01 +0000, Jan Beulich wrote:
> >>> On 17.01.14 at 17:54, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> > On Fri, 2014-01-17 at 16:26 +0000, Jan Beulich wrote:
> >> >>> On 17.01.14 at 17:08, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> >> > On Fri, 2014-01-17 at 16:03 +0000, Jan Beulich wrote:
> >> >> max_pfn/num_physpages isn't that far off for guest with less than
> >> >> 4Gb, the number calculated from the PoD data is a little worse.
> >> > 
> >> > On ARM RAM may not start at 0 and so using max_pfn can be very
> >> > misleading and in practice causes arm to balloon down to 0 as fast as it
> >> > can.
> >> 
> >> Ugly. Is that only due to the temporary workaround for there not
> >> being an IOMMU?
> > 
> > It's not to do with IOMMUs, no, and it isn't temporary.
> > 
> > Architecturally on ARM it's not required for RAM to be at address 0, and
> > it is not uncommon for it to start at 1, 2 or 3GB (as a property of the
> > SoC design).
> > 
> > If you have 128M of RAM at 0x80000000-0x88000000 then max_pfn is 0x88000
> > but the target is just 0x8000 pages; if current_pages is initialised to
> > max_pfn then the kernel immediately thinks it has to get rid of 0x80000
> > pages.
> 
> And there is some sort of benefit from also doing this for virtual
> machines?

For dom0 the address space layout mirrors that of the underlying
platform.

For domU we mostly ended up reusing the layout of the platform we
happened to develop on as the virtual guest layout, without much
thought. That actually has some shortcomings, in that it quite
significantly limits the amount of RAM a guest can have below 4GB, so
in 4.5 we will probably change it.

But for domU, once we do device assignment, we may also want to do
something equivalent to the x86 "e820_host" option, which likewise
mirrors the underlying host address map into the guest.
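
For reference, on x86 that is just a flag in the guest config,
something like this (from memory; PV guests only at present):

    # xl.cfg: expose a copy of the host E820 memory map to the guest
    e820_host=1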

In any case I don't want to be encoding restrictions like "RAM starts at
zero" into ARM's guest ABI.

> 
> >> And short of the initial value needing to be architecture specific -
> >> can you see a calculation that would yield a decent result on ARM
> >> that would also be suitable on x86?
> > 
> > I previously had a patch to use memblock_phys_mem_size(), but when I saw
> > Boris switch to get_num_physpages() I assumed that would be OK; I didn't
> > look into it very hard. Without checking, I suspect they return pretty
> > much the same thing, so memblock_phys_mem_size() will have the same
> > issue you observed (which I confess I haven't yet gone back and
> > understood).
> 
> If there's no reserved memory in that range, I guess ARM might
> be fine as is.

Hrm, I'm not sure what a DTB /memreserve/ statement turns into wrt the
memblock stuff, or whether it is included in memblock_phys_mem_size()
-- I think that is still WIP upstream (i.e. the kernel currently
ignores such regions; IIRC there was a fix, but it was broken and
reverted to try again, and then I lost track).
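
(For anyone skimming the thread: the change under discussion amounts to
something like the following in drivers/xen/balloon.c. A sketch from
memory, not the exact committed hunk:

    /* Seed the balloon accounting with the number of present RAM
     * pages rather than max_pfn, so that a hole below the start of
     * RAM is not mistaken for pages the guest has to give back. */
    balloon_stats.current_pages = get_num_physpages();
)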

Ian.

