
Re: [Xen-devel] Proposed new "memory capacity claim" hypercall/feature



> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Subject: RE: Proposed new "memory capacity claim" hypercall/feature
> 
> >>> On 30.10.12 at 16:43, Dan Magenheimer <dan.magenheimer@xxxxxxxxxx> wrote:
> > With tmem, memory "owned" by domain (d.tot_pages) increases dynamically
> > in two ways: selfballooning and persistent puts (aka frontswap),
> > but is always capped by d.max_pages.  Neither of these communicates
> > to the toolstack.
> >
> > Similarly, tmem (or selfballooning) may be dynamically freeing up lots
> > of memory without communicating to the toolstack, which could result in
> > the toolstack rejecting a domain launch believing there is insufficient
> > memory.
> >
> > I am thinking the "claim" hypercall/subop eliminates these problems
> > and hope you agree!
> 
> With tmem being the odd one here, wouldn't it make more sense
> to force it into no-alloc mode (apparently not exactly the same as
> freezing all pools) for the (infrequent?) time periods of domain
> creation, thus not allowing the amount of free memory to drop
> unexpectedly? Tmem could, during these time periods, still itself
> internally recycle pages (e.g. fulfill a persistent put by discarding
> an ephemeral page).

Hi Jan --

Freeze has some unattractive issues that "claim" would solve
(see below).  Also, freeze (whether or not ephemeral pages are
recycled) only blocks allocations made on behalf of tmem; it
doesn't block allocations due to selfballooning (or manual
ballooning attempts by a guest user with root access).  I suppose
the tmem freeze implementation could be extended to also block all
non-domain-creation ballooning attempts, but I'm not sure whether
that's what you are proposing.
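
To make that distinction concrete, here is a toy model (plain C, not
Xen code; every name in it -- free_pages, tmem_frozen, tmem_alloc,
balloon_up -- is invented for illustration) of why a tmem-only freeze
doesn't keep free memory from shrinking while a domain is being built:

/* Toy model, not Xen code; all names are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>

static unsigned long free_pages = 100000;  /* host free memory, in pages */
static bool tmem_frozen;                   /* the proposed "no-alloc mode" */

/* A tmem put: this path honours the freeze. */
static bool tmem_alloc(unsigned long nr)
{
    if (tmem_frozen || nr > free_pages)
        return false;
    free_pages -= nr;
    return true;
}

/* A guest ballooning up: this path never looks at the freeze. */
static bool balloon_up(unsigned long nr)
{
    if (nr > free_pages)
        return false;
    free_pages -= nr;
    return true;
}

int main(void)
{
    tmem_frozen = true;  /* toolstack begins building a new domain */
    printf("tmem put succeeds?   %d\n", tmem_alloc(1000));   /* 0: frozen */
    printf("balloon-up succeeds? %d\n", balloon_up(50000));  /* 1: still drains memory */
    printf("free pages left:     %lu\n", free_pages);
    return 0;
}

The freeze only gates the first path; the second keeps draining the
pool regardless.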

To digress for a moment first, the original problem exists both in
non-tmem systems AND tmem systems.  It has been seen in the wild on
non-tmem systems.  I am involved with proposing a solution primarily
because, if the solution is designed correctly, it _also_ solves a
tmem problem.  (And as long as we have digressed, I believe it _also_
solves a page-sharing problem on non-tmem systems.)  That said,
here's the unattractive tmem freeze/thaw issue, first with
the existing freeze implementation.

Suppose you have a huge 256GB machine and you have already launched
a 64GB tmem guest "A".  The guest is idle for now, so it slowly
selfballoons down to maybe 4GB.  You then start to launch another
64GB guest "B" which, as we know, is going to take some time to
complete.  In the middle of launching "B", "A" suddenly gets very
active and needs to balloon up as quickly as possible, but it can't
balloon fast enough (or at all, if "frozen" as suggested), so it
starts swapping (and, thanks to Linux frontswap, the swapping tries
to go to hypervisor/tmem memory).  But ballooning and tmem are both
blocked, and so the guest swaps its poor little butt off even though
there's >100GB of free physical memory available.

Now let's add in your suggestion that a persistent put can be
fulfilled by discarding an ephemeral page.  I see two issues.
First, it requires the number of ephemeral pages available to be
larger than the number of persistent pages required; this may not
always be the case, though most of the time it will be.  Second,
the creation of the second domain may have been counting on using
some (or all) of those freeable pages, which have now been absorbed
by the first guest's persistent puts.  So I think "claim" is still
needed anyway.
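
To sketch the semantics I'm arguing for, here is another toy model
(again plain C, not a real implementation; all names -- claim_pages,
alloc_for, alloc_unclaimed, outstanding_claims -- are invented):
a claim reserves capacity at the start of domain creation, and every
other allocation path has to leave that headroom alone until the new
domain's pages are actually allocated.

/* Toy model of the proposed "claim", not a real implementation;
 * all names are invented for illustration only. */
#include <stdbool.h>
#include <stdio.h>

static unsigned long free_pages = 100000;   /* host free memory, in pages */
static unsigned long outstanding_claims;    /* reserved but not yet allocated */

struct domain { unsigned long claim; };

/* Toolstack: reserve the whole new domain's memory up front. */
static bool claim_pages(struct domain *d, unsigned long nr)
{
    if (outstanding_claims + nr > free_pages)
        return false;               /* fail fast: the launch would not fit */
    d->claim = nr;
    outstanding_claims += nr;
    return true;
}

/* Allocating for the claiming domain draws down its claim. */
static bool alloc_for(struct domain *d, unsigned long nr)
{
    if (nr > free_pages)
        return false;
    free_pages -= nr;
    if (d->claim) {
        unsigned long c = nr < d->claim ? nr : d->claim;
        d->claim -= c;
        outstanding_claims -= c;
    }
    return true;
}

/* Every other consumer (ballooning, tmem puts) must leave the
 * claimed headroom untouched. */
static bool alloc_unclaimed(unsigned long nr)
{
    if (nr + outstanding_claims > free_pages)
        return false;
    free_pages -= nr;
    return true;
}

int main(void)
{
    struct domain B = { 0 };
    printf("claim 64000 for B:  %d\n", claim_pages(&B, 64000)); /* 1: reserved */
    printf("A balloons 50000:   %d\n", alloc_unclaimed(50000)); /* 0: would invade the claim */
    printf("A balloons 30000:   %d\n", alloc_unclaimed(30000)); /* 1: fits beside the claim */
    printf("B's first 16000:    %d\n", alloc_for(&B, 16000));   /* 1: draws down the claim */
    printf("free %lu, still claimed %lu\n", free_pages, outstanding_claims);
    return 0;
}

The point of putting the check where allocations actually happen,
rather than in the toolstack, is that it is atomic with respect to
ballooning and tmem puts: one cheap up-front call, instead of the
toolstack trying to track every page while the launch is in flight.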

Comments?

Thanks,
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

