
Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of problem and alternate solutions

> From: Ian Campbell [mailto:Ian.Campbell@xxxxxxxxxx]
> Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of problem and alternate solutions
> On Tue, 2013-01-22 at 19:22 +0000, Dan Magenheimer wrote:
> > > I don't mean that you'd have to do all of that now, but if you were
> > > considering moving in that direction, an easy first step would be to add
> > > a hook allowing tmem to veto allocations for VMs under its control.
> > > That would let tmem have proper control over its client VMs (so it can
> > > solve the delayed-failure race for you), while at the same time being a
> > > constructive step towards a more complete memory scheduler.
> >
> > While you are using different words, you are describing what
> > tmem does today.  Tmem does have control and uses the existing
> > hypervisor mechanisms and the existing hypervisor lock for memory
> > allocation.  That's why it's so clean to solve the "delayed-failure
> > race" using the same lock.
> So it sounds like it would easily be possible to solve this issue via a
> tmem hook as Tim suggests?

Hmmm... I see how my reply might be interpreted that way,
so let me rephrase and add some different emphasis:

Tmem already has "proper" control over its client VMs:
the only constraints tmem needs to enforce are the
d->max_pages value set when the guest launched, and
total physical RAM.  It's no coincidence that these
are the same constraints enforced by the existing
hypervisor allocator mechanisms inside the existing
hypervisor locks.  And tmem is already a very large
step towards a complete memory scheduler.
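To make the two constraints concrete, here is a minimal sketch in C.  The field names d->max_pages and d->tot_pages follow the hypervisor's, but the struct, the free_pages counter, and allocation_allowed() are hypothetical simplifications for illustration, not the actual hypervisor code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, pared-down stand-in for the hypervisor's domain struct. */
struct domain {
    uint64_t max_pages;   /* cap set when the guest launched */
    uint64_t tot_pages;   /* pages currently allocated to the domain */
};

static uint64_t free_pages;   /* stand-in for total free physical RAM */

/* The only two constraints discussed above: the per-domain max_pages
 * cap and total physical RAM.  Returns 1 if allocating nr pages to d
 * would be permissible, 0 otherwise. */
static int allocation_allowed(const struct domain *d, uint64_t nr)
{
    if (d->tot_pages + nr > d->max_pages)
        return 0;             /* would exceed the launch-time cap */
    if (nr > free_pages)
        return 0;             /* not enough physical RAM left */
    return 1;
}
```

In the real hypervisor these checks happen inside the allocator's locks, which is the point made below about where the race must be solved.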

But tmem is just a user of the existing hypervisor
allocator and locks.  It doesn't pretend to be able to
supervise or control all allocations; that's the job of
the hypervisor allocator.  Tmem only provides services
to guests, some of which require allocating memory
to store data on behalf of the guest.  Some of those
allocations increase d->tot_pages and some do not.
(I can further explain why if you wish.)

So a clean solution to the "delayed-failure race" is
to use the same hypervisor allocator locks used by
all other allocations (including tmem and in-guest
ballooning).  That's exactly what XENMEM_claim_pages does.

Heh, I suppose you could rename XENMEM_claim_pages to be
XENMEM_tmem_claim_pages without changing the semantics
or any other code in the patch, and then this issue
would indeed be solved by a "tmem hook".


Xen-devel mailing list
