
Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of problem and alternate solutions

On 09/01/13 14:44, Dan Magenheimer wrote:
> From: Ian Campbell [mailto:Ian.Campbell@xxxxxxxxxx]
> Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
> problem and alternate
>
>> On Tue, 2013-01-08 at 19:41 +0000, Dan Magenheimer wrote:
>>> [1] A clarification: In the Oracle model, there is only maxmem;
>>> i.e. current_maxmem is always the same as lifetime_maxmem;
>> This is exactly what I am proposing that you change in order to
>> implement something like the claim mechanism in the toolstack.
>>
>> If your model is fixed in stone and cannot accommodate changes of this
>> type then there isn't much point in continuing this conversation.
>>
>> I think we need to agree on this before we consider the rest of your
>> mail in detail, so I have snipped all that for the time being.
> Agreed that it is not fixed in stone.  I should have said
> "In the _current_ Oracle model" and that footnote was only for
> comparison purposes.  So, please, do proceed in commenting on the
> two premises I outlined.
>>> i.e. d->max_pages is fixed for the life of the domain and
>>> only d->tot_pages varies; i.e. no intelligence is required
>>> in the toolstack.  AFAIK, the distinction between current_maxmem
>>> and lifetime_maxmem was added for Citrix DMC support.
>> I don't believe Xen itself has any such concept; the distinction is
>> purely internal to the toolstack, and which value it chooses to push down
>> to d->max_pages.
> Actually I believe a change was committed to the hypervisor specifically
> to accommodate this.  George mentioned it earlier in this thread...
> I'll have to dig to find the specific changeset, but the change allows
> the toolstack to reduce d->max_pages so that it is (temporarily)
> less than d->tot_pages.  Such a change would clearly be unnecessary
> if current_maxmem was always the same as lifetime_maxmem.

Not exactly. You could always change d->max_pages, so there was never a concept of "lifetime_maxmem" inside of Xen.

The change I think you're talking about is this. While you could always change d->max_pages, it used to be the case that if you tried to set d->max_pages to a value less than d->tot_pages, the call would fail with -EINVAL*. What this meant was that if you wanted to use d->max_pages to enforce a ballooning request, you had to do the following:
 1. Issue a balloon request to the guest
 2. Wait for the guest to successfully balloon down to the new target
 3. Set d->max_pages to the new target.

The waiting made the logic more complicated, and also introduced a race between steps 2 and 3. So the change was made so that Xen would tolerate setting max_pages to less than tot_pages. Then things looked like this:
 1. Set d->max_pages to the new target
 2. Issue a balloon request to the guest.

The new semantics guaranteed that the guest would not be able to "change its mind" and ask for the memory back after freeing it, without the toolstack needing to closely monitor the actual current usage.

But even before the change, it was still possible to change max_pages; so the change doesn't have any bearing on the discussion here.


* I may have some of the details incorrect (e.g., maybe it was d->tot_pages+something else, maybe it didn't return -EINVAL but failed in some other way), but the general idea is correct.

Xen-devel mailing list
