
Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of problem and alternate solutions



On Jan 2, 2013, at 4:43 PM, Dan Magenheimer <dan.magenheimer@xxxxxxxxxx> wrote:

>> From: Andres Lagar-Cavilla [mailto:andreslc@xxxxxxxxxxxxxx]
>> Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of 
>> problem and alternate
>> solutions
>> 
>> Hello,
> 
> Happy New Year, Andres!  (yay, I spelled it right this time! ;)

Heh, cheers!

> 
>> On Dec 20, 2012, at 11:04 AM, Tim Deegan <tim@xxxxxxx> wrote:
>> 
>>> I think the point Dan was trying to make is that if you use page-sharing
>>> to do overcommit, you can end up with the same problem that self-balloon
>>> has: guest activity might consume all your RAM while you're trying to
>>> build a new VM.
>>> 
>>> That could be fixed by a 'further hypervisor change' (constraining the
>>> total amount of free memory that CoW unsharing can consume).  I suspect
>>> that it can also be resolved by using d->max_pages on each shared-memory
>>> VM to put a limit on how much memory they can (severally) consume.
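
For concreteness, Tim's cap is a single call from the toolstack's point of
view. A minimal sketch, assuming the libxc wrapper xc_domain_setmaxmem()
(which drives XEN_DOMCTL_max_mem and hence d->max_pages; exact argument
types differ between libxc releases). cap_unshare_growth() and the headroom
policy are invented here purely for illustration:

    #include <xenctrl.h>

    /* Cap how much memory a shared-memory VM can pull back through CoW
     * unsharing: d->max_pages bounds tot_pages, so unsharing cannot grow
     * the domain past boot_kb + headroom_kb, however hot the guest gets. */
    static int cap_unshare_growth(xc_interface *xch, uint32_t domid,
                                  uint64_t boot_kb, uint64_t headroom_kb)
    {
        return xc_domain_setmaxmem(xch, domid, boot_kb + headroom_kb);
    }
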
>> 
>> To be completely clear: I don't think we need a separate allocation/list
>> of pages/foo to absorb CoW hits. I think the solution is using d->max_pages.
>> Sharing will hit that limit and then send a notification via the "sharing"
>> (which is actually an ENOMEM) mem event ring.
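
(Roughly, the dom0 side of that notification could look like the sketch
below. Illustrative only: the types and helpers are placeholders rather than
the real mem_event/libxc plumbing. The point is simply that the faulting
guest stays paused on the failed unshare until something frees host memory
and resumes it.)

    #include <stdint.h>

    typedef struct {
        uint64_t gfn;    /* guest frame whose CoW unshare hit ENOMEM */
        uint32_t vcpu;   /* vCPU paused on the fault */
    } sharing_req_t;

    /* Placeholders for the real sharing-ring plumbing. */
    sharing_req_t wait_for_sharing_enomem(uint32_t domid);
    int  make_host_room(uint64_t bytes);              /* swap, balloon, ... */
    void resume_vcpu(uint32_t domid, uint32_t vcpu);

    void sharing_enomem_loop(uint32_t domid)
    {
        for (;;) {
            sharing_req_t req = wait_for_sharing_enomem(domid);
            make_host_room(4096);          /* one page is enough to retry */
            resume_vcpu(domid, req.vcpu);  /* unshare is retried on resume */
        }
    }
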
> 
> And here is the very crux of our disagreement.
> 
> You say "I think the solution is using d->max_pages".  Unless
> I misunderstand completely, this means your model is what I've
> called the "Citrix model" (because Citrix DMC uses it), in which
> d->max_pages is dynamically adjusted regularly for each running
> guest based on external inferences by (what I have sarcastically
> called) a "omniscient toolstack".
> 
> In the Oracle model, d->max_pages is a fixed hard limit set when
> the guest is launched; only d->curr_pages dynamically varies across
> time (e.g. via in-guest self ballooning).
> 
> I reject the omniscient toolstack model as unimplementable [1]
> and, without it, I think you either do need a separate allocation/list,
> with all the issues that entails, or you need the proposed
> XENMEM_claim_pages hypercall to resolve memory allocation races
> (i.e. vs domain creation).
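
For reference, the claim-then-build flow being proposed would look roughly
like this from the toolstack side. A sketch only: xc_domain_claim_pages()
stands in for whatever wrapper the proposed XENMEM_claim_pages hypercall
ends up with, populate_domain_memory() for the usual
xc_domain_populate_physmap_exact() loop, and a zero-sized claim is assumed
to cancel an outstanding one.

    #include <xenctrl.h>

    /* Placeholder for the usual xc_domain_populate_physmap_exact() loop. */
    int populate_domain_memory(xc_interface *xch, uint32_t domid,
                               unsigned long nr_pages);

    static int build_with_claim(xc_interface *xch, uint32_t domid,
                                unsigned long nr_pages)
    {
        /* Atomically reserve the memory up front, or fail fast -- no other
         * allocator (self-ballooning guest, CoW unshare, ...) can race us
         * out of it between here and the populate calls. */
        if (xc_domain_claim_pages(xch, domid, nr_pages))
            return -1;               /* not enough memory, no half-built VM */

        int rc = populate_domain_memory(xch, domid, nr_pages);

        /* Drop whatever is left of the claim once allocation is done. */
        xc_domain_claim_pages(xch, domid, 0);
        return rc;
    }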

That pretty much ends the discussion. If you ask me below to reason within the
constraints your rejection imposes, then that's artificial reasoning. Your
rejection seems to stem from philosophical reasons rather than technical
limitations.

Look, your hypercall doesn't kill kittens, so that's about as far as I will go
in this discussion.

My purpose here was to a) dispel misconceptions about sharing, and b) see if
something better comes out of a discussion between all interested mm parties.
I'm satisfied as far as a) goes.

Thanks
Andres
> 
> So, please Andres, assume for a moment you have neither "the
> solution using d->max_pages" nor "a separate allocation/list".
> IIUC if one uses your implementation of page-sharing when d->max_pages
> is permanently fixed, it is impossible for a "CoW hit" to result in
> exceeding d->max_pages; and so the _only_ time a CoW hit would
> result in a toolstack notification and/or host swapping is if
> physical memory in the machine is fully allocated.  True?
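
(For completeness, the behaviour being described, boiled down to standalone
pseudo-C. The structures and helpers below are stripped-down stand-ins for
illustration, not the actual mem_sharing.c code.)

    #include <errno.h>
    #include <stddef.h>

    /* Stripped-down stand-ins for the hypervisor structures. */
    struct page_info;
    struct domain {
        unsigned long tot_pages;   /* pages currently owned by the domain */
        unsigned long max_pages;   /* fixed at launch in the Oracle model  */
    };

    struct page_info *alloc_host_page(struct domain *d);     /* placeholder */
    void notify_sharing_ring_enomem(struct domain *d, unsigned long gfn);
    void install_private_copy(struct domain *d, unsigned long gfn,
                              struct page_info *pg);

    /* Why a CoW hit cannot exceed a fixed max_pages: sharing lowered
     * tot_pages, and unsharing only raises it back toward max_pages, so
     * the limit check never fires; the only remaining failure is a
     * host-wide out-of-memory, which is what raises the ENOMEM sharing
     * event (and/or triggers host swapping). */
    int unshare_page(struct domain *d, unsigned long gfn)
    {
        if (d->tot_pages + 1 > d->max_pages)
            return -E2BIG;          /* unreachable when max_pages is fixed */

        struct page_info *pg = alloc_host_page(d);
        if (pg == NULL) {
            notify_sharing_ring_enomem(d, gfn);  /* toolstack must make room */
            return -ENOMEM;
        }

        install_private_copy(d, gfn, pg);        /* the actual copy-on-write */
        d->tot_pages++;
        return 0;
    }
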
> 
> Now does it make more sense what I and Konrad (and now Tim)
> are trying to point out?
> 
> Thanks,
> Dan
> 
> [1] excerpted from my own email at:
> http://lists.xen.org/archives/html/xen-devel/2012-12/msg00107.html 
> 
>> The last 4+ years of my life have been built on the fundamental
>> assumption that nobody, not even one guest kernel itself,
>> can adequately predict when memory usage is going to spike.
>> Accurate inference from an external entity across potentially dozens
>> of VMs is IMHO.... well... um... unlikely.  I could be wrong
>> but I believe, even in academia, there is no realistic research
>> solution proposed for this.  (If I'm wrong, please send a pointer.)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

