
Re: [Xen-devel] domain creation vs querying free memory (xend and xl)

On Oct 4, 2012, at 1:55 PM, Dan Magenheimer wrote:

>> From: Andres Lagar-Cavilla [mailto:andreslc@xxxxxxxxxxxxxx]
>> Subject: Re: [Xen-devel] domain creation vs querying free memory (xend and xl)
>> On Oct 4, 2012, at 1:18 PM, Dan Magenheimer wrote:
>>>> From: Andres Lagar-Cavilla [mailto:andreslc@xxxxxxxxxxxxxx]
>>>> Subject: Re: [Xen-devel] domain creation vs querying free memory (xend and xl)
>>>> On Oct 4, 2012, at 12:59 PM, Dan Magenheimer wrote:
>>> OK.  I _think_ the design I proposed helps in systems that are using
>>> page-sharing/host-swapping as well... I assume share-breaking just
>>> calls the normal hypervisor allocator interface to allocate a
>>> new page (if available)?  If you could review and comment on
>>> the design from a page-sharing/host-swapping perspective, I would
>>> appreciate it.
>> I think you will need to refine your notion of reservation. If you have
>> nominal RAM N and current RAM C, N >= C, it makes no sense to reserve N so
>> the VM later has room to occupy by swapping in, unsharing, or whatever --
>> then you are not over-committing memory.
>> To the extent that you want to facilitate VM creation, it does make sense
>> to reserve C and guarantee that.
>> Then it gets mm-specific. PoD has one way of dealing with the allocation
>> growth. xenpaging tries to stick to the watermark -- if something swaps in,
>> something else swaps out. And uncooperative balloons are stymied by xapi
>> using d->max_pages.
>> This is why I believe you need to solve the problem of initial reservation,
>> and the problem of handing off to the right actor. And then xl need not
>> care any further.
>> Andres
> I think we may be saying the same thing, at least in the context
> of the issue I am trying to solve (which, admittedly, may be
> a smaller part of a bigger issue, and we should attempt to ensure
> that the solution to the smaller part is at least a step in the
> right direction for the bigger issue).  And I am trying to
> solve the mechanism problem only, not the policy, which, I agree, is
> mm-specific.
> The core problem, as I see it, is that there are multiple consumers of
> memory, some of which may be visible to xl and some of which are
> not.  Ultimately, the hypervisor is asked to provide memory
> and will return failure if it can't, so the hypervisor is the
> final arbiter.
> When a domain is created, we'd like to ensure there is enough memory
> for it to "not fail".  But when the toolstack asks for memory to
> create a domain, it asks for it "piecemeal".  I'll assume that
> the toolstack knows how much memory it needs to allocate to ensure
> the launch doesn't fail... my solution is that it asks for that
> entire amount of memory at once as a "reservation".  If the
> hypervisor has that much memory available, it returns success and
> must behave as if the memory has been already allocated.  Then,
> later, when the toolstack is happy that the domain did successfully
> launch, it says: "remember that reservation? Any memory that was
> reserved but not yet allocated no longer needs to be reserved -- you
> can unreserve it."
> In other words, between reservation and unreserve, there is no
> memory overcommit for that domain.  Once the toolstack does
> the unreserve, its memory is available for overcommit mechanisms.
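The reserve/allocate/unreserve accounting described above could be modeled
roughly as follows. This is only an illustrative sketch: none of these names
exist in Xen, libxl, or xl, and the real hypervisor allocator is far more
involved. It only captures the claim that, between reserve and unreserve,
reserved-but-unallocated memory behaves as if already allocated:

```python
class Arbiter:
    """Toy stand-in for the hypervisor's final-arbiter role (hypothetical)."""

    def __init__(self, free_pages):
        self.free = free_pages   # pages neither allocated nor reserved
        self.reserved = {}       # domid -> pages reserved but not yet allocated

    def reserve(self, domid, pages):
        """Reserve the whole launch amount at once; fail if it won't fit."""
        if pages > self.free:
            return False         # the hypervisor would return failure here
        self.free -= pages
        self.reserved[domid] = pages
        return True

    def allocate(self, domid, pages):
        """Piecemeal allocations during launch draw down the reservation first."""
        avail = self.reserved.get(domid, 0)
        from_reservation = min(pages, avail)
        remainder = pages - from_reservation
        if remainder > self.free:
            return False
        self.reserved[domid] = avail - from_reservation
        self.free -= remainder
        return True

    def unreserve(self, domid):
        """After a successful launch, release any still-unallocated reservation."""
        self.free += self.reserved.pop(domid, 0)
```

In this model a second domain's reserve fails while the first sits between
reserve and unreserve, even if the first never allocates its full amount --
which is exactly the "no overcommit for that domain" window being proposed.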

I think that will be fragile. Suppose you have a 16 GiB domain and an 
overcommit mechanism that allows you to start the VM with 8 GiB -- a 
straightforward scenario with xen-4.2 and a combination of PoD and ballooning. 
Now suppose you have 14 GiB of RAM free in the system. Why should creation of 
that domain fail?
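To put numbers on that objection (purely illustrative, taken from the
scenario above): reserving the nominal size up front rejects a launch that
only needs the current allocation to succeed.

```python
free_gib = 14      # host memory currently free
nominal_gib = 16   # the domain's maxmem
current_gib = 8    # what PoD plus ballooning actually needs at launch

# Reserving nominal RAM up front refuses the launch...
nominal_fits = nominal_gib <= free_gib   # False: 16 > 14
# ...even though the launch itself would have fit comfortably.
current_fits = current_gib <= free_gib   # True: 8 <= 14
```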


> Not sure if that part was clear: it's my intent that unreserve occur
> soon after the domain is launched, _not_, for example, when the domain
> is shut down.  What I don't know is if there is a suitable point
> in the launch when the toolstack knows it can do the "release"...
> that may be the sticking point and may be mm-specific.
> Thanks,
> Dan

Xen-devel mailing list


