
Re: [Xen-devel] Proposed new "memory capacity claim" hypercall/feature



> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Subject: RE: Proposed new "memory capacity claim" hypercall/feature
> 
> >>> On 07.11.12 at 23:17, Dan Magenheimer <dan.magenheimer@xxxxxxxxxx> wrote:
> > It appears that the attempt to use 2MB and 1GB pages is done in
> > the toolstack, and if the hypervisor rejects it, toolstack tries
> > smaller pages.  Thus, if physical memory is highly fragmented
> > (few or no order>=9 allocations available), this will result
> > in one hypercall per 4k page so a 256GB domain would require
> > 64 million hypercalls.  And, since AFAICT, there is no sane
> > way to hold the heap_lock across even two hypercalls, speeding
> > up the in-hypervisor allocation path, by itself, will not solve
> > the TOCTOU race.
> 
> No, even in the absence of large pages, the tool stack will do 8M
> allocations, just without requesting them to be contiguous.

Rats, you are right (as usual).  My debug code was poorly
placed and missed this important point.

So ignore the huge-number-of-hypercalls point, and I think we
return to: what is an upper time bound for holding the heap_lock,
and, for an arbitrary-sized domain in an arbitrarily-fragmented
system, can the page allocation code be made fast enough to
fit within that bound?

I agree that if the page allocation code can be made fast
enough for the heap_lock to be held throughout, that is a better
solution than "claim".  I am just skeptical that, in
the presence of those two "arbitraries", it is possible.

So I will proceed with more measurements before prototyping
the "claim" stuff.

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

