
Re: [Xen-devel] [RFC/PATCH v2] XENMEM_claim_pages (subop of existing) hypercall



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Thursday, November 15, 2012 5:39 AM
> To: Ian Campbell; Dan Magenheimer
> Cc: xen-devel@xxxxxxxxxxxxx; Dave McCracken; Konrad Wilk; Zhigang Wang; Keir 
> (Xen.org); Tim (Xen.org)
> Subject: Re: [Xen-devel] [RFC/PATCH v2] XENMEM_claim_pages (subop of 
> existing) hypercall
> 
> >>> On 15.11.12 at 13:25, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> > Also doesn't this fail to make any sort of guarantee if you are building
> > a 32 bit PV guest, since they require memory under a certain host
> > address limit (160GB IIRC)?
> 
> This case is unreliable already, and has always been (I think we
> have a tools side hack in some of our trees in an attempt to deal
> with that), when ballooning is used to get at the memory, or
> when trying to start a 32-bit guest after having run 64-bit ones
> exhausting most of memory, and having terminated an early
> created one (as allocation is top down, ones created close to
> exhaustion, i.e. later, would eat up that "special" memory at
> lower addresses).
> 
> So this new functionality "only" makes a bad situation worse
> (which isn't meant to say I wouldn't prefer to see it get fixed).

Hmmm... I guess I don't see how claim makes the situation worse.
Well, maybe a few microseconds worse.

Old model:
(1) Allocate a huge number of pages

New model:
(1) Claim a huge number of pages.  If successful...
(2) Allocate that huge number of pages

In either case, the failure conditions are the same
except that the claim mechanism checks one of the
failure conditions sooner.

Or am I misunderstanding?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

