
Re: [Xen-devel] tmem ephemeral page discarding



> From: Konrad Rzeszutek Wilk
> Sent: Monday, June 03, 2013 6:22 AM
> To: James Harper
> Cc: xen-devel@xxxxxxxxxxxxx; Dan Magenheimer
> Subject: Re: [Xen-devel] tmem ephemeral page discarding
> 
> On Sun, Jun 02, 2013 at 04:06:18AM +0000, James Harper wrote:
> > How does Xen manage ephemeral pages? Is there a way to manage the
> > priority of pages in the ephemeral store? In my use case, I would
> > prefer that an old page was automatically discarded to make room for
> > a new page, and that pages that have already been retrieved are more
> > likely to be discarded than a page that has not yet been retrieved.
> 
> In other words, an LRU system with bounds. Right now the bounds are the
> amount of memory available, and when Xen needs more memory the bounds
> get decreased - which will discard the least used ephemeral pages.

It sounds like Xen tmem matches your use case.  Ephemeral pages that have
been retrieved are actually removed from tmem, i.e. ephemeral gets are
"destructive": an ephemeral get implies a flush.  (Note: this isn't true
for shared ephemeral pages, but I don't think you care about clustered
filesystems, so ignore that case for now.)
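
To make "get implies flush" concrete, here is a rough client-side sketch.
The wrapper names (tmem_put_page/tmem_get_page) are made up for
illustration and are not the actual frontend API; assume they issue the
TMEM_PUT_PAGE/TMEM_GET_PAGE ops against an ephemeral (non-persistent,
non-shared) pool.

    /* Illustrative sketch only: hypothetical wrappers around the
     * TMEM_PUT_PAGE / TMEM_GET_PAGE ops for an ephemeral pool. */
    #include <stdint.h>
    #include <string.h>

    int tmem_put_page(int pool, uint64_t oid, uint32_t index, const void *page);
    int tmem_get_page(int pool, uint64_t oid, uint32_t index, void *page);

    void cache_then_read_back(int pool, void *page_4k)
    {
        char buf[4096];

        /* Cache a clean page; Xen may discard it at any time. */
        tmem_put_page(pool, /*oid*/ 42, /*index*/ 0, page_4k);

        /* A successful get returns the data *and* removes the page from
         * the pool (get implies flush), so a second get of the same
         * (oid, index) will simply miss. */
        if (tmem_get_page(pool, 42, 0, buf) == 0)
            memcpy(page_4k, buf, sizeof(buf));
    }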

And ephemeral pages (as Konrad replied) are kept in an LRU queue across all
(tmem-aware) guests: when Xen needs memory, the LRU ephemeral page across
all guests is discarded.  Or, if a guest's weight limit is exceeded, the
LRU ephemeral page belonging to that guest is discarded instead.
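
Roughly, the victim selection amounts to something like this (hypothetical
names and data structures, not the actual Xen code):

    #define MAX_CLIENTS 256                 /* arbitrary for this sketch */

    struct eph_page {
        struct eph_page *lru_prev, *lru_next;
        int client_id;
        /* ... page payload ... */
    };

    /* One global LRU across all tmem-aware guests, plus a per-client LRU. */
    static struct eph_page *global_lru_tail;              /* least recently used */
    static struct eph_page *client_lru_tail[MAX_CLIENTS];

    static struct eph_page *pick_victim(int client_over_weight)
    {
        /* A guest that exceeded its weight share gives up its own LRU page. */
        if (client_over_weight >= 0 && client_lru_tail[client_over_weight])
            return client_lru_tail[client_over_weight];

        /* Otherwise, when Xen itself needs memory, discard the LRU
         * ephemeral page across all guests. */
        return global_lru_tail;
    }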

> There is a tmem subop to actually set the cap (TMEMC_SET_CAP). However,
> I am not seeing anybody using it in the code. Hmmm

See the discussion of "Weights and caps" in docs/misc/tmem-internals.html
(or on the web at
https://oss.oracle.com/projects/tmem/dist/documentation/internals/xen4-internals-v01.html ).

The current (unimplemented) cap is per-guest, but I never found a good use
model for it, so I never implemented it beyond the toolstack control
interface.  Caps exist only because "weights+caps" make sense for a CPU
scheduler, and I thought both might also make sense for a "memory
scheduler".

If you have a use model for it (or for an across-all-guests cap), it
shouldn't be hard to implement.
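
For example, an across-all-guests cap could be enforced in the ephemeral
put path along these lines (all names below are made up for illustration,
not existing Xen symbols):

    static unsigned long eph_pages_total;   /* ephemeral pages currently cached */
    static unsigned long eph_cap;           /* 0 == uncapped */

    /* Hypothetical helper: frees the global LRU ephemeral page;
     * returns 0 when there is nothing left to evict. */
    int evict_lru_page(void);

    /* Called before accepting a new ephemeral put. */
    static int ephemeral_put_allowed(void)
    {
        if (eph_cap == 0)
            return 1;                       /* no cap configured */
        while (eph_pages_total >= eph_cap)
            if (!evict_lru_page())
                return 0;                   /* nothing left to evict */
        return 1;
    }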

Hope that helps!
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

