
Re: [Xen-devel] freemem-slack and large memory environments

On Tue, Feb 10, 2015 at 02:34:27PM -0700, Mike Latimer wrote:
> On Monday, February 09, 2015 06:27:54 PM Mike Latimer wrote:
> > While testing commit 2563bca1, I found that libxl_get_free_memory returns 0
> > until there is more free memory than required for freemem-slack. This means
> > that during the domain creation process, freed memory is first set aside for
> > freemem-slack, then marked as truly free for consumption.
> > 
> > On machines with large amounts of memory, freemem-slack can be very high
> > (26GB on a 2TB test machine). If freeing this memory takes more time than
> > allowed during domain startup, domain creation fails with ERROR_NOMEM.
> > (Commit 2563bca1 doesn't help here, as free_memkb remains 0 until
> > freemem-slack is satisfied.)
> > 
> > There is already a 15% limit on the size of freemem-slack (commit a39b5bc6),
> > but this does not take into consideration very large memory environments
> > (26GB is only 1.2% of 2TB), where this limit is never hit.
> > 
> > It seems that there are two approaches to resolve this:
> > 
> >  - Introduce a hard limit on freemem-slack to avoid unnecessarily large
> > reservations
> >  - Increase the retry count during domain creation to ensure enough time is
> > set aside for any cycles spent freeing memory for freemem-slack (on the test
> > machine, doubling the retry count to 6 was the minimum required)
> > 
> > Which is the best approach (or did I miss something)?
> Sorry - forgot to CC relevant maintainers.

Oops, I replied to your other email before looking at this one. Sorry.
26GB out of 2TB is overkill IMHO. And the 15% limit dates back to 2010,
which a) is just empirical, and b) doesn't take into account large systems.

I think we might be able to do both, introducing a hard limit as well as
tweaking the retry count (which function are you referring to?).
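For the hard-limit side, something along these lines might be enough. This is
purely an illustrative sketch, not actual libxl code: the helper name, the
FREE_MEM_SLACK_HARD_CAP_KB constant and the 16GB figure are all invented for
the example. The idea is simply to keep the existing 15% proportional cap and
bound it by an absolute ceiling so very large hosts don't reserve tens of GB
of slack:

/* Illustrative only -- not libxl code.  calc_free_mem_slack_kb() and the
 * 16GB ceiling are made up; the real calculation lives in libxl's dom0
 * memory info path. */
#include <stdio.h>
#include <stdint.h>

#define FREE_MEM_SLACK_PERCENT      15                     /* existing cap */
#define FREE_MEM_SLACK_HARD_CAP_KB  (16ULL * 1024 * 1024)  /* hypothetical 16GB ceiling */

static uint64_t calc_free_mem_slack_kb(uint64_t total_memkb,
                                       uint64_t dom0_current_memkb)
{
    uint64_t slack = total_memkb - dom0_current_memkb;
    uint64_t pct_cap = total_memkb / 100 * FREE_MEM_SLACK_PERCENT;

    if (slack > pct_cap)
        slack = pct_cap;                     /* existing 15% limit */
    if (slack > FREE_MEM_SLACK_HARD_CAP_KB)
        slack = FREE_MEM_SLACK_HARD_CAP_KB;  /* proposed absolute ceiling */
    return slack;
}

int main(void)
{
    /* 2TB host with dom0 holding all but 26GB: today the slack would come
     * out at 26GB, here it gets clamped to the 16GB ceiling. */
    uint64_t total = 2048ULL * 1024 * 1024;        /* 2TB in KB */
    uint64_t dom0  = total - 26ULL * 1024 * 1024;  /* 26GB gap */
    printf("slack = %llu KB\n",
           (unsigned long long)calc_free_mem_slack_kb(total, dom0));
    return 0;
}

Whatever ceiling we pick would only bound the reservation; the retry count
question is separate, since that governs how long domain creation waits for
the freed memory to actually show up.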


> -Mike
