
Re: [Xen-devel] can not use all available memory



On Mon, 2012-11-26 at 21:47 +0000, Dan Magenheimer wrote:
> > From: Tim Deegan [mailto:tim@xxxxxxx]
> > Subject: Re: [Xen-devel] can not use all available memory
> > 
> > At 12:37 -0800 on 26 Nov (1353933449), Dan Magenheimer wrote:
> > > > > I could be wrong (and I am confident someone will correct me if I am) 
> > > > > but
> > > > > I think this is because the Citrix memory model assumes there is an
> > > > > inference-driven policy engine for load-balancing memory across 
> > > > > competing
> > > > > virtual machines ("squeezed").  I suspect squeezed returns unallocated
> > > > > xen "free" memory to dom0.
> > >
> > > I forgot... it is called Dynamic Memory Control (DMC), not squeezed
> > > in the XenServer product.
> > 
> > AFAIK XenServer uses dom0_mem= and doesn't balloon dom0 after boot time.
> > The idea of ballooning all free memory into dom0 is a xl-ism, inherited
> > from xend, and not really a "Citrix" one.  It's useful if you've
> > installed xen on a machine where dom0 is otherwise your main OS, but not
> > particularly for a dedicated platform.
> 
> "inherited from xend"... was the autoballoon default the same in xend?
> I don't recall ever turning it off manually and, when testing tmem,
> I'm sure I would have had to.  Or maybe xend did use hypervisor free
> memory before trying to autoballoon dom0?

http://wiki.xen.org/wiki/XenBestPractices#Xen_dom0_dedicated_memory_and_preventing_dom0_memory_ballooning
indicates that you should have been using "(enable-dom0-ballooning no)"
when using dom0_mem= with xend. Perhaps the failure case is not as bad
with xend as with xl, though, possibly because xend has a central daemon
which lets it make different choices.
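For the archives, a rough sketch of the two setups being compared. The
option names come from memory and the wiki page above, and the memory
values are purely illustrative, so treat this as a sketch to check
against your version's documentation rather than an authoritative
recipe:

    # Xen hypervisor command line (e.g. in the grub entry):
    # fix dom0's allocation at boot instead of giving it all host memory
    dom0_mem=1024M,max:1024M

    # xend: /etc/xen/xend-config.sxp -- stop xend ballooning dom0 down
    (enable-dom0-ballooning no)
    (dom0-min-mem 1024)

    # xl: /etc/xen/xl.conf -- the equivalent knob for the xl toolstack
    autoballoon=0

With either toolstack the idea is the same: once dom0's size is pinned
with dom0_mem=, autoballooning should be switched off so the toolstack
doesn't try to reclaim memory from an already-shrunk dom0 when creating
guests.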

Ian.

