
Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning



Jan Beulich wrote:
On 10.12.13 at 15:01, James Dingwall <james.dingwall@xxxxxxxxxxx> wrote:
Jan Beulich wrote:
On 09.12.13 at 18:50, James Dingwall <james.dingwall@xxxxxxxxxxx> wrote:
Since 3.11 I have noticed that the OOM killer quite frequently triggers
in my Xen guest domains which use ballooning to increase/decrease their
memory allocation according to their requirements.  One example domain I
have has a maximum memory setting of ~1.5GB but usually idles at
~300MB; it is also configured with 2GB of swap, which is almost 100% free.

# free
             total       used       free     shared    buffers     cached
Mem:        272080     248108      23972          0       1448      63064
-/+ buffers/cache:     183596      88484
Swap:      2097148          8    2097140

There is plenty of available free memory in the hypervisor to balloon to
the maximum size:
# xl info | grep free_mem
free_memory            : 14923
But you don't tell us how you trigger the process of re-filling
memory. Yet that's the crucial point here; the fact that there is
enough memory available in the hypervisor is only a necessary
prerequisite.
My guest systems are Gentoo based and most often the problems happen
while running emerge (Python script).  The example trace was taken from
an emerge command launched via a cron script which runs emerge --sync ;
emerge --deep --newuse -p -u world.  Almost all the other times I have
seen the behaviour have been during emerge --deep --newuse -u world
runs, most often during the build of large packages such as kdelibs,
seamonkey or libreoffice.  However, occasionally I have seen it with
smaller builds or during the package merge phase, where files are indexed
and copied into the live filesystem.
I don't think I understand what you're trying to tell me, or in what
way this answers the question.

I have done some testing with the program below which successfully fills
all memory and swap before being killed.  One thought I had was that
perhaps there was some issue around a balloon down/up when the rate at
which memory is being requested and released is high.  I will try
another program with a random pattern of malloc/free to see if I can
make a test case to help with a bisect.
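
A minimal sketch of such an allocate-and-touch test (illustrative only,
not the original program) might look like:

/* Illustrative sketch, not the original test program.  Allocates 1 MiB
 * blocks and writes one byte per page so the memory is actually
 * populated, until malloc fails or the OOM killer intervenes. */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK (1024 * 1024)   /* 1 MiB per allocation */
#define PAGE  4096            /* assumed page size */

int main(void)
{
    size_t mib = 0;

    for (;;) {
        char *p = malloc(CHUNK);
        if (!p) {
            fprintf(stderr, "malloc failed after %zu MiB\n", mib);
            return 1;
        }
        /* Touch every page so the allocation is really backed. */
        for (size_t off = 0; off < CHUNK; off += PAGE)
            p[off] = 1;
        mib++;
        if (mib % 100 == 0)
            printf("allocated %zu MiB\n", mib);
    }
}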
Again - the question is not how to drive your system out of
memory, but what entity (if any) it is that is supposed to react on
memory becoming tight and triggering the balloon driver to re-
populate pages. One such thing could be xenballoond (which is
gone in the unstable tree, i.e. what is to become 4.4).

Jan
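
(For context on the point above: whatever that entity is, re-populating
memory ultimately means raising the balloon driver's target.  A minimal
sketch, assuming the guest kernel exposes the usual writable
/sys/devices/system/xen_memory/xen_memory0/target_kb node:)

/* Hedged sketch of a xenballoond-like action: ask the Xen balloon
 * driver to re-populate memory by raising its target.  Assumes the
 * target_kb sysfs attribute is present and writable (value in KiB). */
#include <stdio.h>
#include <stdlib.h>

#define TARGET_KB "/sys/devices/system/xen_memory/xen_memory0/target_kb"

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <new-target-kib>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(TARGET_KB, "w");
    if (!f) {
        perror(TARGET_KB);
        return 1;
    }
    /* The balloon driver then claims pages from the hypervisor
     * (up to the domain's maxmem) until the new target is reached. */
    fprintf(f, "%lu\n", strtoul(argv[1], NULL, 10));
    fclose(f);
    return 0;
}

(From dom0 the equivalent is "xl mem-set <domain> <size>", which updates
memory/target in xenstore and lets the guest's balloon driver react.)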
Sorry, I misunderstood the question.  I'm just relying on the built-in kernel behaviour to balloon up/down as and when memory is required; I wasn't aware there were multiple ways this could be achieved.  I'm not running anything in user space, so does tmem answer the question?  When I modprobe tmem the kernel logs:

[   18.518714] xen:tmem: frontswap enabled, RAM provided by Xen Transcendent Memory
[   18.518739] xen:tmem: cleancache enabled, RAM provided by Xen Transcendent Memory
[   18.518741] xen_selfballoon: Initializing Xen selfballooning driver
[   18.518742] xen_selfballoon: Initializing frontswap selfshrinking driver

The kernel command line for dom0 and domU contains "tmem"; the Xen command line contains "tmem tmem_dedup=on tmem_compress=on".
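
(A quick way to confirm that the selfballoon driver is what is adjusting
the target is to dump the balloon/selfballoon state from sysfs.  A hedged
sketch, assuming the attribute names exported by the xen-balloon and
xen-selfballoon drivers on this kernel; missing knobs are reported as
such:)

/* Hedged sketch: dump current balloon/selfballoon state.  The attribute
 * names below are assumptions and may differ between kernel versions. */
#include <stdio.h>

static void show(const char *name)
{
    char path[256], buf[64];
    snprintf(path, sizeof(path),
             "/sys/devices/system/xen_memory/xen_memory0/%s", name);
    FILE *f = fopen(path, "r");
    if (!f) {
        printf("%-28s <not present>\n", name);
        return;
    }
    if (fgets(buf, sizeof(buf), f))
        printf("%-28s %s", name, buf);
    fclose(f);
}

int main(void)
{
    show("target_kb");               /* current balloon target (KiB)       */
    show("info/current_kb");         /* current allocation (KiB), assumed  */
    show("selfballooning");          /* 1 if selfballooning is active      */
    show("frontswap_selfshrinking"); /* 1 if frontswap shrinking is active */
    return 0;
}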

James



--

*James Dingwall*

Script Monkey




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

