
Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning



On 12/14/2013 12:59 AM, James Dingwall wrote:
> Bob Liu wrote:
>> On 12/12/2013 12:30 AM, James Dingwall wrote:
>>> Bob Liu wrote:
>>>> On 12/10/2013 11:27 PM, Konrad Rzeszutek Wilk wrote:
>>>>> On Tue, Dec 10, 2013 at 02:52:40PM +0000, James Dingwall wrote:
>>>>>> Konrad Rzeszutek Wilk wrote:
>>>>>>> On Mon, Dec 09, 2013 at 05:50:29PM +0000, James Dingwall wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Since 3.11 I have noticed that the OOM killer quite frequently
>>>>>>>> triggers in my Xen guest domains, which use ballooning to
>>>>>>>> increase/decrease their memory allocation according to their
>>>>>>>> requirements.  One example domain has a maximum memory setting
>>>>>>>> of ~1.5GB but usually idles at ~300MB; it is also configured
>>>>>>>> with 2GB of swap, which is almost 100% free.
>>>>>>>>
>>>>>>>> # free
>>>>>>>>              total       used       free     shared    buffers     cached
>>>>>>>> Mem:        272080     248108      23972          0       1448      63064
>>>>>>>> -/+ buffers/cache:     183596      88484
>>>>>>>> Swap:      2097148          8    2097140
>>>>>>>>
>>>>>>>> There is plenty of available free memory in the hypervisor to
>>>>>>>> balloon to the maximum size:
>>>>>>>> # xl info | grep free_mem
>>>>>>>> free_memory            : 14923
>>>>>>>>
>>>>>>>> An example trace (they are always the same) from the OOM killer in
>>>>>>>> 3.12 is added below.  So far I have not been able to reproduce this
>>>>>>>> at will, so it is difficult to start bisecting to see whether a
>>>>>>>> particular change introduced this.  However, it does seem that the
>>>>>>>> behaviour is wrong because a) ballooning could give the guest more
>>>>>>>> memory, and b) there is plenty of swap available which could be used
>>>>>>>> as a fallback.
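
For what it's worth, a quick way to check point a) from inside the guest
is to compare the balloon driver's current allocation with its target;
the sysfs paths below are the ones the Xen balloon driver normally
exposes, so treat the exact names as an assumption for your kernel:

# cat /sys/devices/system/xen_memory/xen_memory0/info/current_kb
# cat /sys/devices/system/xen_memory/xen_memory0/target_kb
# xl list <domU-name>        <- from dom0, shows the current allocation

If current_kb sits well below the domain's maxmem while the guest is
under memory pressure, the balloon is not growing even though it could.
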
>>>>> Keep in mind that with tmem, swap is no longer really swap. Heh, that
>>>>> sounds odd - but basically, pages destined for swap end up going into
>>>>> the tmem code, which pipes them up to the hypervisor.
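
A rough way to see that diversion happening, assuming debugfs is mounted
at /sys/kernel/debug and the usual frontswap counter names:

# grep -H . /sys/kernel/debug/frontswap/*
# grep pswpout /proc/vmstat

If succ_stores keeps climbing while pswpout barely moves, the pages the
guest "swaps out" are going to tmem rather than to the swap device.
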
>>>>>
>>>>>>>> If other information could help or there are more tests that I
>>>>>>>> could
>>>>>>>> run then please let me know.
>>>>>>> I presume you have enabled 'tmem' both in the hypervisor and in
>>>>>>> the guest, right?
>>>>>> Yes, domU and dom0 both have the tmem module loaded, and "tmem
>>>>>> tmem_dedup=on tmem_compress=on" is given on the Xen command line.
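
(In case it helps anyone else hitting this, a quick way to confirm tmem
is active on both sides - nothing below is specific to this setup:

# xl dmesg | grep -i tmem     <- from dom0, hypervisor side
# dmesg | grep -i tmem        <- inside the guest
# lsmod | grep tmem           <- if the guest driver is built as a module

plus the frontswap counters mentioned further down.)
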
>>>>> Excellent. The odd thing is that your swap is not used that much, but
>>>>> it should be (as that is part of what the self-balloon is supposed to
>>>>> do).
>>>>>
>>>>> Bob, you had a patch for the logic of how self-balloon is supposed to
>>>>> account for the slab - would this be relevant to this problem?
>>>>>
>>>> Perhaps - I have attached the patch.
>>>> James, could you please apply it and try your application again? You
>>>> will have to rebuild the guest kernel.
>>>> Oh, and also take a look at whether frontswap is in use; you can check
>>>> it by watching "cat /sys/kernel/debug/frontswap/*".
>>> I have tested this patch with a workload where I have previously seen
>> Thank you so much.
>>
>>> failures and so far so good.  I'll try to keep a guest with it stressed
>>> to see if I do get any problems.  I don't know if it is expected, but I
>>> did note that the system running with this patch + selfshrink has a
>>> kswapd0 run time of ~30mins.  A guest without it and selfshrink disabled
>> Could you run the test again with this patch but with selfshrink disabled,
>> and compare the run time of kswapd0?
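
If it saves a reboot: selfshrinking can usually be flipped at runtime
through sysfs - the knob name below is from memory of the selfballoon
driver, so please double-check it on your tree:

# echo 0 > /sys/devices/system/xen_memory/xen_memory0/frontswap_selfshrinking

(echo 1 re-enables it; there is a matching "selfballooning" knob next to
it for the balloon side.)
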
> Here are the results for two VMs, with and without the patch.  They are
> running on the same dom0, have comparable Xen configs, and were
> restarted at the same point.
> 

Sorry for the late response.

> With patch:
> 
> # uptime ; ps -ef | grep [k]swapd0
>  14:58:55 up  6:32,  1 user,  load average: 0.00, 0.01, 0.05
> root       310     2  0 08:26 ?        00:00:01 [kswapd0]
> ### BUILD GLIBC
> # ps -ef | grep [k]swapd0
> root       310     2  0 08:26 ?        00:00:16 [kswapd0]
> ### BUILD KDELIBS
> # ps -ef | grep [k]swapd0
> root       310     2  1 08:26 ?        00:09:15 [kswapd0]

There is still a longer kswapd run time, which is strange.
In theory, with this patch more memory will be reserved for the guest
OS and, as a result, the kswapd time should be shorter.
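
One thing that might show where the time goes is sampling the balloon
target next to the kswapd counters during the build - the sysfs path is
the usual Xen balloon one, so treat it as an assumption:

# while sleep 60; do date; cat /sys/devices/system/xen_memory/xen_memory0/target_kb; grep -E 'pgscan_kswapd|pgsteal_kswapd' /proc/vmstat; done

If the target stays low while pgscan_kswapd keeps climbing, the extra
reservation from the patch may not actually be reaching the guest.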

-- 
Regards,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
