[Xen-devel] Self-ballooning question / cache issue
Hello,
I have been testing autoballooning on a production Xen system today
(with cleancache + frontswap on Xen-provided tmem). For most of the
idle or CPU-centric VMs it seems to work just fine.
However, on one of the web-serving VMs there is also a cron job running
every few minutes that walks a rather large directory (and this
directory is on OCFS2, so the walk is rather time-consuming). Now, as
long as the dcache/inode cache was large enough (which it was before,
since the VM was allocated 4 GB and only uses 1-2 GB most of the time),
this was not a problem.
Now, with self-ballooning, the memory gets reduced to somewhere between
1 and 2 GB, and after a few minutes the load goes through the roof.
Jobs reading through that directory pile up (stuck in D state, waiting
for the FS), and kswapd spins at 100% most of the time.
If I deactivate self-ballooning and assign the VM 3 GB, everything goes
back to normal after a few minutes (and an "ls -l" on said directory is
served from the cache again).
Now, I am aware that this problem is largely self-made: the directory
was never supposed to contain that many files, and the next job not
waiting for the previous one to terminate is a recipe for trouble. But
still, I would consider this a possible regression, since
self-ballooning seems to be constantly thrashing the VM's caches, and
not all caches can be saved in cleancache (the dcache/inode cache, for
instance, cannot).
What about an additional tunable: a user-specified number of pages that
is added on top of the computed target number of pages? That way one
could manually reserve a bit more room for other types of caches (in
fact, I might try this myself, since it shouldn't be too hard to do; a
rough sketch of what I mean follows).
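
To make the idea concrete, here is a minimal sketch, assuming the
self-balloon driver computes its goal in pages before handing it to the
balloon driver. The names selfballoon_reserved_pages and
selfballoon_adjust_target are made up for illustration and are not
existing code:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/moduleparam.h>

/*
 * Extra pages to keep on top of the computed self-balloon target
 * (0 = current behaviour).  Name and hook are hypothetical.
 */
static unsigned long selfballoon_reserved_pages;
module_param(selfballoon_reserved_pages, ulong, 0644);

/*
 * Hypothetical hook: called with the target the existing heuristic
 * computed (in pages), before it is passed on to the balloon driver.
 */
static unsigned long selfballoon_adjust_target(unsigned long goal_pages)
{
	unsigned long target = goal_pages + selfballoon_reserved_pages;

	/* Never ask for more than the guest's total RAM. */
	return min(target, totalram_pages);
}

With a writable module parameter like the above (assuming the driver is
built so the parameter shows up under /sys/module/.../parameters/), the
reserve could be adjusted at runtime on that one VM to leave room for
the dcache/inode cache, without touching the heuristic for the other
guests.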
Any opinions on this?
Thank you,
Jana