
Re: [Xen-devel] Self-ballooning question / cache issue



> From: Jana Saout [mailto:jana@xxxxxxxx]
> Subject: [Xen-devel] Self-ballooning question / cache issue
> 
> Hello,
> 
> I have been testing autoballooning on a production Xen system today
> (with cleancache + frontswap on Xen-provided tmem).  For most of the
> idle or CPU-centric VMs it seems to work just fine.
> 
> However, on one of the web-serving VMs, there is also a cron job running
> every few minutes which runs over a rather large directory (plus, this
> directory is on OCFS2 so this is a rather time-consuming process).  Now,
> if the dcache/inode cache is large enough (which it was before, since
> the VM got allocated 4 GB and is only using 1-2 most of the time), this
> was not a problem.
> 
> Now, with self-ballooning, the memory gets reduced to somewhat between 1
> and 2 GB and after a few minutes the load is going through the ceiling.
> Jobs reading through said directories are piling up (stuck in D state,
> waiting for the FS).  And most of the time kswapd is spinning at 100%.
> If I deactivate self-ballooning and assign the VM 3 GB, everything goes
> back to normal after a few minutes. (and, "ls -l" on said directory is
> served from the cache again).
> 
> Now, I am aware that said problem is a self-made one.  The directory was
> not actually supposed to contain that many files and the next job not
> waiting for the previous job to terminate is cause for trouble - but
> still, I would consider this a possible regression since it seems
> self-ballooning is constantly thrashing the VM's caches.  Not all caches
> can be saved in cleancache.
> 
> What about an additional tunable: a user-specified amount of pages that
> is added on top of the computed target number of pages?  This way, one
> could manually reserve a bit more room for other types of caches. (in
> fact, I might try this myself, since it shouldn't be too hard to do so)
> 
> Any opinions on this?

Hi Jana --

Thanks for doing this analysis.  While your workload is a bit
unusual, I agree that you have exposed a problem that will need
to be resolved.  It was observed three years ago that a natural
next "frontend" for tmem would be a cleancache-like mechanism
for the dcache.  Until now, I had thought that this was purely
optional and would yield only a small performance improvement.
But with your workload, I think the combination of selfballooning
forcing out dcache entries and those entries not being saved in
tmem is what is causing the problem you are seeing.

I think the best solution for this will be a "cleandcache"
patch in the Linux guest... but given how long it has taken
to get cleancache and frontswap into the kernel (and the fact
that a working cleandcache patch doesn't even exist yet), I
wouldn't hold my breath ;-)  I will put it on the "to do"
list though.

Your idea of the tunable is interesting (and patches are always
welcome!) but I am skeptical that it will solve the problem,
since I would guess the Linux kernel shrinks the dcache in
proportion to the size of the page cache.  So even after adding
more RAM via your "user-specified amount of pages that is
added on top of the computed target number of pages", the
extra RAM will still be shared across all caches, and only
a small portion of it will likely end up being used for
the dcache.

However, if you have a chance to try it, I would be interested
in your findings.  Note that you already can set a
permanent floor for selfballooning ("min_usable_mb") or,
of course, just turn off selfballooning altogether.
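For reference, these knobs are exposed via sysfs in the Linux
guest.  The path and attribute names below are assumptions based
on the xen-selfballoon driver and may differ by kernel version,
so verify them on your system first:

```shell
# Assumed sysfs location of the xen-selfballoon knobs:
cd /sys/devices/system/xen_memory/xen_memory0

# Set a permanent floor (in MB) below which selfballooning will
# not shrink the guest (the "min_usable_mb" knob; the attribute
# may be named selfballoon_min_usable_mb on some kernels):
echo 3072 > selfballoon_min_usable_mb

# Or disable selfballooning altogether:
echo 0 > selfballooning
```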

Thanks,
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

