
Re: [Xen-devel] Load increase after memory upgrade (part2)



On Wed, Jan 18, 2012 at 11:35:35AM +0000, Jan Beulich wrote:
> >>> On 17.01.12 at 22:02, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> 
> >>> wrote:
> > The issue as I understand is that the DVB drivers allocate their buffers
> > from 0->4GB most (all the time?) so they never have to do bounce-buffering.
> > 
> > While the pv-ops one ends up quite frequently doing the bounce-buffering,
> > which implies that the DVB drivers end up allocating their buffers above
> > the 4GB boundary. This means we end up spending some CPU time (in the
> > guest) copying the memory from the >4GB region to the 0-4GB region (and
> > vice versa).
> 
> This reminds me of something (not sure what XenoLinux you use for
> comparison) - how are they allocating that memory? Not vmalloc_32()

I was using the 2.6.18, then the one I saw on Google for Gentoo, and now
I am going to look at the 2.6.38 from OpenSuSE.

> by chance (I remember having seen numerous uses under - iirc -
> drivers/media/)?
> 
> Obviously, vmalloc_32() and any GFP_DMA32 allocations do *not* do
> what their (driver) callers might expect in a PV guest (including the
> contiguity assumption for the latter, recalling that you earlier said
> you were able to see the problem after several guest starts), and I
> had put into our kernels an adjustment to make vmalloc_32() actually
> behave as expected.

Aaah.. The plot thickens! Let me look in the sources! Thanks for the
pointer.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
