Re: [Xen-devel] Re: poor domU VBD performance.
peter bier wrote:
> I added a counter and incremented it every time the blkback daemon was
> woken up, then ran the read test in domU. With 32k and 320k request sizes
> (O_DIRECT), I consistently got 200 wake-ups/second. I expected 100/second,
> the same interval as the minimum svc cmt times I am seeing, but either
> way, 200/sec is far too low for small request sizes. I think this confirms
> the latency issue. I am not sure yet why it cannot wake up more
> frequently.

Ian Pratt <m+Ian.Pratt <at> cl.cam.ac.uk> writes:
> > I'll check the xen block driver to see if there's anything else that
> > sticks out.
> >
> > Jens Axboe
>
> Jens, I'd really appreciate this. The blkfront/blkback drivers have rather
> evolved over time, and I don't think any of the core team fully understand
> the block-layer differences between 2.4 and 2.6.
>
> There's also some junk left in there from when the backend was in Xen
> itself, back in the days of 1.2, though Vincent has prepared a patch to
> clean this up, make 'refreshing' of vbd's work (for size changes), and
> allow the blkfront driver to import whole disks rather than partitions.
> We had this functionality on 2.4 but lost it in the move to 2.6.
>
> My bet is that the 2.6 backend is where the true performance bug lies.
> Using a 2.6 domU blkfront talking to a 2.4 dom0 blkback seems to give good
> performance under a wide variety of circumstances; using a 2.6 dom0 is far
> more pernickety. I agree with Andrew that the work queue changes are
> biting us when we don't have many outstanding requests.
>
> Thanks,
> Ian

I have done my simple dd on hde1 with two different settings of readahead:
256 sectors and 512 sectors.

-Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel