Re: [Xen-users] Best Practices for PV Disk IO?
On Mon, Jul 20, 2009 at 7:25 PM, Jeff Sturm <jeff.sturm@xxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
>> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Christopher Chen
>> Sent: Monday, July 20, 2009 8:26 PM
>> To: xen-users@xxxxxxxxxxxxxxxxxxx
>> Subject: [Xen-users] Best Practices for PV Disk IO?
>>
>> I was wondering if anyone's compiled a list of places to look to
>> reduce disk I/O latency for Xen PV domUs. I've gotten reasonably
>> acceptable performance from my setup (dom0 as an iSCSI initiator,
>> providing phy volumes to domUs), at about 45MB/sec writes and
>> 80MB/sec reads (this is to an IET target running in blockio mode).
>
> For domU hosts, xenblk over phy: is the best I've found. I can get
> 166MB/s read performance from domU with O_DIRECT and 1024k blocks.
>
> Smaller block sizes yield progressively lower throughput, presumably
> due to read latency:
>
>  256k: 131MB/s
>   64k:  71MB/s
>   16k:  33MB/s
>    4k:  10MB/s
>
> Running the same tests on dom0 against the same block device yields
> only slightly faster throughput.
>
> If there's any additional magic to boost disk I/O under Xen, I'd like
> to hear it too. I also pin my dom0 to an unused CPU so it is always
> available. My shared block storage runs the AoE protocol over a pair
> of 1GbE links.
>
> The good news is that there doesn't seem to be much I/O penalty
> imposed by the hypervisor, so the domU hosts typically enjoy better
> disk I/O than an inexpensive server with a pair of SATA disks, at far
> less cost than the interconnects needed to couple a high-performance
> SAN to many individual hosts. Overall, the performance seems like a
> win for Xen virtualization.
>
> Jeff

Jeff:

That sounds about right. The numbers I quoted were from an iozone
latency test with 64k block sizes--your 71MB/s is very close to my 80.

I found that increasing readahead (up to a point) really helps get me
to 80MB/sec reads, and that a low nr_requests (in the Linux domU)
seems to coax the scheduler (cfq on the domU) into dispatching writes
faster, bringing write speed up to 50MB/sec.

Of course, on the dom0 I see 110MB/sec writes and reads on the same
block device at 64k.

But yeah, I'd love to hear what other people are doing...

Cheers!

cc

--
Chris Chen <muffaleta@xxxxxxxxx>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good."
-- Seen on a wall

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
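
Neither poster shared exact commands, but Jeff's O_DIRECT block-size
tests can be approximated with dd, which supports direct I/O via
iflag=direct. A minimal sketch; the device path /dev/xvdb is an
assumption, substitute your own phy: device:

    # read ~2GB sequentially in 1024k chunks, bypassing the page cache
    dd if=/dev/xvdb of=/dev/null bs=1024k count=2048 iflag=direct

    # repeat with a smaller block size (scale count up so each run
    # still reads the same total amount of data)
    dd if=/dev/xvdb of=/dev/null bs=64k count=32768 iflag=direct

dd reports elapsed time and MB/s on stderr when it finishes, so the
throughput figures can be read straight off each run.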
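
Chris's readahead and nr_requests tuning maps onto standard Linux
block-layer knobs. A sketch, again assuming /dev/xvdb is the domU
disk; the specific values are illustrative assumptions, not
recommendations:

    # raise readahead on the block device (value is in 512-byte sectors)
    blockdev --setra 1024 /dev/xvdb
    blockdev --getra /dev/xvdb           # verify

    # confirm cfq is the active elevator, then shrink the request
    # queue so writes are dispatched sooner (default is typically 128)
    cat /sys/block/xvdb/queue/scheduler
    echo 32 > /sys/block/xvdb/queue/nr_requests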
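
Jeff's dom0 CPU pinning can be done at runtime with xm or made
permanent on the hypervisor command line. The CPU numbers below are
examples:

    # runtime: pin dom0's vcpu 0 to physical CPU 0, then verify
    xm vcpu-pin Domain-0 0 0
    xm vcpu-list

    # boot-time: append to the xen.gz line in grub's config
    #   dom0_max_vcpus=1 dom0_vcpus_pin
    # and keep guests off that CPU in each domU config, e.g.
    #   cpus = "1-3"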
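
For completeness, the phy: backend both posters rely on is just the
disk line in the domU config file, pointing at a dom0 block device
(LVM volume, iSCSI LUN, AoE device, etc.). Paths here are examples:

    disk = [ 'phy:/dev/vg0/domu1-root,xvda,w' ]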