
[Xen-devel] strange xen disk performance


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: Jia Rao <rickenrao@xxxxxxxxx>
  • Date: Mon, 19 Jul 2010 16:13:39 -0400
  • Delivery-date: Mon, 19 Jul 2010 13:14:48 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi all,

I observed some strange disk performance in a Xen environment.

The test bed:
Dom0: Linux 2.6.18.8, one CPU, pinned to a physical core, CFQ disk scheduler.
Xen: 3.3.1
Guest OS: Linux 2.6.18.8; virtual disks backed by physical partitions (phy:/dev/sdbX). One virtual CPU per guest, each pinned to a separate core (a config sketch follows below).
Hardware: two quad-core Intel Xeon CPUs.
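
For reference, the relevant part of a guest's configuration was along these lines (a sketch only; the partition, device name, and core number are assumptions):

  # hypothetical xm config fragment for one guest
  vcpus = 1
  cpus  = "1"                          # pin the vCPU to physical core 1
  disk  = [ "phy:/dev/sdb1,xvda,w" ]   # virtual disk backed by a raw partition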

I ran some disk I/O performance tests with two VMs sharing the physical host.

Both VMs run the iozone benchmark, performing sequential reads of a 1GB file with a 32K request size. The reads were issued as O_DIRECT I/O, bypassing the domU's buffer cache.
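
An iozone invocation along these lines matches the described workload (a sketch; the file path is an assumption, and the write pass is only there to create the file before the O_DIRECT read):

  # hypothetical invocation: 1GB file, 32K records, O_DIRECT sequential write then read
  iozone -i 0 -i 1 -I -r 32k -s 1g -f /mnt/test/iozone.tmp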

To get a reference throughput, I issued the reads from each VM individually. Each run achieved 56-61 MB/s, which is the limit of the system for 32K sequential reads.

Then I issued the reads from both VMs simultaneously. Each VM achieved 22-24 MB/s, about 45 MB/s combined for the whole system, which is pretty normal.

The strangest part: to make sure the above results reflected pure disk performance, with nothing to do with the buffer cache, I repeated the test but purged the cached data in both domU and dom0 before each run.

sync and echo 3 > /proc/sys/vm/drop_caches were executed in both domU and dom0. The resulting throughput of each VM increased to 56-61 MB/s, for a system throughput of around 120 MB/s. I ran the test 10 times; 9 out of 10 showed this strange result, and only one looked like the original test.
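
Concretely, the purge step before each run was (executed in both dom0 and each domU):

  sync                                # flush dirty pages to disk
  echo 3 > /proc/sys/vm/drop_caches   # drop pagecache, dentries, and inodes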

This seems impossible to me; it must have something to do with caching.

My question is: does Xen or dom0 cache any disk I/O data from the guest OS? It seems dom0's buffer cache has nothing to do with this, because I had already purged everything.
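
If dom0 were caching the guests' blocks, its page cache should grow while the guests read; a quick way to check (a sketch, assuming a standard /proc/meminfo) is:

  # watch dom0's cache counters during a guest run; steady growth would implicate dom0
  watch -n 1 'grep -E "^(Buffers|Cached)" /proc/meminfo'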

Any ideas?

Thanks a lot.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

