
[Xen-devel] Very slow disk I/O performance in PV & PVHVM domU (Xen 4.1.2)



Hello, I've been using Xen 4.1.2 for two years to experiment with some ideas.

HVM domUs work well. However, with a PV domU or a PVHVM domU, disk I/O
performance is extremely poor. Below are the results of dd runs in each
type of domU.

- HVM
$ dd if=/dev/zero of=disk.img count=512k bs=1k
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 5.42523 s, 99.0 MB/s

- PV
$ dd if=/dev/zero of=disk.img count=512k bs=1k
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 45.898 s, 11.7 MB/s

After the copy finishes, the PV domU becomes too unresponsive to use, and
the same happens with the PVHVM domU. Under Xen 4.3, however, the PV domU
performs as expected.
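(A note on the numbers: the runs above use no sync flag, so they partly
measure the guest page cache rather than the backend. If it makes the
comparison more meaningful, I can rerun them with the data forced to disk,
e.g. with the standard GNU dd flag:

$ dd if=/dev/zero of=disk.img count=512k bs=1k conv=fdatasync

The absolute throughput will be lower, but the gap between HVM and PV
should still be visible if the problem is in the PV disk path.)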

I configured my PV domU to use file-backed disk images like this:

root = '/dev/xvda2 ro'
disk = [
    'file:/path/to/pv/images/disk.img,xvda2,w',
    'file:/path/to/pv/images/swap.img,xvda1,w',
]
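
As a possible workaround I also considered attaching the images to loop
devices in dom0 myself and using the phy: backend instead of file:. This is
only a sketch of what I mean (the loop device numbers are arbitrary and it
assumes the loop module is available in dom0); the backing storage is still
a plain file, which is what my experiments require:

# in dom0, as root
losetup /dev/loop0 /path/to/pv/images/disk.img
losetup /dev/loop1 /path/to/pv/images/swap.img

# and then in the domU config:
disk = [
    'phy:/dev/loop0,xvda2,w',
    'phy:/dev/loop1,xvda1,w',
]

I don't know yet whether this avoids the slow path in 4.1.2.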

My dom0 runs Ubuntu 12.04.2 LTS with Linux kernel 3.9.4, and the PV domU
runs Ubuntu 10.04 LTS / Ubuntu 12.04 LTS with the same kernel. (I also
tried an older Linux kernel, 2.6.39, but the result was the same.)

My guess is that there is some issue when a file-backed disk image is used
with the PV block device driver in Xen 4.1.2. However, I have to keep using
file-backed disk images because of other constraints in my experiments.
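
If it helps to confirm that guess, I can check in dom0 how the image is
actually attached (whether it ends up on a loop device behind blkback or
goes through another path). I assume the standard queries below would show
that, provided the xenstore CLI tools are installed in dom0:

$ losetup -a
$ xenstore-ls -f | grep -i 'backend.*params'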

Why is my PV domU so slow under Xen 4.1.2 but not under Xen 4.3? Which part
of the code should I look at or modify to fix this problem in Xen 4.1.2?
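
In the meantime I can also capture dom0-side I/O statistics while the guest
dd is running, in case that points at the bottleneck. Something like the
following (assuming the sysstat package is installed in dom0):

# in dom0, while the guest dd is running
$ iostat -dx 1

# and afterwards, check for backend or hypervisor messages
$ xl dmesg | tail -n 50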

Thank you for any help you can provide.


