
Re: [Xen-devel] why do I get bad disk write performance in the kernel 3.1?



Konrad,

Thanks for the help.

My windows.cfg is:
name='benchCM-windows-2003-64b-std-test'
kernel='/usr/lib/xen/boot/hvmloader'
builder='hvm'
memory=512
vcpus=2
pae=1
acpi=1
apic=1
disk=[ 'tap2:aio:/local-disk/benchCM-windows-2003-64b-std/xvda,xvda,w' ]
device_model='/usr/lib/xen/bin/qemu-dm'
boot='c'
sdl=0
vnc=1
vncunused=1
vnclisten='0.0.0.0'
vncpasswd=''
stdvga=0
extra=''
ramdisk=''
image_name=''
bootloader=''
root=''
platform='xen'
network_mode='tap'
usb=1
usbdevice='tablet'

The xvda disk is a raw image file created with dd.
I use the same windows.cfg with both kernels.
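
For reference, this is roughly how I created the image (the size here is
an example, not the exact value I used):

  # create a raw image file for the guest in dom0
  dd if=/dev/zero of=/local-disk/benchCM-windows-2003-64b-std/xvda bs=1M count=20480

If the tap2 backend turns out to be the problem, my understanding is that
one way to force xen-blkback would be to expose the image as a block
device and use phy: instead. Untested on my side, just a sketch:

  # attach the raw image to a loop device in dom0
  losetup /dev/loop0 /local-disk/benchCM-windows-2003-64b-std/xvda

  # and in windows.cfg:
  disk=[ 'phy:/dev/loop0,xvda,w' ]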



2011/11/9 Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
On Wed, Nov 09, 2011 at 12:58:27PM -0200, Roberto Scudeller wrote:
> Hi all,
>
> I'm testing the new 3.1 kernel from kernel.org with Xen 4.1.2-rc1-pre.
> I ran a CrystalMark I/O benchmark in a Windows 2003 VM with GPLPV
> drivers, and got amazing read numbers, but the write numbers
> are very disappointing.
>
> I run the VM on local disk and get these numbers:
> Seq read: 244 MB/s
> 512K read: 239 MB/s
> 4K read: 27 MB/s
> 4K QD 32: 90 MB/s
>
> Seq write: 20 MB/s
> 512K: 20 MB/s
> 4K: 8 MB/s
> 4K QD 32: 12 MB/s
>
> With the older kernel, 2.6.32.x:
> Seq read: 189 MB/s
> 512K read: 169 MB/s
> 4K read: 11 MB/s
> 4K QD 32: 34 MB/s
>
> Seq write: 177 MB/s
> 512K: 166 MB/s
> 4K: 12 MB/s
> 4K QD 32: 36 MB/s
>
> Could anyone help me? Why is the new kernel faster at reading but so
> slow at writing?

It would help if you gave an idea of what the guest configuration is and
what the backend is. It might be that the backend you are using with 3.1 is
QEMU qdisk, not xen-blkback.
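
A quick way to check which backend actually got wired up (assuming the
xenstore command-line tools are installed; the exact layout can differ
between toolstack versions) is to list the dom0 backend directory:

  # from dom0: blkback disks show up under "vbd", blktap2 under "tap",
  # and the QEMU backend under "qdisk"
  xenstore-ls /local/domain/0/backend

  # also confirm that the blkback module is actually loaded in dom0
  lsmod | grep -i blk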



--
Roberto Scudeller

