
Re: [Xen-users] Slow disk access on forwarded RAID array



On 14/02/14 15:22, Mauro Condarelli wrote:
> Hi,
> I get sluggish (~600 KB/s) read access on a RAID1 (mirror) disk array.
> Is this to be expected?
> What am I doing wrong?
> 
> Configuration follows (if more details are needed I'm ready to provide
> them, just ask):
> 
> Both Dom0 and DomU are fairly simple Debian Wheezy installs.
> 
> The server's "real" hardware is not state-of-the-art anymore, but it is
> still a reasonably powerful machine: AMD Phenom(tm) II X6 1055T with
> 8 GB of DDR3 RAM.
> 
> Setup was done following the "Beginners Guide" (after that I switched
> to the xl toolstack).
> 
> Dom0 has one plain disk (boot/root/LVM) and two RAID1 arrays (these
> were in two different machines and I moved them to this server):
>> root@vmrunner:~# fdisk -l /dev/sde
>>
>> Disk /dev/sde: 320.1 GB, 320072933376 bytes
>> 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
>> Units = sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x000d11c8
>>
>>    Device Boot      Start         End      Blocks   Id  System
>> /dev/sde1   *        2048      585727      291840   83  Linux
>> /dev/sde2          585728    12304383     5859328   82  Linux swap / Solaris
>> /dev/sde3        12304384    41601023    14648320   83  Linux
>> /dev/sde4        41601024   625141759   291770368   8e  Linux LVM
>> root@vmrunner:~# mdadm --detail --scan
>> ARRAY /dev/md127 metadata=0.90 UUID=075741b5:c25af231:bfe7d838:0da5cb4d
>> ARRAY /dev/md/store metadata=1.2 name=store UUID=b277d0c1:0ade7e6a:d0139b97:ac1a295b
> 
> DomU configuration is straightforward:
>> bootloader = '/usr/lib/xen-4.1/bin/pygrub'
>> vcpus       = '2'
>> memory      = '512'
>> root        = '/dev/xvda2 ro'
>> disk        = [
>>                   'phy:/dev/vg0/fileserver-pv-guest-disk,xvda2,w',
>>                   'phy:/dev/vg0/fileserver-pv-guest-swap,xvda1,w',
>>                   'phy:/dev/md126,xvda3,w',
>>                   'phy:/dev/md127,xvda4,w'
>>               ]
>> name        = 'fileserver-pv-guest'
>> dhcp        = 'dhcp'
>> vif         = [ 'mac=00:16:3E:59:55:AD' ]
>> on_poweroff = 'destroy'
>> on_reboot   = 'restart'
>> on_crash    = 'restart'
> Notice the DomU has rather little memory (512 MB), but it is going to
> be "just" a file server (NFS + CIFS).
> 
> Also the configuration on DomU is quite straightforward:
>> root@fileserver-pv-guest:/usr/share/doc/fio# cat /etc/fstab
>> proc            /proc           proc    defaults        0       0
>> devpts          /dev/pts        devpts  rw,noexec,nosuid,gid=5,mode=620 0 0
>> /dev/xvda1 none swap sw 0 0
>> /dev/xvda2 / ext3 noatime,nodiratime,errors=remount-ro 0 1
>> /dev/xvda3 /srv/shares/Store ext4 noatime,nodiratime,errors=remount-ro 0 2
>> /dev/xvda4 /srv/shares/Store/private ext4 noatime,nodiratime,errors=remount-ro 0 2
> But performance is NOT good:
>> root@fileserver-pv-guest:~# cat rendom-read-test.fio
>> ; random read of 128mb of data
>>
>> [random-read]
>> rw=randread
>> size=128m
>> directory=/srv/shares/Store/Store/tmp/
>> root@fileserver-pv-guest:~# fio rendom-read-test.fio
>> random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
>> 2.0.8
>> Starting 1 process
>> random-read: Laying out IO file(s) (1 file(s) / 128MB)
>> Jobs: 1 (f=1): [r] [100.0% done] [1704K/0K /s] [426 /0  iops] [eta 00m:00s]
>> random-read: (groupid=0, jobs=1): err= 0: pid=4028
>>   read : io=131072KB, bw=677296 B/s, iops=165 , runt=198167msec
>>     clat (usec): min=118 , max=964702 , avg=6040.88, stdev=7552.44
>>      lat (usec): min=119 , max=964703 , avg=6041.92, stdev=7552.44
>>     clat percentiles (usec):
>>      |  1.00th=[  126],  5.00th=[  131], 10.00th=[  141], 20.00th=[  151],
>>      | 30.00th=[  167], 40.00th=[ 3888], 50.00th=[ 5728], 60.00th=[ 7520],
>>      | 70.00th=[ 9280], 80.00th=[11072], 90.00th=[12864], 95.00th=[13888],
>>      | 99.00th=[18048], 99.50th=[25984], 99.90th=[29824], 99.95th=[33536],
>>      | 99.99th=[68096]
>>     bw (KB/s)  : min=  211, max= 1689, per=100.00%, avg=661.45, stdev=108.43
>>     lat (usec) : 250=31.42%, 500=0.25%, 750=0.34%, 1000=0.07%
>>     lat (msec) : 2=0.22%, 4=8.33%, 10=33.08%, 20=25.44%, 50=0.84%
>>     lat (msec) : 100=0.01%, 250=0.01%, 1000=0.01%
>>   cpu          : usr=0.34%, sys=0.17%, ctx=32880, majf=0, minf=24
>>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>>      issued    : total=r=32768/w=0/d=0, short=r=0/w=0/d=0
>>
>> Run status group 0 (all jobs):
>>    READ: io=131072KB, aggrb=661KB/s, minb=661KB/s, maxb=661KB/s, mint=198167msec, maxt=198167msec
>>
>> Disk stats (read/write):
>>   xvda3: ios=32853/6, merge=0/2, ticks=199448/5640, in_queue=205200, util=99.82%
> 
> Any hint/pointer welcome

What kind of performance do you get if you run the same benchmark
against the same backing device directly from Dom0? (The test directory
above is on xvda3, i.e. /dev/md126, rather than on an LVM volume.)
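
For a like-for-like comparison, something along these lines should work
from Dom0. This is only a minimal sketch: the device paths come from the
DomU config quoted above, the job parameters mirror the fio run above,
and the DomU should be shut down first so the backends are not attached
(--readonly guards against accidental writes):

  # Same 4k sync random-read workload, run directly against the backing devices
  fio --name=dom0-md-randread  --filename=/dev/md126 \
      --rw=randread --bs=4k --size=128m --ioengine=sync --iodepth=1 --readonly
  fio --name=dom0-lvm-randread --filename=/dev/vg0/fileserver-pv-guest-disk \
      --rw=randread --bs=4k --size=128m --ioengine=sync --iodepth=1 --readonly

If the numbers are already poor in Dom0, the problem is below Xen; if
Dom0 is fast and the DomU is not, the blkback/blkfront path is the place
to look.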

Also, which kernel versions are you running (in both Dom0 and the DomU)?
There have been some improvements in the Linux blkback/blkfront drivers
recently.
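
A quick way to gather that, assuming the xl toolstack is in use (it is,
per the config above):

  # Kernel release, run in Dom0 and again inside the DomU
  uname -r
  # Hypervisor version and Dom0 kernel release as reported by the toolstack
  xl info | grep -E 'xen_version|release'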

Roger.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

