 
	
Re: [Xen-users] Slow disk access on forwarded RAID array
Thanks for the fast response, Pau. Comments below.

On Friday, 14 February 2014 16:02:47, Roger Pau Monné wrote:
> On 14/02/14 15:22, Mauro Condarelli wrote:
>> Hi,
>> I get sluggish (600 KB/s) read access on a RAID1 (mirror) disk array.
>> Is this to be expected? What am I doing wrong?
>> Configuration follows (if more details are needed I'm ready to provide
>> them, just ask):
>> Both Dom0 and DomU are fairly simple Debian Wheezy installs.
>> The server's "real" hardware is not state-of-the-art anymore, but it is
>> still a reasonably powerful machine: AMD Phenom(tm) II X6 1055T with
>> 8 GB DDR3 RAM.
>> Setup was done following the "beginner guide" (after that I switched to
>> the xl toolstack).
>> Dom0 has one plain disk (boot/root/LVM) and two RAID1 arrays (these were
>> on two different machines and I rewired them to the server).
>> Any hint/pointer welcome.
>
> What kind of performance do you get if you try to execute the same
> benchmark on the same LVM volume from Dom0?

Uhm, I was not clear: LVM is used for DomU storage. I have no problem
there (I think, I'll cross-check).
Where I have problems is on the two RAID1 arrays that are accessed
EXCLUSIVELY by the DomU. As a matter of fact I mount on Dom0 a share
exported by the fileserver if and when I need to access the RAID arrays
from Dom0 (see below).
I will stop the fileserver and mount the arrays directly on Dom0.

Done. I get the same (slow) performance:

root@vmrunner:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=856751,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=689548k,nr_inodes=861932,mode=755)
/dev/disk/by-uuid/8a60bbca-1bd8-47e7-bc5b-8d59bb841404 on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,nr_inodes=861932)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1379080k,nr_inodes=861932)
/dev/sde1 on /boot type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
none on /sys/kernel/config type configfs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
xenfs on /proc/xen type xenfs (rw,relatime)
fileserver:/srv/shares/Store on /mnt/fileserver/Store type nfs4 (rw,relatime,vers=4,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.7.101,minorversion=0,local_lock=none,addr=192.168.7.109)
/dev/md126 on /mnt/a type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)
/dev/md127 on /mnt/b type ext3 (rw,relatime,errors=continue,barrier=1,data=ordered)

root@vmrunner:~# cat fio.cfg
[global]
rw=randread
size=128m

[r1]
directory=/mnt/a

[r2]
directory=/mnt/b

root@vmrunner:~# fio fio.cfg
r1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
r2: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
2.0.8
Starting 2 processes
r1: Laying out IO file(s) (1 file(s) / 128MB)
r2: Laying out IO file(s) (1 file(s) / 128MB)
Jobs: 1 (f=1): [r_] [100.0% done] [1814K/0K /s] [453 /0 iops] [eta 00m:00s]
r1: (groupid=0, jobs=1): err= 0: pid=31877
 read : io=131072KB, bw=679944 B/s, iops=166 , runt=197395msec
   clat (usec): min=99 , max=615955 , avg=6017.38, stdev=6239.59
    lat (usec): min=99 , max=615956 , avg=6018.41, stdev=6239.60
   clat percentiles (usec):
     |  1.00th=[  111],  5.00th=[  116], 10.00th=[  123], 20.00th=[  137],
     | 30.00th=[  155], 40.00th=[ 3984], 50.00th=[ 5728], 60.00th=[ 7520],
     | 70.00th=[ 9280], 80.00th=[11072], 90.00th=[12864], 95.00th=[13760],
     | 99.00th=[16064], 99.50th=[23424], 99.90th=[27264], 99.95th=[30336],
     | 99.99th=[75264]
    bw (KB/s)  : min=  140, max= 1808, per=50.03%, avg=664.40, stdev=111.51
    lat (usec) : 100=0.01%, 250=30.74%, 500=0.13%, 750=0.39%, 1000=0.02%
    lat (msec) : 2=0.20%, 4=8.60%, 10=33.76%, 20=25.44%, 50=0.68%
    lat (msec) : 100=0.02%, 250=0.01%, 750=0.01%
  cpu          : usr=0.09%, sys=0.54%, ctx=32858, majf=0, minf=24
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=32768/w=0/d=0, short=r=0/w=0/d=0
r2: (groupid=0, jobs=1): err= 0: pid=31878
 read : io=131072KB, bw=790730 B/s, iops=193 , runt=169739msec
   clat (usec): min=82 , max=308033 , avg=5172.95, stdev=3873.22
    lat (usec): min=83 , max=308034 , avg=5174.02, stdev=3873.22
   clat percentiles (usec):
     |  1.00th=[  163],  5.00th=[  173], 10.00th=[  181], 20.00th=[  390],
     | 30.00th=[ 3216], 40.00th=[ 4256], 50.00th=[ 5344], 60.00th=[ 6368],
     | 70.00th=[ 7456], 80.00th=[ 8512], 90.00th=[ 9536], 95.00th=[10176],
     | 99.00th=[10816], 99.50th=[11328], 99.90th=[17024], 99.95th=[19328],
     | 99.99th=[71168]
    bw (KB/s)  : min=  326, max= 1432, per=57.81%, avg=767.71, stdev=73.50
    lat (usec) : 100=0.03%, 250=19.20%, 500=1.03%, 750=0.04%, 1000=0.04%
    lat (msec) : 2=1.21%, 4=15.79%, 10=56.67%, 20=5.94%, 50=0.02%
    lat (msec) : 100=0.02%, 500=0.01%
  cpu          : usr=0.09%, sys=0.71%, ctx=33061, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=32768/w=0/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
   READ: io=262144KB, aggrb=1328KB/s, minb=664KB/s, maxb=772KB/s, mint=169739msec, maxt=197395msec

Disk stats (read/write):
  md126: ios=32738/10, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=16384/12, aggrmerge=0/2, aggrticks=98374/912, aggrin_queue=99278, aggrutil=90.58%
    sdh: ios=30024/12, merge=0/2, ticks=178536/280, in_queue=178800, util=90.58%
    sdi: ios=2744/12, merge=0/2, ticks=18212/1544, in_queue=19756, util=9.47%
  md127: ios=32691/47, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=16384/48, aggrmerge=0/4, aggrticks=84482/1622, aggrin_queue=86104, aggrutil=72.50%
    sdf: ios=27826/48, merge=0/4, ticks=142856/1152, in_queue=144008, util=72.50%
    sdg: ios=4942/48, merge=0/4, ticks=26108/2092, in_queue=28200, util=13.33%

> Also, which kernel version are you using (both Dom0/DomU)? There have
> been some improvements in Linux blkback/blkfront recently.

Both are standard Debian Wheezy.

Dom0:
root@vmrunner:~# uname -a
Linux vmrunner 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux

DomU:
root@fileserver-pv-guest:~# uname -a
Linux fileserver-pv-guest 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux

> Roger.

TiA
Mauro

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
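
A setup like the one described in this thread (DomU system disk on a Dom0 LVM
volume, plus the two md RAID1 arrays handed through to the guest as raw block
devices) would typically be expressed in the xl domain configuration roughly as
sketched below. The actual config file was not posted, so the volume group,
logical volume and xvd* names here are only illustrative assumptions:

    # /etc/xen/fileserver-pv-guest.cfg -- illustrative excerpt, not from the thread
    disk = [
            'phy:/dev/vg0/fileserver-disk,xvda,w',  # DomU system disk on Dom0 LVM (assumed name)
            'phy:/dev/md126,xvdb,w',                # first RAID1 array, used only by the DomU
            'phy:/dev/md127,xvdc,w',                # second RAID1 array, used only by the DomU
    ]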
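Roger's suggested cross-check (running the same benchmark against the LVM
storage from Dom0) can reuse the job file above almost unchanged; only the
target directory differs. A minimal sketch, assuming a filesystem on one of the
Dom0 logical volumes is mounted at /mnt/lvmtest (a made-up mount point):

    # fio-lvm.cfg -- same 4k random-read job, pointed at the Dom0 LVM storage
    [global]
    rw=randread
    size=128m

    [lvm]
    directory=/mnt/lvmtest

Running "fio fio-lvm.cfg" from Dom0 and comparing the bandwidth/iops with the
~660-770 KB/s measured on /mnt/a and /mnt/b above would show whether the
slowness is specific to the RAID1 arrays or common to all Dom0 block storage.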
 
 