[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-users] Disk starvation between DomU's

  • To: Wiebe Cazemier <wiebe@xxxxxxxxxxxx>
  • From: Thanos Makatos <thanos.makatos@xxxxxxxxxx>
  • Date: Wed, 19 Jun 2013 14:28:47 +0000
  • Accept-language: en-GB, en-US
  • Cc: "xen-users@xxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxx>
  • Delivery-date: Wed, 19 Jun 2013 14:29:31 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>
  • Thread-index: sntHw+yzxcSzFuitBwG/fIiMFfnXdaY7z0TggAG4oICACbz9AIACt41QR8OB4sT9wkGE8A==
  • Thread-topic: [Xen-users] Disk starvation between DomU's

> The DomU has 2 GB of RAM, no swap, 800 MB free (without cache).
> The Dom0 has 512 MB of RAM (288 free without cache, 30 in use on swap).
> It's mem is limited with a boot param.
> When I do this on the DomU:
> dd if=/dev/zero of=bla2.img bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 14.6285 s, 71.7 MB/s

You're generating 1 GB of I/O while the domU has 2 GB of memory, so the entire 
workload fits in the domU's buffer cache. If you wait a bit longer after the dd 
has finished, you should see the full 1 GB of I/O traffic in iostat; you can 
make this happen sooner by executing "sync" right after the dd.
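Concretely, the check above could look like this (a sketch; the file name bla2.img is taken from your dd run, and iostat is assumed to be available via the sysstat package):

```shell
# Same 1 GB write as before, then flush the domU page cache so the
# traffic reaches the block device immediately instead of lingering
# in memory:
dd if=/dev/zero of=bla2.img bs=1M count=1000
sync
# In another shell, watch the writeback arrive at the disk
# (run this before or during the test):
#   iostat -x 1
```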

I'd suggest increasing the number of blocks written (to at least twice the 
domU's RAM) and/or using oflag=direct in dd.
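For example (a sketch; oflag=direct requires a filesystem that supports O_DIRECT, and the 4 GB figure assumes the 2 GB domU from your mail):

```shell
# Bypass the page cache entirely, so dd reports the real disk rate:
dd if=/dev/zero of=bla2.img bs=1M count=1000 oflag=direct

# Alternatively, write at least twice the domU's RAM (2 GB RAM here,
# so 4 GB of data) to defeat caching:
#   dd if=/dev/zero of=bla2.img bs=1M count=4000
```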

> > What's the array's I/O scheduler? I think since it's a RAID
> controller
> > the "suggested" value is NOOP. If your backend is tapdisk, then CFQ
> > *might* do the trick since each domU is served by a different tapdisk
> > process (it may be the same with qemu).
> The host has a 3Ware RAID6 array. Dom0 has CFQ, all DomU's have noop.
> Are you saying that when using a hardware RAID, the Dom0 should use
> noop as well?

The RAID6 array should be present in dom0 as a block device, and IIUC you've 
created logical volumes on top of it (one per domU), is this correct? AFAIK the 
RAID controller provides sufficient scheduling of its own, so NOOP is usually 
suggested; however, you could experiment with setting the I/O scheduler of the 
RAID6 array's block device to CFQ. I'm not sure what the I/O scheduler of 
/dev/xvd* should be in each domU, but I suspect it's irrelevant to the issue 
you're facing.
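In case it helps, the scheduler can be inspected and switched at runtime via sysfs (a sketch; /dev/sda is an assumed device name for the 3Ware array, and the echo needs root):

```shell
# Show the available schedulers; the active one is in [brackets]:
cat /sys/block/sda/queue/scheduler

# Switch the array's block device to CFQ for the experiment:
echo cfq > /sys/block/sda/queue/scheduler
```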