
Re: [Xen-users] Disk starvation between DomU's


  • To: Thanos Makatos <thanos.makatos@xxxxxxxxxx>
  • From: Wiebe Cazemier <wiebe@xxxxxxxxxxxx>
  • Date: Wed, 19 Jun 2013 14:06:41 +0200 (CEST)
  • Cc: xen-users@xxxxxxxxxxxxx
  • Delivery-date: Wed, 19 Jun 2013 12:07:51 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>
  • Thread-index: sntHw+yzxcSzFuitBwG/fIiMFfnXdaY7z0TggAG4oICACbz9AIACt41QR8OB4sQ=
  • Thread-topic: [Xen-users] Disk starvation between DomU's

----- Original Message -----
> From: "Thanos Makatos" <thanos.makatos@xxxxxxxxxx>
> To: "Wiebe Cazemier" <wiebe@xxxxxxxxxxxx>, "Ian Campbell" 
> <Ian.Campbell@xxxxxxxxxx>
> Cc: xen-users@xxxxxxxxxxxxx
> Sent: Wednesday, 19 June, 2013 10:53:52 AM
> Subject: RE: [Xen-users] Disk starvation between DomU's
> > 
> > That's what I ended up doing. After first having a certain DomU at
> > "best effort, 0", I now put it in the real-time class, with prio 3.
> > I can't say I notice any 'real-time' performance now. It still
> > hangs occasionally.
> 
> I'm not sure whether this will work. AFAIK actual I/O is performed by
> tapdisk/qemu, so could you experiment with that instead? Also, keep
> in mind that there is CPU processing in the data path, so have a
> look at the dom0 CPU usage when executing the I/O test.

Tapdisk? I use the phy backend, with the DomU on a logical volume. I don't even 
see any processes with tap or qemu in their names.
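If it helps, this is roughly how I checked that it's blkback and not
tapdisk/qemu (the xenstore path is from memory, so treat it as a sketch):

# No userspace disk backend processes for this guest:
ps aux | grep -E '[t]apdisk|[q]emu'

# Backend entries for domain 9 live under "vbd" (blkback/phy);
# a blktap-backed disk would show up under "tap" instead:
xenstore-ls /local/domain/0/backend/vbd/9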

As for the CPU usage, see below.

> 
> > 
> > Additionally, when I do the following on the virtual machine in
> > question:
> > 
> > dd if=/dev/zero of=dummy bs=1M
> > 
> > I hardly see any disk activity on the Dom0 with iostat. I see the
> > blkback popping up occasionally with a few kb/s, but I would expect
> > tens of MB's per second. The file 'dummy' is several GB's big in a
> > short while, so it does write.
> > 
> > Why don't I see the traffic popping up in iostat on the Dom0?
> 
> This is inexplicable. Either you've found a bug, or there's something
> wrong in the I/O test. Could you post more details? (E.g. total I/O
> performed, domU memory size, dom0 memory size, average CPU usage,
> etc.)

I have a DomU with ID 9 in "xm list", and the processes "[blkback.9.xvda2]" and 
"[blkback.9.xvda1]" have RT/3 priority. 

The DomU has 2 GB of RAM, no swap, 800 MB free (without cache). 
The Dom0 has 512 MB of RAM (288 MB free without cache, 30 MB in use on swap). Its 
memory is limited with a boot parameter.
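(For completeness, the limit is set roughly like this, assuming GRUB 2 on
Debian; run update-grub afterwards:)

# /etc/default/grub -- passed to the hypervisor, not the dom0 kernel:
GRUB_CMDLINE_XEN="dom0_mem=512M"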

When I do this on the DomU:

dd if=/dev/zero of=bla2.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 14.6285 s, 71.7 MB/s

I see [blkback.9.xvda2] popping up at the top of "iotop" on the Dom0, hovering 
between 50 and 300 kB/s, nowhere near the 70 MB/s. There is hardly any other 
process performing I/O.

"iostat 2" does show a high blocks/s count for its logical volume, dm-4.

The Dom0 uses about 30% CPU according to "xm top" while dd'ing. It has 4 cores 
available.

> 
> What's the array's I/O scheduler? I think since it's a RAID
> controller the "suggested" value is NOOP. If your backend is
> tapdisk, then CFQ *might* do the trick since each domU is served by
> a different tapdisk process (it may be the same with qemu).

The host has a 3Ware RAID6 array. Dom0 uses CFQ, all DomUs use noop. Are you 
saying that when using a hardware RAID, the Dom0 should use noop as well?
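For what it's worth, this is how I'd check and switch the scheduler on the
Dom0 side (sda is a placeholder for the 3Ware array device):

# The active scheduler is shown in brackets:
cat /sys/block/sda/queue/scheduler

# Switch to noop at runtime (not persistent across reboots):
echo noop > /sys/block/sda/queue/scheduler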


Specs:
Debian 6
Linux 2.6.32-5-xen-amd64
xen-hypervisor-4.0-amd64: 4.0.1-5.8
CPU: Intel(R) Xeon(R) X3430 @ 2.40GHz
16 GB RAM

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
