
Re: [Xen-users] Disk starvation between DomU's


  • To: Thanos Makatos <thanos.makatos@xxxxxxxxxx>
  • From: Wiebe Cazemier <wiebe@xxxxxxxxxxxx>
  • Date: Wed, 19 Jun 2013 17:02:56 +0200 (CEST)
  • Cc: xen-users@xxxxxxxxxxxxx
  • Delivery-date: Wed, 19 Jun 2013 15:03:54 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>
  • Thread-index: sntHw+yzxcSzFuitBwG/fIiMFfnXdaY7z0TggAG4oICACbz9AIACt41QR8OB4sT9wkGE8MOzfjtP
  • Thread-topic: [Xen-users] Disk starvation between DomU's

----- Original Message -----
> From: "Thanos Makatos" <thanos.makatos@xxxxxxxxxx>
> To: "Wiebe Cazemier" <wiebe@xxxxxxxxxxxx>
> Cc: xen-users@xxxxxxxxxxxxx
> Sent: Wednesday, 19 June, 2013 4:28:47 PM
> Subject: RE: [Xen-users] Disk starvation between DomU's
> 
> > The DomU has 2 GB of RAM, no swap, 800 MB free (without cache).
> > The Dom0 has 512 MB of RAM (288 MB free without cache, 30 MB in
> > use on swap). Its memory is limited with a boot parameter.
> > 
> > When I do this on the DomU:
> > 
> > dd if=/dev/zero of=bla2.img bs=1M count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 1048576000 bytes (1.0 GB) copied, 14.6285 s, 71.7 MB/s
> 
> You're generating 1 GB of I/O while the domU's memory is 2 GB, so the
> entire workload fits in the domU's buffer cache. If you wait a bit
> longer after the dd has finished you should see 1 GB of I/O traffic
> in iostat; you can make this happen sooner by executing "sync" right
> after the dd.
> 
> I'd suggest increasing the number of blocks written (I'd say at least
> 2xRAM size) and/or use oflag=direct in dd.
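
(If I read that right, that means something like the command below; the
count of 4000 is just an example of roughly 2x the domU's 2 GB of RAM,
and bla2.img is the same test file as before:)

    dd if=/dev/zero of=bla2.img bs=1M count=4000 oflag=direct

(or, alternatively, the original dd followed immediately by "sync")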

Hmm. That didn't make a difference, but something else did. I was looking at
the read column... A stupid mistake, but not as stupid as you might think: my
eye was drawn to the changing figures. Now I see that write I/O is always 0,
according to iotop. When I run "iotop -oa" (accumulated, leaving out
non-active processes), all the blkback processes that appear accumulate 0
bytes written. I don't understand that...
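
(As a cross-check I can watch the backing device directly from dom0 with
iostat while the dd runs in the domU; the write column there should show
the traffic even if iotop's per-process accounting for the blkback
threads doesn't. The device to watch is whichever dm-N or LV device
backs the domU's disk:)

    iostat -d -m 1

(iostat is from the sysstat package; -d gives the device report, -m
shows MB/s, and the trailing 1 updates every second)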

> 
> > > What's the array's I/O scheduler? I think since it's a RAID
> > > controller the "suggested" value is NOOP. If your backend is
> > > tapdisk, then CFQ *might* do the trick since each domU is served
> > > by a different tapdisk process (it may be the same with qemu).
> > 
> > The host has a 3Ware RAID6 array. Dom0 has CFQ, all DomU's have
> > noop. Are you saying that when using a hardware RAID, the Dom0
> > should use noop as well?
> 
> The RAID6 array should be present in dom0 as a block device, and IIUC
> on top of it you've created logical volumes (one per domU); is this
> correct? AFAIK the RAID controller provides sufficient scheduling, so
> it's usually suggested to use NOOP; however, you could experiment with
> setting the I/O scheduler of the RAID6 array block device to CFQ. I'm
> not sure what the I/O scheduler of /dev/xvd* should be in each domU,
> but I suspect it's irrelevant to the issue you're facing.
> 

That's correct. Currently, the RAID array's block device uses CFQ. It would
seem weird to me to change that to noop: the RAID controller might do its own
scheduling, but it can't receive instructions from the OS about what should
have priority. I'll look into it, though.

I do know that the recommended DomU scheduler is noop. It's also the default
on all my machines without me configuring it; I guess they know they're
virtual.
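
(For reference, if I do experiment with it, checking and switching the
scheduler on the array's block device is just a sysfs write; "sda" below
is only an example name for whatever the 3Ware array shows up as:)

    cat /sys/block/sda/queue/scheduler
    echo noop > /sys/block/sda/queue/scheduler

(the cat shows the available schedulers, with the active one in brackets)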


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users