
[Xen-devel] blkback disk I/O limit patch

  • To: xen-devel@xxxxxxxxxxxxx
  • From: Vasiliy Tolstov <v.tolstov@xxxxxxxxx>
  • Date: Thu, 31 Jan 2013 09:12:28 +0400
  • Delivery-date: Thu, 31 Jan 2013 05:13:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

Hello. For our own needs I have written a simple blkback disk I/O limit patch
that can limit disk I/O based on IOPS. I need a Xen-based IOPS shaper because
of our storage architecture.
Our storage nodes provide disks via SCST over an InfiniBand network.
On the Xen nodes we attach these disks via SRP. Each Xen node connects to two
storage nodes at the same time, and multipath provides failover.

Each disk contains LVM (not CLVM); for each virtual machine we create a PV
disk, and via device-mapper raid1 we build the disk that is used for the domU.
In this setup, if one storage node fails, the VM keeps working with the one
remaining disk in the raid1.
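
As an illustration only (not part of the patch), below is a minimal userspace
sketch of how such a per-VM mirror can be assembled through libdevmapper. The
device name, device paths, size and region size are placeholders; the same
table can of course also be loaded with a plain "dmsetup create ... --table"
call.

/* Illustrative sketch: build a dm "mirror" device from two LVs,
 * one per storage node.  Paths and sizes are placeholders.
 * Compile with: gcc -o mkmirror mkmirror.c -ldevmapper
 */
#include <stdio.h>
#include <stdint.h>
#include <libdevmapper.h>

int main(void)
{
    /* 20 GiB expressed in 512-byte sectors (placeholder size) */
    const uint64_t sectors = 20ULL * 1024 * 1024 * 2;
    /* mirror table params:
     * <log_type> <#log_args> <log_args...> <#devs> <dev> <off> <dev> <off> */
    const char *params =
        "core 2 1024 nosync 2 /dev/vg_node1/vm1 0 /dev/vg_node2/vm1 0";

    struct dm_task *dmt = dm_task_create(DM_DEVICE_CREATE);
    if (!dmt)
        return 1;

    if (!dm_task_set_name(dmt, "vm1-disk") ||
        !dm_task_add_target(dmt, 0, sectors, "mirror", params) ||
        !dm_task_run(dmt)) {
        fprintf(stderr, "failed to create vm1-disk mirror\n");
        dm_task_destroy(dmt);
        return 1;
    }

    dm_task_destroy(dmt);
    return 0;
}

The resulting /dev/mapper/vm1-disk is then handed to the domU as a phy device.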

Everything works great, but in this setup we cannot use cgroups or dm-ioband.
Some time ago the CFQ disk scheduler stopped working with bio-based devices
and now provides control only on the bottom layer (in our case we can apply
CFQ only to the SRP disk, which shapes I/O only for all clients on the Xen
node at once). dm-ioband works unstably when some domU generates massive I/O:
our tests show that if a domU uses ext4 and does around 20000 IOPS, dom0
sometimes crashes or the disk gets corrupted. Also, with dm-ioband, if one
storage node goes down we sometimes lose data from the disk, and dm-ioband
cannot provide on-the-fly control of IOPS.

This patch tries to solve our problems. Could someone from the Xen team look
at it and say how the code looks, and what I need to change or rewrite? Maybe
some day this can be included in the mainline Linux Xen tree... (I hope).
The patch covers phy devices only. For blktap devices I have spoken with
Thanos Makatos (the author of blktap3), and perhaps in the future this
functionality can be added to blktap3.
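
To give an idea of the general direction, here is a simplified, userspace-only
sketch of the kind of per-VBD IOPS accounting the patch is about: a one-second
token bucket that admits up to "limit" requests per window and asks the caller
to defer the rest. The names and structure below are illustrative, not taken
from the patch; inside blkback the deferred case would mean re-queueing the
request and retrying once the window rolls over, and the limit value itself
would be handed to the backend by the toolstack (for example through a
xenstore key).

/* Simplified userspace sketch of a per-VBD IOPS throttle (illustrative
 * only -- real backend code would use kernel primitives, not
 * clock_gettime()).
 */
#include <stdio.h>
#include <stdbool.h>
#include <time.h>

struct iops_limit {
    unsigned int limit;   /* allowed requests per second, 0 = unlimited */
    unsigned int used;    /* requests already admitted in this window   */
    time_t       window;  /* start of the current one-second window     */
};

/* Return true if the request may be submitted now, false if the caller
 * should defer it until the next one-second window. */
static bool iops_may_submit(struct iops_limit *l)
{
    struct timespec now;

    if (l->limit == 0)
        return true;

    clock_gettime(CLOCK_MONOTONIC, &now);
    if (now.tv_sec != l->window) {   /* new window: reset the budget */
        l->window = now.tv_sec;
        l->used = 0;
    }
    if (l->used < l->limit) {
        l->used++;
        return true;
    }
    return false;                    /* budget exhausted, defer */
}

int main(void)
{
    struct iops_limit vbd = { .limit = 5 };

    for (int i = 0; i < 12; i++)
        printf("request %2d: %s\n", i,
               iops_may_submit(&vbd) ? "submitted" : "deferred");
    return 0;
}

Running this admits the first five requests of the second and defers the
rest, which is the behaviour an IOPS cap on a single vbd is meant to give.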

Thank You.

Vasiliy Tolstov,
e-mail: v.tolstov@xxxxxxxxx
jabber: vase@xxxxxxxxx

Xen-devel mailing list


