
Re: [Xen-devel] blkback disk I/O limit patch


  • To: xen-devel@xxxxxxxxxxxxx
  • From: Vasiliy Tolstov <v.tolstov@xxxxxxxxx>
  • Date: Fri, 1 Feb 2013 14:59:46 +0400
  • Delivery-date: Fri, 01 Feb 2013 11:00:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

2013/1/31 Vasiliy Tolstov <v.tolstov@xxxxxxxxx>:
> Sorry, I forgot to send the patch:
> https://bitbucket.org/go2clouds/patches/raw/master/xen_blkback_limit/3.6.9-1.patch
> The patch is against kernel 3.6.9, but if needed I can rebase it onto
> the current Linus git tree.

The patch is based on andrew.xu's earlier work:
http://xen.1045712.n5.nabble.com/VM-disk-I-O-limit-patch-td4509813.html

--
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@xxxxxxxxx
jabber: vase@xxxxxxxxx

> 2013/1/31 Vasiliy Tolstov <v.tolstov@xxxxxxxxx>:
>> Hello. For my own needs I have written a simple blkback disk I/O
>> limit patch that can throttle disk I/O based on IOPS. I need a
>> Xen-based IOPS shaper because of our storage architecture.
>> Our storage nodes export disks via SCST over an InfiniBand network.
>> On the Xen nodes we attach these disks via SRP. Each Xen node
>> connects to two storage nodes at the same time, and multipath
>> provides failover.
>>
>> Each disk contains LVM (not CLVM), and for each virtual machine we
>> create a PV disk. Via a device-mapper RAID1 we then build the disk
>> used by the domU. This way, if one storage node fails, the VM keeps
>> working on the remaining leg of the RAID1.
>>
>> This all works great, but in this setup we cannot use cgroups or
>> dm-ioband. Some time ago the CFQ disk scheduler stopped working with
>> BIO devices and now provides control only at the bottom layer (in
>> our case we can apply CFQ only to the SRP disk, and so can shape I/O
>> only for all clients of a Xen node at once).
>> dm-ioband is unstable when a domU generates massive I/O: our tests
>> show that if a domU uses ext4 and does 20000 IOPS, dom0 sometimes
>> crashes or the disk gets corrupted. Also, with dm-ioband, if one
>> storage node goes down we sometimes lose data from the disk. And
>> dm-ioband cannot adjust the IOPS limit on the fly.
>>
>> This patch tries to solve these problems. Could someone from the Xen
>> team look at it and tell me how the code looks? What do I need to
>> change or rewrite? Maybe some day this could be used in the mainline
>> Linux Xen tree... (I hope).
>> The patch covers only phy devices. For blktap devices I have spoken
>> with Thanos Makatos (the author of blktap3), and perhaps in the
>> future this functionality can be added to blktap3.
>>
>> Thank you.
>>
>> --
>> Vasiliy Tolstov,
>> Clodo.ru
>> e-mail: v.tolstov@xxxxxxxxx
>> jabber: vase@xxxxxxxxx




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

