
[Xen-users] QOS for NFS backed disks / VM lags with high IO?


  • To: Xen Users <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: Nathan March <nathan@xxxxxx>
  • Date: Wed, 19 Oct 2011 15:51:50 -0700
  • Delivery-date: Wed, 19 Oct 2011 15:52:50 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

dm-ioband seems to be the generally accepted solution for implementing disk bandwidth controls, but unless I'm misunderstanding it, it doesn't seem possible to use it for NFS-backed disks (NetApp filer). Can someone point me in the direction of what I'm looking for here? I haven't had any luck with Google or the lists in finding out what other people are using -- is this even possible?
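
For what it's worth, the block-device-level approach I keep running into (dm-ioband, or the blkio cgroup throttle that showed up in 2.6.37) looks roughly like the sketch below -- the device numbers, group name and limits are made up -- and it's exactly this per-device model that I can't see how to apply to a disk image sitting on an NFS mount:

    # blkio cgroup throttle sketch (cgroup v1) -- shown for contrast only,
    # since an NFS-backed file image never appears as a local major:minor device
    mkdir -p /sys/fs/cgroup/blkio
    mount -t cgroup -o blkio none /sys/fs/cgroup/blkio   # if not already mounted
    mkdir /sys/fs/cgroup/blkio/domU-guest1
    # cap the group at ~10 MB/s on device 8:16 (major:minor is a placeholder)
    echo "8:16 10485760" > /sys/fs/cgroup/blkio/domU-guest1/blkio.throttle.read_bps_device
    echo "8:16 10485760" > /sys/fs/cgroup/blkio/domU-guest1/blkio.throttle.write_bps_device
    # move the guest's userspace backend (e.g. its tapdisk process) into the group
    echo $BACKEND_PID > /sys/fs/cgroup/blkio/domU-guest1/tasks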

Currently I'm able to cause lag spikes on all the DomUs of a particular Dom0 by running bonnie++ inside one of them. I'd like to put something in place that keeps a single DomU from saturating the storage. Alternatively, if anyone knows what I've failed to tune to avoid this happening at all, that would be appreciated =)
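
For reference, the run that triggers it is just a stock bonnie++ inside one guest, along these lines (the path and sizes are placeholders; -s is ~2x the DomU's RAM so it isn't purely hitting cache):

    # run inside one DomU; /data is a placeholder mount point
    bonnie++ -d /data -s 8192 -r 4096 -u root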

It doesn't affect other Dom0s, and the netapp stats show low usage, so it seems to be an NFS client issue... nfsstat shows 0 retrans and I am using jumbo frames.
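
The checks I'm basing that on, in case I'm reading them wrong (the filer address is a placeholder):

    # client-side RPC stats -- retrans stays at 0 here
    nfsstat -rc
    # confirm jumbo frames survive end to end: 8972 = 9000 MTU - 28 bytes of IP/ICMP header
    ping -M do -s 8972 <filer-ip>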

Mounting with: rw,nosuid,noatime,nfsvers=3,tcp,rsize=32768,wsize=32768,bg,hard,intr
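
In fstab form that's roughly the following (the export path and mount point are made up; the options are as above):

    filer:/vol/xen_images  /srv/xen  nfs  rw,nosuid,noatime,nfsvers=3,tcp,rsize=32768,wsize=32768,bg,hard,intr  0  0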

Thanks,
Nathan

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

