
Re: [Xen-users] QOS for NFS backed disks / VM lags with high IO?


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: Nathan March <nathan@xxxxxx>
  • Date: Wed, 19 Oct 2011 16:33:03 -0700
  • Delivery-date: Wed, 19 Oct 2011 16:34:37 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On 10/19/2011 3:51 PM, Nathan March wrote:
It doesn't affect other Dom0s, and the NetApp stats show low usage, so it seems to be an NFS client issue... nfsstat shows 0 retrans and I am using jumbo frames.
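
For reference, the checks behind that were along these lines (eth0 is a placeholder for whatever interface carries the NFS traffic):

    # client-side RPC statistics; the retrans column should stay at 0
    nfsstat -rc
    # confirm jumbo frames are actually in effect on the NFS interface
    ip link show eth0 | grep -o 'mtu [0-9]*'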

Mounting with: rw,nosuid,noatime,nfsvers=3,tcp,rsize=32768,wsize=32768,bg,hard,intr
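
Spelled out as a full mount command, that would be something like the following (filer name and export/mount paths are placeholders; the options match the line above):

    # hypothetical filer and export path, for illustration only
    mount -t nfs -o rw,nosuid,noatime,nfsvers=3,tcp,rsize=32768,wsize=32768,bg,hard,intr \
        filer01:/vol/xen_disks /mnt/xen_disks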

Just to add, I've cranked up the wmem/rmem buffers considerably with no improvement:

net.core.rmem_default = 20971520
net.core.wmem_default = 20971520
net.core.rmem_max = 20971520
net.core.wmem_max = 20971520
net.ipv4.tcp_rmem = 4096 87380 20971520
net.ipv4.tcp_wmem = 4096 65536 20971520
net.ipv4.tcp_mem = 8388608 12582912 20971520
net.ipv4.udp_mem = 8388608 12582912 20971520
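
Those were applied with sysctl (a sketch, assuming the settings live in /etc/sysctl.conf):

    # reload settings from /etc/sysctl.conf and spot-check one of them
    sysctl -p
    sysctl net.core.rmem_max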

- Nathan

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

