
RE: [Xen-users] XEN - networking and performance


  • To: "admin@xxxxxxxxxxx" <admin@xxxxxxxxxxx>, 'xen-users' <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: Jeff Sturm <jeff.sturm@xxxxxxxxxx>
  • Date: Thu, 6 Oct 2011 20:34:20 +0000
  • Accept-language: en-US
  • Cc:
  • Delivery-date: Thu, 06 Oct 2011 13:36:40 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AQHMhFAN5rOzJb1ggECVvcySiYiOp5Vv8jcA///SucA=
  • Thread-topic: [Xen-users] XEN - networking and performance

Yeah.  Disks in general are a bottleneck.

 

One of the traps we've run into when virtualizing moderately I/O-heavy hosts is not sizing our disk arrays correctly: not in terms of capacity (terabytes), but in spindles.  If each physical host normally has 4 dedicated disks, for example, virtualizing 8 of them as domUs attached to a disk array with 16 drives effectively cuts that ratio from 4:1 down to 2:1 (32 dedicated spindles serving 8 workloads become 16 shared ones).  Latency goes up, throughput goes down.

 

I'm finding more and more that sharing CPU and memory isn't the problem; disk is.  Maybe SSDs will get really cheap and we can ditch the old mechanical drives.  Or at least I can hope.

 

Jeff

 

From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of admin@xxxxxxxxxxx
Sent: Thursday, October 06, 2011 3:13 PM
To: 'xen-users'
Subject: RE: [Xen-users] XEN - networking and performance

 

One thing I would suggest is using RAID10 instead of RAID5.  RAID5 is frequently a performance bottleneck, since its parity read-modify-write penalty hurts random write workloads.
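
For reference, a minimal sketch of building a RAID10 set with Linux software RAID (mdadm) and putting LVM on top; the device names /dev/sdb-/dev/sde and the volume group name are placeholders, and with a hardware RAID controller the equivalent rebuild is done in the controller's BIOS or CLI instead:

  # assemble four disks into a RAID10 array (hypothetical devices)
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # put LVM on top so per-domU logical volumes can be carved out as usual
  pvcreate /dev/md0
  vgcreate vg_guests /dev/md0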

 

-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of fpt stl
Sent: Thursday, October 06, 2011 12:44 PM
To: xen-users
Subject: [Xen-users] XEN - networking and performance

 

Greetings all,

 

I would like some advice from people who are or were using Xen 3.4.2 - it should be a rather stable release.

Dom0 is CentOS 5.5 64-bit with Xen 3.4.2 installed with the default settings (apart from CPU pinning for Dom0 and its memory being set to 2 GB).
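
(For reference, a sketch of how that kind of dom0 tuning is typically expressed; the memory cap goes on the Xen line in grub.conf and the CPU restriction in xend-config.sxp or via xm, and the CPU numbers below are placeholders:)

  # grub.conf, Xen hypervisor line: cap dom0 memory at 2 GB
  kernel /xen.gz dom0_mem=2048M

  # xend-config.sxp: let dom0 use only the first 2 CPUs
  (dom0-cpus 2)

  # or pin dom0's vcpu 0 to physical CPU 0 at runtime
  xm vcpu-pin Domain-0 0 0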

There are 2 built-in NICs (Broadcom) and an add-on Intel network card with another 4 NICs.

Currently only one NIC is used for all network access, and as far as networking goes, the default settings are used in xend-config.sxp:

(network-script network-bridge)

(vif-script vif-bridge)

 

The questions are:

 

How can I improve the network performance (right now all the VMs are sharing one bridge), for example by:

 

a. creating multiple bridges and assigning a VM (DomU) per bridge (see the sketch right after this list)

b. trying to hide the NICs from Dom0 using something like "pciback hide" - pointers/examples of how one would do this on CentOS 5.5 would be highly appreciated (see the second sketch below)...
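
Regarding (a), a minimal sketch of the usual Xen 3.x approach: a small wrapper that calls the stock network-bridge script once per physical NIC, plus a bridge= entry in each guest's vif line.  The wrapper name, NIC names and bridge names below are placeholders:

  #!/bin/sh
  # /etc/xen/scripts/network-multi (hypothetical wrapper around the stock script)
  dir=$(dirname "$0")
  "$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
  "$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1

  # xend-config.sxp: point xend at the wrapper instead of the default line
  (network-script network-multi)

  # per-DomU config: put this guest on the second bridge
  vif = [ 'bridge=xenbr1' ]

Make the wrapper executable and restart xend after changing xend-config.sxp.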

 

 

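Regarding (b), a sketch of hiding a NIC from Dom0 with pciback on a CentOS 5.x dom0, assuming pciback is built as a module; the PCI address 0000:0b:00.0 and the e1000 driver name are placeholders (take the real values from lspci), and the guest kernel needs pcifront support for the passed-through NIC to show up:

  # find the PCI address of the NIC to hide
  lspci | grep -i ethernet

  # hand the device from its current driver to pciback at runtime
  echo -n 0000:0b:00.0 > /sys/bus/pci/drivers/e1000/unbind
  echo -n 0000:0b:00.0 > /sys/bus/pci/drivers/pciback/new_slot
  echo -n 0000:0b:00.0 > /sys/bus/pci/drivers/pciback/bind

  # to make it persistent, /etc/modprobe.conf (load pciback before the NIC driver):
  options pciback hide=(0000:0b:00.0)
  install e1000 /sbin/modprobe pciback ; /sbin/modprobe --ignore-install e1000

  # per-DomU config: pass the hidden device through
  pci = [ '0000:0b:00.0' ]

If pciback is compiled into the kernel instead, the equivalent is a pciback.hide=(0000:0b:00.0) parameter on the dom0 kernel line in grub.conf.
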
Also, I have noticed that sometimes - somewhat erratically, since I cannot replicate it - the VMs seem to time out: while editing in vi, the session stops responding.

Nothing in the logs, of course, and no indication in top either.  Also, copying 8 GB of data from one disk to another takes 50 (fifty) minutes!  Both LVMs are attached separately to the DomU as two independent volumes [xvda - /dev/mapper/VG1-VM1 and xvdb - /dev/mapper/VG2-VM1_home].

 

Would it be recommended to have all the storage in one block device - one xvda only - with its own LVM structure inside the guest, as opposed to multiple xvd's?
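
For context, a sketch of what the two layouts would look like in the DomU config file (standard phy: disk lines; the write mode and exact paths are assumptions based on the volumes named above):

  # current layout: two dom0 logical volumes exported as separate block devices
  disk = [ 'phy:/dev/mapper/VG1-VM1,xvda,w',
           'phy:/dev/mapper/VG2-VM1_home,xvdb,w' ]

  # alternative: one larger logical volume exported as xvda only,
  # partitioned or managed with LVM inside the guest
  disk = [ 'phy:/dev/mapper/VG1-VM1,xvda,w' ]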

 

Any suggestions on improving the performance in accessing block devices?

 

I am somewhat baffled, since I have read that Xen is used by ISPs that probably host tens of DomUs on a single machine, while I am struggling to host 7 VMs on a dual quad-core Xeon box with 48 GB RAM and 3 TB of RAID5 15K disk storage!

 

Please be gentle since I am rather new to XEN.

 

Thanks,

 

Frank

 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

