[Xen-users] Xen 10GBit Ethernet network performance (was: Re: Experience with Xen & AMD Opteron 4200 series?)
Hey,

On Mon, Feb 27, 2012 at 10:26 PM, Linus van Geuns <linus@xxxxxxxxxxxxx> wrote:
> Hey,
>
> I am currently considering a two machine Xen setup based on AMD Opterons,
> more precisely Dell R515s or R415s with dual Opteron 4274HE.
> Does anyone have some Xen setup(s) running on top of Opteron 4274HE, Opteron
> 4200 or Dell R515 machines and is willing to share some experience?

Meanwhile, I got myself a two machine test setup for evaluation.

2 machines, each with:
- 1x Opteron 4274HE (8C)
- 32 GByte RAM
- 2x 1GBit Eth on board
- 1x Intel X520-DA2 (dual port 10GBit Ethernet)

I installed Debian squeeze with Xen on both machines:
"Debian kernel":     2.6.32-5-amd64
"Debian Xen kernel": 2.6.32-5-xen-amd64
"Debian bpo kernel": 3.2.0-0.bpo.2-amd64
"Debian Xen":        4.0.1
"Vanilla Xen":       4.1.2

TCP/IP transfer test:
user@lx$ dd if=/dev/zero bs=1M count=40960 | nc -q 0 -v ly 16777
user@ly$ nc -l -v -p 16777 -q 0 | dd bs=1M of=/dev/null

First, I am testing dom0 10GBit Ethernet performance, which is important to me
for storage replication between the cluster nodes.

With the Debian kernel or the Debian bpo kernel on bare hardware on both boxes,
I can transfer about 520 to 550 MByte/s. Between two dom0 instances, I only get
200 to 250 MByte/s. I also tried the same between a dom0 and a bare hardware
instance, and the other way around.

I tried Debian Xen and vanilla Xen, and I also tried both the Debian Xen kernel
and the Debian bpo kernel as dom0 kernel. I also tried setting the dom0 memory
to various sizes and booting with iommu=1 (see the boot parameter sketch
below). Finally, I retried most test scenarios with irqbalance enabled, as
those Intel 10GE cards expose between 8 and 16 queues (IRQs) for TX/RX,
depending on the kernel version (see the queue/IRQ sketch below).

The result is always the same: I am limited to 250 MByte/s max. unless both
boxes run on bare hardware. Even the Debian Xen kernel on bare hardware on both
boxes does about 550 MByte/s. When testing dom0 performance, no other domains
(domUs) are running.

So, this performance issue seems to be related to the interaction between the
Xen hypervisor and the Linux dom0 kernel.

Any ideas on the issue or pointers for further testing?
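For reference, the boot parameter changes were roughly along these lines; the
dom0_mem value below is just one example of the sizes I tried, and the file
layout assumes the stock Debian squeeze GRUB 2 Xen integration:

  # /etc/default/grub on the dom0 (picked up by /etc/grub.d/20_linux_xen);
  # dom0_mem=4096M is only an example value, iommu=1 as mentioned above
  GRUB_CMDLINE_XEN="dom0_mem=4096M iommu=1"

  # regenerate the boot menu, then reboot
  update-grub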
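And this is roughly how the queue/IRQ layout of the X520 can be checked in
dom0; the interface name and the IRQ number below are only placeholders for my
setup:

  # count the TX/RX queue interrupts that the ixgbe driver registered
  # ("eth2" is a placeholder for the X520 port)
  grep eth2 /proc/interrupts

  # with irqbalance stopped, a single queue IRQ can also be pinned by hand,
  # e.g. steering (hypothetical) IRQ 73 onto dom0 CPU 2:
  echo 4 > /proc/irq/73/smp_affinity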
Regards, Linus

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users