Re: [Xen-API] XCP 1.5 BETA BUG: Slow performance with 10Gbit network cards
Another discovery: on XCP 1.5, iperf from dom0 to localhost (loopback) is limited to about 3 Gbit/s:

Client connecting to localhost, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 38278 connected with 127.0.0.1 port 5001
^C[  3]  0.0- 5.2 sec  1.79 GBytes  2.97 Gbits/sec

On xapi/debian (project Kronos) the same test reaches 20 Gbit/s:

root@kronos:~# iperf -c 10.50.50.111
------------------------------------------------------------
Client connecting to 10.50.50.111, TCP port 5001
TCP window size: 167 KByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 41137 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 5.6 sec  13.1 GBytes  20.0 Gbits/sec

Clearly not a NIC issue...

Cheers,
Sébastien

On 29.04.2012 17:28, Sébastien Riccio wrote:

Me again (sorry). Still trying to work out why it's slow under XCP, I installed XCP on Debian (project Kronos) and ran an iperf from dom0:

[  3]  0.0- 1.0 sec   623 MBytes  5.23 Gbits/sec
[  3]  1.0- 2.0 sec   817 MBytes  6.85 Gbits/sec
[  3]  2.0- 3.0 sec   818 MBytes  6.86 Gbits/sec
[  3]  3.0- 4.0 sec   816 MBytes  6.84 Gbits/sec
[  3]  4.0- 5.0 sec   818 MBytes  6.86 Gbits/sec
[  3]  5.0- 6.0 sec   818 MBytes  6.86 Gbits/sec
[  3]  6.0- 7.0 sec   816 MBytes  6.85 Gbits/sec

It's not reaching 10 Gbit/s, so the limit looks related to the Xen hypervisor. But it's still twice as fast as with XCP/XenServer. Any ideas how to improve this?

Cheers,
Sébastien

On 29.04.2012 14:44, Sébastien Riccio wrote:

Hi again, again :) Something strange: if I run iperf with parallel threads (8 in this example), the full 10 Gbit/s is reached:

[  5]  0.0- 1.0 sec   151 MBytes  1.27 Gbits/sec
[  6]  0.0- 1.0 sec   178 MBytes  1.49 Gbits/sec
[  7]  0.0- 1.0 sec   136 MBytes  1.14 Gbits/sec
[  8]  0.0- 1.0 sec   189 MBytes  1.59 Gbits/sec
[ 10]  0.0- 1.0 sec   161 MBytes  1.35 Gbits/sec
[  4]  0.0- 1.0 sec   124 MBytes  1.04 Gbits/sec
[  3]  0.0- 1.0 sec   178 MBytes  1.49 Gbits/sec
[  9]  0.0- 1.0 sec  74.8 MBytes   627 Mbits/sec
[SUM]  0.0- 1.0 sec  1.16 GBytes  10.0 Gbits/sec

So it's surely not a NIC/driver issue; maybe a limitation in dom0? This hurts us badly, because we connect the iSCSI storage to the filer from dom0: that is always a single connection, so there is no way to get the full 10 Gbit/s of bandwidth for our storage. Is there a way to get rid of that (bug?) limitation?

Cheers,
Sébastien
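For reference, the single-stream and multi-stream cases above correspond to iperf invocations like these (<filer-ip> is just a placeholder for whatever iperf server is being tested against; -P is iperf's standard flag for parallel client streams):

# single TCP stream: tops out around 3 Gbit/s from XCP dom0
iperf -c <filer-ip>

# 8 parallel TCP streams: the [SUM] line reaches ~10 Gbit/s
iperf -c <filer-ip> -P 8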
On 29.04.2012 14:23, Sébastien Riccio wrote:

Hi again,

I ran my test with a Debian 6 installed on the same hardware I'm using to try XCP 1.5. Speed to the filer via the 10 Gbit NIC:

[  3]  0.0- 8.0 sec  8.71 GBytes  9.41 Gbits/sec

I would tend to conclude that something is wrong between the XCP/XenServer kernel and the 10 Gbit interfaces (at least the Broadcom ones). Note that our Dell hardware is certified as supported by XenServer, so that certification doesn't seem to mean much here :/

modinfo for bnx2x on Debian 6:

filename:       /lib/modules/2.6.32-5-amd64/kernel/drivers/net/bnx2x.ko
firmware:       bnx2x-e1h-5.0.21.0.fw
firmware:       bnx2x-e1-5.0.21.0.fw
version:        1.52.1
license:        GPL
description:    Broadcom NetXtreme II BCM57710/57711/57711E Driver
author:         Eliezer Tamir
srcversion:     050FF749F80E43CB5873BC9
alias:          pci:v000014E4d00001650sv*sd*bc*sc*i*
alias:          pci:v000014E4d0000164Fsv*sd*bc*sc*i*
alias:          pci:v000014E4d0000164Esv*sd*bc*sc*i*
depends:        mdio,libcrc32c
vermagic:       2.6.32-5-amd64 SMP mod_unload modversions
parm:           multi_mode: Multi queue mode (0 Disable; 1 Enable (default)) (int)
parm:           num_rx_queues: Number of Rx queues for multi_mode=1 (default is half number of CPUs) (int)
parm:           num_tx_queues: Number of Tx queues for multi_mode=1 (default is half number of CPUs) (int)
parm:           disable_tpa: Disable the TPA (LRO) feature (int)
parm:           int_mode: Force interrupt mode (1 INT#x; 2 MSI) (int)
parm:           dropless_fc: Pause on exhausted host ring (int)
parm:           poll: Use polling (for debug) (int)
parm:           mrrs: Force Max Read Req Size (0..3) (for debug) (int)
parm:           debug: Default debug msglevel (int)

modinfo for bnx2x on XCP 1.5:

[root@xen-blade13 ~]# modinfo bnx2x
filename:       /lib/modules/2.6.32.12-0.7.1.xs1.4.90.530.170661xen/kernel/drivers/net/bnx2x.ko
version:        1.62.17
license:        GPL
description:    Broadcom NetXtreme II BCM57710/57711/57711E/57712/57712E Driver
author:         Eliezer Tamir
srcversion:     13EA885EBE061121BAEC28D
alias:          pci:v000014E4d00001663sv*sd*bc*sc*i*
alias:          pci:v000014E4d00001662sv*sd*bc*sc*i*
alias:          pci:v000014E4d00001650sv*sd*bc*sc*i*
alias:          pci:v000014E4d0000164Fsv*sd*bc*sc*i*
alias:          pci:v000014E4d0000164Esv*sd*bc*sc*i*
depends:        mdio
vermagic:       2.6.32.12-0.7.1.xs1.4.90.530.170661xen SMP mod_unload modversions Xen 686
parm:           multi_mode: Multi queue mode (0 Disable; 1 Enable (default); 2 VLAN PRI; 3 E1HOV PRI; 4 IP DSCP) (int)
parm:           pri_map: Priority to HW queue mapping (int)
parm:           qs_per_cos: Number of queues per HW queue (int)
parm:           cos_min_rate: Weight for RR between HW queues (int)
parm:           num_queues: Number of queues for multi_mode=1 (default is as a number of CPUs) (int)
parm:           disable_iscsi_ooo: Disable iSCSI OOO support (int)
parm:           disable_tpa: Disable the TPA (LRO) feature (int)
parm:           int_mode: Force interrupt mode other than MSI-X (1 INT#x; 2 MSI) (int)
parm:           dropless_fc: Pause on exhausted host ring (int)
parm:           poll: Use polling (for debug) (int)
parm:           mrrs: Force Max Read Req Size (0..3) (for debug) (int)
parm:           debug: Default debug msglevel (int)

XCP 1.5 actually ships a more recent version of the module (1.62.17 vs 1.52.1), so an outdated driver is probably not the problem. Maybe another problem in the kernel? Any ideas how I could debug this?

Cheers,
Sébastien
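Since both drivers expose the multi-queue parameters listed above, one thing worth checking is whether dom0 is actually spreading load across more than one queue. As a sketch (the option value below is illustrative, and nothing in this thread confirms it as a fix):

# show the values the loaded bnx2x module is currently using
grep -H . /sys/module/bnx2x/parameters/*

# illustrative only: pin the queue count, then reload the driver
echo "options bnx2x num_queues=8" > /etc/modprobe.d/bnx2x.conf
modprobe -r bnx2x && modprobe bnx2x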
On 29.04.2012 13:23, Sébastien Riccio wrote:

Hi,

I'm currently playing with XCP 1.5 beta on a Dell blade infrastructure that is linked to redundant filers over 10 Gbit NICs. The redundant 10 Gbit interfaces are not managed by xapi (I've told xapi to forget them) and are configured as on any normal Linux box. They are connected to the same 10 Gbit switch as the filers.

Interfaces on the filers:

03:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711 10-Gigabit PCIe
03:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711 10-Gigabit PCIe
04:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711 10-Gigabit PCIe
04:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711 10-Gigabit PCIe

Interfaces on the blades:

05:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711 10-Gigabit PCIe
05:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711 10-Gigabit PCIe

After some testing with iperf I have come to these conclusions:

1) Between the filers the speed is 9.9 Gbit/s.
2) Between XCP 1.5 and both filers I get around 3 Gbit/s, which is quite poor.
3) Between XCP 1.1 and both filers I also get around 3 Gbit/s, so this is not a new 1.5 bug.
4) Between an XCP 1.5 host and an XCP 1.1 host I also get around 3 Gbit/s.

Do you have any idea what the problem could be, given that the speed between the filers is fine and their traffic goes through the same switch?

Is there a DDK VM available for XCP 1.5 beta? I would like to try building the latest Broadcom drivers for these NICs (bnx2x) against the kernel used by XCP 1.5 (see the build sketch below). In the meantime I will install a Debian on another blade to check whether the speeds to the filers are correct there.

Thanks,
Sébastien
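Until a 1.5 DDK is available, a generic out-of-tree build against the dom0 kernel would look roughly like this (a sketch only: it assumes the matching kernel headers are installed, that the Broadcom source tree builds with plain kbuild, and the tarball name is hypothetical):

# unpack the Broadcom driver source (archive name is illustrative)
tar xzf netxtreme2-x.y.z.tar.gz && cd netxtreme2-x.y.z

# build against the XCP dom0 kernel seen in the modinfo output above
make -C /lib/modules/2.6.32.12-0.7.1.xs1.4.90.530.170661xen/build M=$(pwd) modules

# install the new bnx2x.ko and refresh module dependencies
cp bnx2x.ko /lib/modules/2.6.32.12-0.7.1.xs1.4.90.530.170661xen/kernel/drivers/net/
depmod -a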
_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api