RE: [Xen-users] GPLPV Driver and SMP
Quoting "James Harper" <james.harper@xxxxxxxxxxxxxxxx>: [ 5] 0.0-10.0 sec 339 MBytes 284 Mbits/sec [ 4] 0.0-10.0 sec 344 KBytes 281 Kbits/sec As you can see the direction to the domU is fast, but the other way is a factor of 1000 slower.Yeah. That certainly is sucky performance :| Is that the same with and without SMP (I've just gotten back from a holiday and I have a cold so forgive me if you've already answered that question) I had now the time to test some things. All test now with pre13. First a non SMP guest on another host. That's fairly fast: [root@xen03 ~]# iperf -c mws-neu.bl-group.physik.uni-muenchen.de ------------------------------------------------------------ Client connecting to mws-neu.bl-group.physik.uni-muenchen.de, TCP port 5001 TCP window size: 1.00 MByte (default) ------------------------------------------------------------ [ 3] local 192.168.160.92 port 40278 connected with 192.168.180.149 port 5001 [ 3] 0.0-10.0 sec 254 MBytes 213 Mbits/sec [root@xen03 ~]# virt-manager & [1] 23662 [root@xen03 ~]# iperf -s ------------------------------------------------------------ Server listening on TCP port 5001 TCP window size: 1.00 MByte (default) ------------------------------------------------------------ [ 4] local 192.168.160.92 port 5001 connected with 192.168.180.149 port 1065 [ 4] 0.0-10.2 sec 285 MBytes 235 Mbits/sec [root@xen03 ~]# iperf -c mws-neu.bl-group.physik.uni-muenchen.de ------------------------------------------------------------ Client connecting to mws-neu.bl-group.physik.uni-muenchen.de, TCP port 5001 TCP window size: 1.00 MByte (default) ------------------------------------------------------------ [ 3] local 192.168.160.92 port 36917 connected with 192.168.180.149 port 5001 [ 3] 0.0-10.0 sec 641 MBytes 538 Mbits/sec[root@xen03 ~]# [root@xen03 ~]# iperf -c mws-neu.bl-group.physik.uni-muenchen.de -P 4 write2 failed: Connection reset by peer ------------------------------------------------------------ Client connecting to mws-neu.bl-group.physik.uni-muenchen.de, TCP port 5001 TCP window size: 1.00 MByte (default) ------------------------------------------------------------ [ 6] local 192.168.160.92 port 60092 connected with 192.168.180.149 port 5001 [ 6] 0.0- 0.0 sec 1.00 MBytes 1.91 Gbits/sec [ 3] local 192.168.160.92 port 60089 connected with 192.168.180.149 port 5001 [ 4] local 192.168.160.92 port 60090 connected with 192.168.180.149 port 5001 [ 5] local 192.168.160.92 port 60091 connected with 192.168.180.149 port 5001 [ 4] 0.0-10.0 sec 310 MBytes 260 Mbits/sec [ 3] 0.0-10.0 sec 237 MBytes 199 Mbits/sec [ 5] 0.0-10.1 sec 241 MBytes 201 Mbits/sec [SUM] 0.0-10.1 sec 790 MBytes 659 Mbits/sec [root@xen03 ~]# [root@xen03 ~]# iperf -s -P 4 ------------------------------------------------------------ Server listening on TCP port 5001 TCP window size: 1.00 MByte (default) ------------------------------------------------------------ [ 4] local 192.168.160.92 port 5001 connected with 192.168.180.149 port 1071 [ 5] local 192.168.160.92 port 5001 connected with 192.168.180.149 port 1072 [ 6] local 192.168.160.92 port 5001 connected with 192.168.180.149 port 1073 [ 7] local 192.168.160.92 port 5001 connected with 192.168.180.149 port 1074 [ 4] 0.0-10.0 sec 178 MBytes 149 Mbits/sec [ 5] 0.0-10.0 sec 183 MBytes 153 Mbits/sec [ 7] 0.0-10.0 sec 173 MBytes 145 Mbits/sec [ 6] 0.0-10.0 sec 173 MBytes 145 Mbits/sec [SUM] 0.0-10.0 sec 707 MBytes 592 Mbits/sec [root@xen03 ~]#Ok, linux PVM Guest are much faster ( between 1 - 2 GBit /s), and that even with copying receive path. 
> Can you turn off Large Send Offload in the Advanced properties of the
> network adapter in device manager?

Yep, that's it. The first dom0 on which I tested the SMP machine has TCP
segmentation offload disabled (probably by the BIOS):

[root@xen02 ~]# ethtool -k peth0
Offload parameters for peth0:
Cannot get device udp large send offload settings: Operation not supported
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: off
[root@xen02 ~]#

When I started the SMP guest on another machine with TSO enabled, I got fast
transfers (around 200-250 Mbit/s). I then switched off Large Send Offload in
the Windows driver and started the domU again on the original host (the other
one has faster CPUs, so the numbers are not comparable), and voila, I got
fast transfers there as well:

[root@xen02 ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte (default)
------------------------------------------------------------
[ 4] local 192.168.160.91 port 5001 connected with 192.168.180.199 port 1077
[ 4]  0.0-10.0 sec   198 MBytes   166 Mbits/sec
[ 5] local 192.168.160.91 port 5001 connected with 192.168.180.199 port 1078
[ 5]  0.0-10.0 sec   195 MBytes   163 Mbits/sec
[root@xen02 ~]# iperf -s -P 4
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte (default)
------------------------------------------------------------
[ 4] local 192.168.160.91 port 5001 connected with 192.168.180.199 port 1079
[ 5] local 192.168.160.91 port 5001 connected with 192.168.180.199 port 1080
[ 6] local 192.168.160.91 port 5001 connected with 192.168.180.199 port 1081
[ 7] local 192.168.160.91 port 5001 connected with 192.168.180.199 port 1082
[ 5]  0.0-10.0 sec   111 MBytes  92.7 Mbits/sec
[ 6]  0.0-10.0 sec   110 MBytes  91.8 Mbits/sec
[ 7]  0.0-10.0 sec   109 MBytes  90.9 Mbits/sec
[ 4]  0.0-10.0 sec   109 MBytes  91.4 Mbits/sec
[SUM]  0.0-10.0 sec   438 MBytes   366 Mbits/sec
[root@xen02 ~]#

So, as a conclusion: it is very important to match the guest's Large Send
Offload setting to the offload setting of the dom0.

Sincerely,
Klaus

--
Klaus Steinberger            Beschleunigerlaboratorium
Phone: (+49 89)289 14287     Am Coulombwall 6, D-85748 Garching, Germany
FAX:   (+49 89)289 14280     EMail: Klaus.Steinberger@xxxxxxxxxxxxxxxxxxxxxx
URL: http://www.physik.uni-muenchen.de/~Klaus.Steinberger/

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.

Attachment: smime.p7s
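As a rough sketch of the dom0 side of that tuning (peth0 is the physical
interface from the output above; the dom0 prompt is a placeholder, and exact
option names can vary with the ethtool and kernel version): ethtool -k shows
the current offload settings and ethtool -K changes them, while the Windows
side is toggled in Device Manager under the adapter's Advanced properties,
as described above.

# Query the current offload settings on the bridged physical NIC:
[root@dom0 ~]# ethtool -k peth0

# Match TCP segmentation offload in dom0 to the guest's Large Send Offload
# setting (both on, or both off); requires NIC/driver support:
[root@dom0 ~]# ethtool -K peth0 tso on
# ... or, to go the other way, disable it:
[root@dom0 ~]# ethtool -K peth0 tso off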
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users