Re: [Xen-users] Freenas domU network performance issue



Here is one iperf log for the dom0 => nas (LRO=1) case.
The benchmark seems stuck at low bandwidth most of the time, with occasional
peaks that raise the overall average to about 300 Mbps.

# iperf -c 192.168.1.9 -f m -i 1 -w 256k -l 256k -t 20
------------------------------------------------------------
Client connecting to 192.168.1.9, TCP port 5001
TCP window size: 0.25 MByte (WARNING: requested 0.25 MByte)
------------------------------------------------------------
[  3] local 192.168.1.6 port 39519 connected with 192.168.1.9 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   160 MBytes  1340 Mbits/sec
[  3]  1.0- 2.0 sec  5.75 MBytes  48.2 Mbits/sec
[  3]  2.0- 3.0 sec  9.00 MBytes  75.5 Mbits/sec
[  3]  3.0- 4.0 sec  6.00 MBytes  50.3 Mbits/sec
[  3]  4.0- 5.0 sec  6.00 MBytes  50.3 Mbits/sec
[  3]  5.0- 6.0 sec  8.25 MBytes  69.2 Mbits/sec
[  3]  6.0- 7.0 sec  9.25 MBytes  77.6 Mbits/sec
[  3]  7.0- 8.0 sec   161 MBytes  1351 Mbits/sec
[  3]  8.0- 9.0 sec  5.75 MBytes  48.2 Mbits/sec
[  3]  9.0-10.0 sec  6.00 MBytes  50.3 Mbits/sec
[  3] 10.0-11.0 sec  8.75 MBytes  73.4 Mbits/sec
[  3] 11.0-12.0 sec  56.0 MBytes   470 Mbits/sec
[  3] 12.0-13.0 sec   108 MBytes   906 Mbits/sec
[  3] 13.0-14.0 sec  11.0 MBytes  92.3 Mbits/sec
[  3] 14.0-15.0 sec  6.00 MBytes  50.3 Mbits/sec
[  3] 15.0-16.0 sec  9.25 MBytes  77.6 Mbits/sec
[  3] 16.0-17.0 sec  5.75 MBytes  48.2 Mbits/sec
[  3] 17.0-18.0 sec   152 MBytes  1275 Mbits/sec
[  3] 18.0-19.0 sec  6.00 MBytes  50.3 Mbits/sec
[  3] 19.0-20.0 sec  6.25 MBytes  52.4 Mbits/sec
[  3]  0.0-20.1 sec   746 MBytes   312 Mbits/sec
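
For anyone who wants to reproduce the two configurations: the offload flags
can be toggled roughly as below. This is a sketch, not the exact commands used
here, and the interface names (xn0 for the FreeBSD netfront device, vif1.0 for
the dom0 backend) are assumptions for a typical Xen setup.

In the FreeNAS (FreeBSD) domU:
# ifconfig xn0 -lro -tso        (re-enable with: ifconfig xn0 lro tso)

In dom0 (Linux), on the backend vif if needed:
# ethtool -K vif1.0 tso off gso off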

On Thu, Apr 4, 2013 at 8:00 PM, G.R. <firemeteor@xxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi guys,
> I'm running a FreeNAS domU (FreeBSD 8.3 based, ZFS v28, 2 vcpus mapped to
> the same HT-capable core) to serve storage for all purposes, including other
> domUs running on the same host. I did some benchmarking to understand how
> well it works, and the results are kind of confusing.
>
> In summary, network performance between domains on the same host is worse
> than expected, and NFS service to other domains on the same host is not
> reliable. The latter issue makes it impossible to serve VM disk images from
> the nas. Has anybody run into the same issues?
>
> Benchmark details below:
>
> 1. iperf bench against an external host (through the physical network).
> The bench can fully utilize the Gigabit network.
> ext => nas ~930Mbps
> nas => ext ~900Mbps
> TSO / LRO config has no impact on the bench result.
>
> 2. iperf bench against dom0 (through the virtual bridge, handled fully by
> the driver stack)
> The bench result depends heavily on the TSO / LRO config.
> dom0 => nas (LRO=1) 60Mbps ~ 300Mbps (varies a lot from run to run; xentop
> reports low cpu usage in both domains, ~20%, tracking the throughput)
> dom0 => nas (LRO=0) 10900Mbps (stable, high cpu usage ~190%)
>
> nas => dom0 (TSO=1) 470Mbps (stable result, low cpu usage ~40%)
> nas => dom0 (TSO=0) 860Mbps (stable result, high cpu usage ~170%; sometimes
> it stabilizes at ~2000Mbps, not sure how to reproduce that)
>
> Similar results can be observed when talking to another Linux domU, although
> a Linux domU generally achieves better performance than dom0.
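>
> A quick way to double-check which offloads are actually in effect for each
> of the configurations above (a sketch; the xn0 / vif1.0 names are
> assumptions):
> # ifconfig xn0 | grep options        (FreeBSD domU: look for TSO4 / LRO)
> # ethtool -k vif1.0 | egrep 'segmentation|large-receive'   (Linux dom0)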
>
> 3. As a comparison, here is an iperf test between a Linux domU and dom0.
> dom0 => lin domU ~10300Mbps
> lin domU => dom0 ~13500Mbps
>
> 4. file transfer from an external host to the nas
> PS: Here the bottleneck is the network rather than the disk.
> NFS: 50MB/s
> FTP: 102MB/s
> I'm not quite sure how much performance to expect from NFS, but it does
> appear to be a bottleneck.
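>
> In case the NFS number is partly a mount-option artifact, a sketch of a
> Linux-side mount forcing TCP and larger block sizes (the export path and
> mount point are made up):
> # mount -t nfs -o tcp,rsize=65536,wsize=65536 192.168.1.9:/mnt/tank /mnt/nas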
>
> 5. file transfer from dom0 to the nas
> Copying to the NFS mount served by the nas domU can cause a hang, almost
> 100% reproducible.
> Typically the hang is limited to the nas and to the dom0 processes that
> access the mount.
> "umount -f" resolves the hang in that case, but sometimes all ssh sessions
> to dom0 hang as well; not sure whether that means a dom0 network hang or a
> full freeze.
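>
> One thing worth trying to keep the hang recoverable: a soft / interruptible
> mount on the dom0 side, so a stalled server times out instead of wedging the
> client processes. A sketch (the export path is just a placeholder):
> # mount -t nfs -o soft,intr,timeo=50,retrans=3 192.168.1.9:/mnt/tank /mnt/nfs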
>
> Tcpdump seems to show that some traffic is lost, but netstat / ifconfig
> report no packet drop statistics on either end.
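>
> Roughly the kind of netstat / ifconfig checks meant above (the exact
> invocations and the vif name are assumptions):
> # netstat -s -p tcp | grep -i retransmit     (in the FreeBSD domU)
> # netstat -i                                 (Ierrs / Oerrs columns)
> # ifconfig vif1.0                            (dropped counters in dom0)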
>
> Thanks,
> Timothy

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
