
Re: [Xen-devel] large packet support in netfront driver and guest network throughput



On Tue, Sep 17, 2013 at 10:09:21AM +0800, annie li wrote:
[...]
> >>>>Is the packet, in transit from netback to netfront, segmented into
> >>>>MTU-sized chunks? Is GRO not supported in the guest?
> >>>Here is what I see in the guest, with the iperf server running in the guest
> >>>and the iperf client running in Dom0. Tcpdump was run with the rune you provided.
> >>>
> >>>10.80.238.213.38895 > 10.80.239.197.5001: Flags [.], seq
> >>>5806480:5818064, ack 1, win 229, options [nop,nop,TS val 21968973 ecr
> >>>21832969], length 11584
> >>>
> >>>This is an upstream kernel. The throughput from Dom0 to DomU is ~7.2Gb/s.
> >>Thanks for your reply. The tcpdump was captured on the dom0 hosting the guest
> >>[at both the vif and the physical interface], i.e. on the receive path of the
> >>server. The iperf server was running on the guest (10.84.20.213) and the
> >>client was on another guest (on a different server) with IP 10.84.20.214. The
> >>traffic was between two guests, not between dom0 and the guest.
> >>
> >>>>I am seeing extremely low throughput on a 10Gb/s link. Two Linux guests
> >>>>(CentOS 6.4 64-bit, 4 VCPUs and 4GB of memory) are running on two different
> >>>>XenServer 6.1 hosts, and an iperf session between them shows at most 3.2 Gbps.
> >>>XenServer might use a different Dom0 kernel with its own tuning. You could
> >>>also try contacting XenServer support for a better idea.
> >>>
> >>XenServer 6.1 is running a 2.6.32.43 kernel. Since the tcpdump suggests the
> >>issue is in the netfront driver, I thought I'd post it here. Note that the
> >>checksum offloads of the interfaces (virtual and physical) were not touched;
> >>the default setting (on) was used.
> >>
> >>>In general, off-host communication can be affected by various things. It
> >>>would be quite useful to identify the bottleneck first.
> >>>
> >>>Try to run:
> >>>1. Dom0 to Dom0 iperf (or your workload)
> >>>2. Dom0 to DomU iperf
> >>>3. DomU to Dom0 iperf
> >>I tried dom0 to dom0 and I got 9.4 Gbps, which is what I expected (with GRO
> >>turned on in the physical interface). However, when I run guest to guest,
> >>throughput falls off. Are large packets not supported in netfront? I thought
> >>otherwise. I looked at the code and I do not see any call to
> >>napi_gro_receive(); rather, it is using netif_receive_skb(). netback seems
> >>to be sending GSO packets to netfront, but they are being segmented to
> >>1500 bytes (as it appears from the tcpdump).
> >>
> >OK, I get your problem.
> >
> >Indeed netfront doesn't make use of the GRO API at the moment.
> 
> This is true.
> But I am wondering why large packets are not segmented into MTU size
> with the upstream kernel. I did see large packets with the upstream kernel
> on the receiving guest (tested between two DomUs on the same host).
> 

I think Anirban's setup is different. The traffic is from a DomU on
another host.

I will need to set up a testing environment with a 10G link to test this.

Anirban, can you share your setup, especially the DomU kernel version? Are
you using an upstream kernel in the DomU?
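
As noted above, netfront currently passes each received skb straight up with
netif_receive_skb(), so MTU-sized packets coming in on this path are never
coalesced back into large packets inside the guest. Converting the RX path to
the GRO API would let the guest stack merge consecutive TCP segments back into
large packets before handing them to TCP. Below is a minimal sketch of that
pattern; it is illustrative only, and the function name (rx_deliver) and the
napi plumbing are assumptions, not the actual xen-netfront code.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/*
 * Illustrative RX delivery loop for a NAPI driver (hypothetical names,
 * not the real xen-netfront functions or structures).
 */
static void rx_deliver(struct napi_struct *napi, struct sk_buff_head *rxq)
{
	struct sk_buff *skb;

	while ((skb = __skb_dequeue(rxq)) != NULL) {
		/*
		 * Old pattern: hand each segment to the stack directly,
		 * so the guest sees one packet per MTU-sized segment:
		 *
		 *	netif_receive_skb(skb);
		 *
		 * GRO pattern: queue the skb for coalescing; consecutive
		 * TCP segments get merged back into large packets before
		 * being passed up.
		 */
		napi_gro_receive(napi, skb);
	}
	/*
	 * Packets still held by GRO are flushed when napi_complete()
	 * runs at the end of the poll cycle.
	 */
}

This assumes GRO is enabled on the guest's interface, which it normally is by
default (it can be checked with "ethtool -k ethX").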

Wei.

> Thanks
> Annie
> >  I've added
> >this to my list to work on. I will keep you posted when I get to that.
> >
> >Thanks!
> >
> >Wei.
> >
> >>>In order to get line rate, you need to at least get line rate from Dom0
> >>>to Dom0 IMHO. 10Gb/s line rate from guest to guest has not yet been
> >>>achieved at the moment.
> >>What is the current number, without VCPU pinning etc., for 1500-byte MTU? I
> >>am getting 2.2-3.2 Gbps for a 4-VCPU guest with 4GB of memory. It is the only
> >>VM running on that server, with no other traffic.
> >>
> >>-Anirban
> >>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

