
Re: [Xen-devel] large packet support in netfront driver and guest network throughput



On Tue, Sep 17, 2013 at 05:53:43PM +0000, Anirban Chakraborty wrote:
> 
> 
> On 9/17/13 1:25 AM, "Wei Liu" <wei.liu2@xxxxxxxxxx> wrote:
> 
> >On Tue, Sep 17, 2013 at 10:09:21AM +0800, annie li wrote:
> >><snip>
> >>>>I tried dom0 to dom0 and I got 9.4 Gbps, which is what I expected
> >>>>(with GRO enabled on the physical interface). However, when I run
> >>>>guest to guest, throughput falls off. Are large packets not
> >>>>supported in netfront? I thought otherwise. I looked at the code
> >>>>and I do not see any call to napi_gro_receive(); rather it is using
> >>>>netif_receive_skb(). netback seems to be sending GSO packets to
> >>>>netfront, but they are being segmented to 1500 bytes (as it appears
> >>>>from the tcpdump).
> >>>>
> >>>OK, I get your problem.
> >>>
> >>>Indeed netfront doesn't make use of GRO API at the moment.
> >> 
> >> This is true.
> >> But I am wondering why large packets are not segmented into
> >> MTU-sized chunks with the upstream kernel? I did see large packets
> >> with the upstream kernel on the receiving guest (tested between 2
> >> DomUs on the same host).
> >> 
> >
> >I think Anirban's setup is different. The traffic is from a DomU on
> >another host.
> >
> >I will need to set up a testing environment with a 10G link to test
> >this.
> >
> >Anirban, can you share your setup, especially the DomU kernel version?
> >Are you using an upstream kernel in DomU?
> 
> Sure..
> I have two hosts, say h1 and h2, both running XenServer 6.1.
> h1 runs a CentOS 6.4 64-bit guest, say guest1, and h2 runs an identical
> guest, guest2.
> 

Do you have the exact version of your DomU's kernel? Is it available
somewhere online?

> The iperf server is running on guest1, with the iperf client connecting
> from guest2.
> 
> I haven't tried with upstream kernel yet. However, what I found out is
> that the netback on the receiving host is transmitting GSO segments to the
> guest (guest1), but the packets are segmented at the netfront interface.
> 

I just tried: with a vanilla upstream kernel I can see large packets on
the DomU side.
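
For reference, the sort of check I mean (the interface name and port are
just examples; 5001 is iperf's default port):

  # on the receiving DomU, watch the frame lengths iperf produces
  tcpdump -i eth0 -nn tcp port 5001

  # confirm GRO is enabled on the receiving interface
  ethtool -k eth0 | grep generic-receive-offload

With large packets working you should see lengths well above the
1500-byte MTU in the tcpdump output.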

I also tried converting netfront to use the GRO API (hopefully I didn't
get it wrong), but I didn't see much improvement -- which is to be
expected, since I already saw large packets even without GRO.

If you fancy trying the GRO API, see the attached patch. Note that you
might need to do some contextual adjustment, as this patch is for the
upstream kernel.

Wei.

---8<---
From ca532dd11d7b8f5f8ce9d2b8043dd974d9587cb0 Mon Sep 17 00:00:00 2001
From: Wei Liu <wei.liu2@xxxxxxxxxx>
Date: Wed, 18 Sep 2013 16:46:23 +0100
Subject: [PATCH] xen-netfront: convert to GRO API

Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
---
 drivers/net/xen-netfront.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 36808bf..dd1011e 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -952,7 +952,7 @@ static int handle_incoming_queue(struct net_device *dev,
                u64_stats_update_end(&stats->syncp);
 
                /* Pass it up. */
-               netif_receive_skb(skb);
+               napi_gro_receive(&np->napi, skb);
        }
 
        return packets_dropped;
@@ -1051,6 +1051,8 @@ err:
        if (work_done < budget) {
                int more_to_do = 0;
 
+               napi_gro_flush(napi, false);
+
                local_irq_save(flags);
 
                RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
-- 
1.7.10.4
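
For anyone less familiar with the GRO API, here is a minimal sketch (not
netfront code; the my_* names are made up) of how napi_gro_receive() and
napi_gro_flush() typically fit into a NAPI poll handler:

#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>

static int my_poll(struct napi_struct *napi, int budget)
{
	struct my_adapter *adap = container_of(napi, struct my_adapter, napi);
	struct sk_buff *skb;
	int work_done = 0;

	while (work_done < budget && (skb = my_dequeue_rx(adap)) != NULL) {
		skb->protocol = eth_type_trans(skb, adap->netdev);
		/* Hand the skb to GRO instead of netif_receive_skb(); GRO
		 * may merge consecutive TCP segments into one large skb
		 * before passing it up the stack. */
		napi_gro_receive(napi, skb);
		work_done++;
	}

	if (work_done < budget) {
		/* Flush anything GRO is still holding before we stop
		 * polling, otherwise it waits for the next interrupt. */
		napi_gro_flush(napi, false);
		napi_complete(napi);
	}

	return work_done;
}

The napi_gro_flush() in the patch serves the same purpose: it pushes out
whatever GRO is still holding before netfront re-enables interrupts.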


> Annie's setup has both guests running on the same host, in which case
> packets are looped back.
> 
> -Anirban
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

