
Re: [Xen-devel] [PATCH 2/6] xen-netfront: reduce gso_max_size to account for ethernet header



On 25/03/13 18:39, David Miller wrote:
> From: Wei Liu <wei.liu2@xxxxxxxxxx>
> Date: Mon, 25 Mar 2013 18:32:06 +0000
>
>> On Mon, Mar 25, 2013 at 04:18:09PM +0000, David Miller wrote:
>>> From: Wei Liu <wei.liu2@xxxxxxxxxx>
>>> Date: Mon, 25 Mar 2013 11:08:18 +0000

>>>> The maximum packet size, including the ethernet header, that the
>>>> netfront/netback wire format can handle is 65535 bytes. Reduce
>>>> gso_max_size accordingly.
>>>>
>>>> Drop the skb and print a warning when skb->len > 65535. This 1) saves
>>>> the effort of sending a malformed packet to netback, and 2) helps spot
>>>> misconfiguration of netfront in the future.
>>>>
>>>> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
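
For anyone following along, the check described above would look roughly
like this in netfront's start_xmit path (a sketch only; the constant name
XEN_NETIF_MAX_TX_SIZE is my assumption for the 65535-byte wire-format
limit, and net_alert_ratelimited() is one way to keep the warning from
flooding the log):

        #define XEN_NETIF_MAX_TX_SIZE 0xFFFF    /* assumed name for the limit */

        /* Drop oversized skbs instead of handing netback a malformed packet. */
        if (unlikely(skb->len > XEN_NETIF_MAX_TX_SIZE)) {
                net_alert_ratelimited("xennet: skb->len = %u, too big for wire format\n",
                                      skb->len);
                goto drop;
        }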
>>> This is effectively the default already; you don't need to change this
>>> value explicitly.
>>>
>>> ->gso_max_size is set by default to 65536, and TCP then performs this
>>> calculation:
>>>
>>>                 xmit_size_goal = ((sk->sk_gso_max_size - 1) -
>>>                                   inet_csk(sk)->icsk_af_ops->net_header_len -
>>>                                   inet_csk(sk)->icsk_ext_hdr_len -
>>>                                   tp->tcp_header_len);
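
With a plain IPv4/TCP socket (20-byte IP header, no extension headers,
20-byte TCP header) that works out to:

        xmit_size_goal = (65536 - 1) - 20 - 0 - 20 = 65495

so payload plus TCP and IP headers (65495 + 20 + 20 = 65535) just fits the
maximum IP datagram size.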
>> OK. But I see a similar fix for a physical NIC (commit b7e5887e0e414b);
>> am I missing something here?
>>
>> And the symptom is that if I don't reserve headroom I see skb->len =
>> 65538. Can you shed some light on this?

I think the problem is that the netback wire format's 65535-byte limit includes the ethernet header.

The xmit_size_goal calculation takes into account the IP header, IP options and TCP header, but not the ethernet header itself.
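
Concretely, the resulting frame can be up to:

        65535 (max IP datagram) + 14 (ETH_HLEN) = 65549 bytes

so skb->len can exceed the 65535-byte wire-format limit by up to the size
of the ethernet header.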

The Emulex driver seems to have a similar problem with the maximum packet size including the ethernet header:
http://patchwork.ozlabs.org/patch/164818/

I think we should subtract VLAN_ETH_HLEN so that we can trunk VLANs through netback safely.
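
Something like this at device setup time would do it (a sketch under the
same assumed XEN_NETIF_MAX_TX_SIZE constant as above; VLAN_ETH_HLEN is the
18-byte tagged ethernet header from <linux/if_vlan.h>):

        #include <linux/if_vlan.h>      /* VLAN_ETH_HLEN == ETH_HLEN + VLAN_HLEN == 18 */

        /* Leave room for a VLAN-tagged ethernet header within the
         * 65535-byte netfront/netback wire-format limit. */
        netif_set_gso_max_size(netdev, XEN_NETIF_MAX_TX_SIZE - VLAN_ETH_HLEN);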


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

