
Re: [Xen-devel] [PATCH] xen-netfront: drop skb when skb->len > 65535

On 2013-3-2 21:32, Wei Liu wrote:
On Sat, Mar 02, 2013 at 02:54:17AM +0000, Ian Campbell wrote:
On Fri, 2013-03-01 at 17:00 +0000, Wei Liu wrote:
On Fri, 2013-03-01 at 16:48 +0000, Jan Beulich wrote:
On 01.03.13 at 17:31, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
The `size' field of the Xen network wire format is uint16_t; anything bigger
than 65535 will overflow.
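
(To make the overflow concrete, here is a minimal userspace sketch; the
struct and field names follow Xen's public netif.h, and 65538 is the
oversized length reported later in this thread:)

#include <stdio.h>
#include <stdint.h>

/* The TX request in the Xen network wire format (xen_netif_tx_request
 * in the public netif.h) carries the packet length in a uint16_t
 * `size' field. */
int main(void)
{
    unsigned int skb_len = 65538;            /* oversized skb->len seen in practice */
    uint16_t wire_size = (uint16_t)skb_len;  /* what actually goes on the ring */

    /* 65538 truncates to 65538 % 65536 == 2, so netback sees a request
     * whose size no longer matches the payload it describes. */
    printf("skb->len = %u -> wire `size' = %u\n", skb_len, wire_size);
    return 0;
}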

The punishment introduced by XSA-39 is quite harsh: DomU is disconnected
when it is discovered to be sending corrupted skbs. However, it looks like
the Linux kernel will sometimes generate such bad skbs, so drop them in
netfront before sending them over to netback, to avoid being disconnected.
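
(The proposed check amounts to something like the sketch below. This is an
outline of the idea rather than the literal patch; XEN_NETIF_MAX_TX_SIZE is
assumed to be 0xFFFF, the largest value the uint16_t `size' field can carry:)

/* Sketch of the drop logic in netfront's transmit path. */
static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
    if (unlikely(skb->len > XEN_NETIF_MAX_TX_SIZE)) {
        net_alert_ratelimited("xennet: skb->len = %u, too big for wire format\n",
                              skb->len);
        dev_kfree_skb_any(skb);     /* drop instead of corrupting the ring */
        dev->stats.tx_dropped++;
        return NETDEV_TX_OK;        /* the skb was consumed, not requeued */
    }
    /* ... normal slot accounting and ring submission ... */
    return NETDEV_TX_OK;
}
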
While fixing the frontend is certainly desirable, we can't expect
everyone to deploy fixed netfronts in all their VMs - some OS
versions used in there may even be out of service. So we
ought to find a way to also more gracefully deal with the
situation in netback, without re-opening the security issue
that prompted those changes.

Regarding the punishment bit, I think it's worth discussing a bit.
Yes, the trick is figuring out what to do without reintroducing the
softlockup which XSA-39 fixed.

Perhaps we should silently consume (and drop) oversized skbs and only
shut down the rings if they also consume too many (FSVO "too many")
slots?
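
(One way to read that suggestion, as a rough netback-side sketch - the
per-vif counter and the threshold here are hypothetical placeholders,
though netbk_fatal_tx_err() is the disconnect path the XSA-39 fix added:)

#define OVERSIZE_SLOT_LIMIT 32          /* placeholder, FSVO "too many" */

static void netbk_handle_oversize(struct xenvif *vif, unsigned int slots)
{
    vif->dropped_slots += slots;        /* hypothetical per-vif counter */

    if (vif->dropped_slots > OVERSIZE_SLOT_LIMIT) {
        netbk_fatal_tx_err(vif);        /* guest keeps misbehaving: disconnect */
        return;
    }
    /* Otherwise consume the slots, release the grants and drop the
     * packet, but leave the rings connected. */
}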

But the bug has always been there; it just drew no attention until XSA-39
revealed it. It ought to be fixed anyway. :-)
I would have sworn that skb->len was also limited to 64k, but looking at
the header I see it is actually an int and the only limit of that sort
is related to MAX_SKB_FRAGS (which doesn't actually limit the total
size).
I had the impression that skb->len was limited to 64k, too. But it
turned out I was wrong.
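
(For concreteness, a little arithmetic: with the usual 3.x-era definition
of MAX_SKB_FRAGS and 4 KiB pages - both assumptions - the fragment budget
alone already exceeds 64k:)

#include <stdio.h>

/* skb->len is an unsigned int in struct sk_buff, and MAX_SKB_FRAGS
 * caps the number of fragments, not the total byte count.  The
 * definition below mirrors skbuff.h of that era and assumes
 * PAGE_SIZE == 4096. */
int main(void)
{
    unsigned int page_size     = 4096;
    unsigned int max_skb_frags = 65536 / page_size + 2;    /* 18 */
    unsigned int max_frag_data = max_skb_frags * page_size;

    /* 18 * 4096 = 73728 > 65535, before even counting the linear
     * area, so nothing here keeps skb->len within a uint16_t. */
    printf("MAX_SKB_FRAGS = %u, max fragment bytes = %u\n",
           max_skb_frags, max_frag_data);
    return 0;
}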

OOI how big were the skbs you were seeing?
As Nick (npegg@xxxxxxxxxx) pointed out in his email, he saw a size of 65538.
I can reproduce this as well by setting the vif's MTU to 100 and then running
iperf. 100 was just a random number I came up with when playing with
fragmentation.

Did you notice any GSO packets during your iperf test?

Thanks
Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel