
Re: [Xen-devel] [PATCH net-next] xen-netfront: try linearizing SKB if it occupies too many slots



On Fri, May 16, 2014 at 06:04:34AM -0700, Eric Dumazet wrote:
> On Fri, 2014-05-16 at 12:08 +0100, Wei Liu wrote:
> > Some workloads, such as Redis, can generate SKBs which make use of
> > compound pages. Netfront doesn't quite like that, because it doesn't
> > want to send packets that occupy excessive slots to the backend, as
> > the backend might deem them malicious. On the flip side, these packets
> > are actually legit: the size check at the beginning of
> > xennet_start_xmit ensures that the packet size is below 64K.
> > 
> > So we linearize the SKB if it occupies too many slots. If the
> > linearization fails, the SKB is dropped.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> > Cc: David Vrabel <david.vrabel@xxxxxxxxxx>
> > Cc: Konrad Wilk <konrad.wilk@xxxxxxxxxx>
> > Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> > Cc: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
> > Cc: Zoltan Kiss <zoltan.kiss@xxxxxxxxxx>
> > ---
> >  drivers/net/xen-netfront.c |   17 ++++++++++++++---
> >  1 file changed, 14 insertions(+), 3 deletions(-)
> 
> This is likely to fail on a typical host.
> 

It's not that common to trigger this; I've only seen a few reports. In
fact, Stefan's report is the first one that comes with a method to
reproduce it.

I tested with redis-benchmark on a guest with 256MB RAM and only saw a
few "failed to linearize" messages; I never saw a single one with a 1GB
guest.

> What about adding a smart helper trying to aggregate consecutive
> smallest fragments into a single frag?
> 

Ideally that would be the better approach, but I'm afraid I won't be
able to look into it until early or mid June.
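
For what it's worth, one possible shape for such a helper is sketched
below. This is purely illustrative: the function name is made up, the
greedy merge policy is only one option, and highmem mapping, truesize
accounting, frag_list handling and the cloned-skb case are all left out.

/* Hypothetical helper: copy runs of consecutive small fragments into a
 * single freshly allocated page so that the skb needs fewer ring slots.
 * Assumes the frag pages are directly addressable (no highmem). */
static int xennet_merge_small_frags(struct sk_buff *skb)
{
	struct skb_shared_info *shinfo = skb_shinfo(skb);
	int i = 0, n = 0, k;

	while (i < shinfo->nr_frags) {
		unsigned int run = skb_frag_size(&shinfo->frags[i]);
		int j = i + 1;

		/* Extend the run while the combined data still fits in one page. */
		while (j < shinfo->nr_frags &&
		       run + skb_frag_size(&shinfo->frags[j]) <= PAGE_SIZE) {
			run += skb_frag_size(&shinfo->frags[j]);
			j++;
		}

		if (j - i > 1) {
			struct page *page = alloc_page(GFP_ATOMIC);
			unsigned int off = 0;

			if (!page) {
				/* Keep the skb consistent: slide the untouched
				 * tail down next to what was already merged. */
				memmove(&shinfo->frags[n], &shinfo->frags[i],
					(shinfo->nr_frags - i) * sizeof(skb_frag_t));
				shinfo->nr_frags = n + (shinfo->nr_frags - i);
				return -ENOMEM;
			}

			/* Copy the whole run into the new page, dropping the
			 * references on the pages it replaces. */
			for (k = i; k < j; k++) {
				memcpy(page_address(page) + off,
				       skb_frag_address(&shinfo->frags[k]),
				       skb_frag_size(&shinfo->frags[k]));
				off += skb_frag_size(&shinfo->frags[k]);
				put_page(skb_frag_page(&shinfo->frags[k]));
			}
			__skb_fill_page_desc(skb, n, page, 0, off);
		} else {
			shinfo->frags[n] = shinfo->frags[i];
		}
		n++;
		i = j;
	}

	shinfo->nr_frags = n;
	return 0;
}

xennet_start_xmit() could then call this before falling back to
skb_linearize() or dropping the packet, but that ordering is just a
guess at this point.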

Wei.

> This would be needed for bnx2x, for example, as well.
> 
