
Re: [Xen-devel] [PATCH 4/4] xen-netback: coalesce slots before copying



> >
> > This patch tries to coalesce tx requests when constructing grant copy
> > structures. It enables netback to deal with the situation when the
> > frontend's MAX_SKB_FRAGS is larger than the backend's MAX_SKB_FRAGS.
> >
> > It defines max_skb_slots, which is an estimate of the maximum number of
> > slots a guest can send; anything bigger than that is considered malicious.
> > It is currently set to 20, which should be enough to accommodate Linux
> > (16 to 19) and possibly Windows (19?).
> >
> > +/*
> > + * This is an estimation of the maximum possible frags a SKB might
> > + * have, anything larger than this is considered malicious. Typically
> > + * Linux has 16 to 19, Windows has 19(?).
> > + */
> 
> Could you remove the "Windows has 19(?)" comment? I don't think it's
> helpful, even with the "(?)"... I just checked, and Windows 2008R2 gives
> GPLPV a maximum of 20 buffers in all the testing I've done, and that's after
> the header is coalesced, so it's probably more than that. I'm pretty sure I
> tested Windows 2003 quite a while back and I could coax it into giving
> ridiculous numbers of buffers when using iperf with tiny buffers.
> 
> Maybe "Windows has >19" if you need to put a number on it?
> 

Actually, it turns out GPLPV just stops counting at 20. If I keep counting, I
can sometimes see over 1000 buffers per GSO packet under Windows using "iperf
-l50", so Windows will quite happily send thousands of buffers, and I don't
have any evidence that it wouldn't cope with a similar number on receive. FWIW.

(of course coalescing vs using 1000 ring slots is an obvious choice...)
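
For illustration only, the coalescing logic boils down to something like the
standalone sketch below. This is not the actual xen-netback code from the
patch; the struct, constants and function names are made up, and a real
implementation would be queuing grant copy operations rather than just
summing byte counts:

/*
 * Standalone sketch of the slot-coalescing idea discussed in this
 * thread.  Not the xen-netback implementation; names and values are
 * purely illustrative.
 */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE       4096
#define MAX_SKB_FRAGS   17      /* backend frag limit (illustrative value) */
#define MAX_SKB_SLOTS   20      /* anything above this treated as malicious */

/* One tx request slot as seen on the ring (size only, for this sketch). */
struct tx_slot {
    size_t size;
};

/*
 * Coalesce consecutive ring slots into at most MAX_SKB_FRAGS frags,
 * packing each frag with up to PAGE_SIZE bytes.  Returns the number of
 * frags used, or -1 if the guest sent too many slots or the data cannot
 * fit even after coalescing.
 */
static int coalesce_slots(const struct tx_slot *slots, int nr_slots,
                          size_t frag_len[MAX_SKB_FRAGS])
{
    int frag = 0;
    size_t room = PAGE_SIZE;

    if (nr_slots <= 0)
        return 0;
    if (nr_slots > MAX_SKB_SLOTS)
        return -1;              /* treat as a malicious frontend */

    for (int i = 0; i < nr_slots; i++) {
        size_t left = slots[i].size;

        while (left > 0) {
            size_t chunk;

            if (room == 0) {    /* current frag is full, start a new one */
                frag++;
                room = PAGE_SIZE;
            }
            if (frag >= MAX_SKB_FRAGS)
                return -1;      /* cannot fit even after coalescing */

            chunk = left < room ? left : room;
            frag_len[frag] += chunk;  /* real code would emit a grant copy op here */
            room -= chunk;
            left -= chunk;
        }
    }
    return frag + 1;            /* number of frags actually used */
}

int main(void)
{
    /* 19 tiny slots, roughly what a small-buffer iperf run might produce. */
    struct tx_slot slots[19];
    size_t frag_len[MAX_SKB_FRAGS] = { 0 };

    for (int i = 0; i < 19; i++)
        slots[i].size = 50;

    int used = coalesce_slots(slots, 19, frag_len);
    printf("19 slots of 50 bytes coalesce into %d frag(s)\n", used);
    return 0;
}

With the small-buffer case above (iperf -l50), even a few hundred 50-byte
slots add up to only a few pages of data, so the backend can pack them into a
handful of frags well under MAX_SKB_FRAGS.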

James

