
Re: [Xen-devel] [PATCH 4/4] xen-netback: coalesce slots before copying

On Thu, Mar 21, 2013 at 10:14:17PM +0000, James Harper wrote:
> > 
> >> Actually it turns out GPLPV just stops counting at 20. If I keep
> >> counting I can sometimes see over 1000 buffers per GSO packet under
> >> Windows using "iperf -
> > 
> > Do you think it is necessary to increase MAX_SKB_SLOTS_DEFAULT to 21?
> > 
> Doesn't really matter. Under windows you have to coalesce anyway and the 
> number of cases where the skb count is 20 or 21 is very small so there will 
> be negligible gain and it will break guests that can't handle more than 19.

It's not about performance, it's about usability. If the frontend uses more
slots than the backend allows, it gets disconnected. To avoid pushing a
wrong value upstream, it is important to know whether 20 is enough for the
Windows PV driver.

> Has anyone done the benchmarks on if memcpy to coalesce is better or worse 
> than consuming additional ring slots? Probably OT here but I'm talking about 
> packets that might have 19 buffers but could fit on a page or two of 
> coalesced.

After this changeset the number of grant copy operations is greater than or
equal to the number of slots. I ran iperf as my functional test, and I also
noticed the result stayed within the same range as before this change.

A future improvement would be to use compound pages in the backend, which
could make the number of grant copy ops more or less equal to the number of
slots.

> James

Xen-devel mailing list