Re: [Xen-devel] [PATCH 4/4] xen-netback: coalesce slots before copying
On Fri, Mar 22, 2013 at 11:19:56AM +0000, James Harper wrote:
> > > > On Thu, Mar 21, 2013 at 10:14:17PM +0000, James Harper wrote:
> > > > >
> > > > > > Actually it turns out GPLPV just stops counting at 20. If I keep
> > > > > > counting I can sometimes see over 1000 buffers per GSO packet
> > > > > > under Windows using "iperf -
> > > >
> > > > Do you think it is necessary to increase MAX_SKB_SLOTS_DEFAULT to 21?
> > >
> > > Doesn't really matter. Under Windows you have to coalesce anyway, and
> > > the number of cases where the skb count is 20 or 21 is very small, so
> > > there will be negligible gain and it will break guests that can't
> > > handle more than 19.
> >
> > It's not about performance, it's about usability. If the frontend uses
> > more slots than the backend allows, it gets disconnected. In case we
> > don't push the wrong value upstream, it is important to know whether 20
> > is enough for the Windows PV driver.
>
> Windows will accept whatever you throw at it (there may be some upper
> limit, but I suspect it's quite high). Whatever Linux will accept, it
> will be less than the 1000+ buffers that Windows can generate, so some
> degree of coalescing will be required for Windows->Linux.
>
> In GPLPV I already coalesce anything with more than 19 buffers, because
> I have no guarantee that Dom0 will accept anything more (and who knows
> what Solaris or BSD will accept, if those are still valid backends...),
> so whatever you increase Dom0 to won't matter, because I would still
> need to assume that Linux can't accept more than 19, until such time as
> Dom0 (or the driver domain) advertises in xenstore the maximum buffer
> count it can support...
>
> So do what you need to do to make Linux work, just don't put the
> erroneous comment that "windows has a maximum of 20 buffers" or whatever
> it was in the comments :)

OK, problem solved. :-)

Wei.
> James

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel