
Re: [Xen-devel] [PATCH net-next V7 3/4] xen-netback: coalesce slots in TX path and fix regressions



On Tue, Apr 30, 2013 at 03:04:48PM +0100, Jan Beulich wrote:
> >>> On 22.04.13 at 14:20, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> > --- a/include/xen/interface/io/netif.h
> > +++ b/include/xen/interface/io/netif.h
> > @@ -13,6 +13,24 @@
> >  #include <xen/interface/grant_table.h>
> >  
> >  /*
> > + * Older implementations of the Xen network frontend / backend have an
> > + * implicit dependency on MAX_SKB_FRAGS as the maximum number of
> > + * ring slots an skb can use. Netfront / netback may not work as
> > + * expected when the frontend and backend have different values of
> > + * MAX_SKB_FRAGS.
> > + *
> > + * A better approach is to add a mechanism for netfront / netback to
> > + * negotiate this value. However we cannot fix all possible
> > + * frontends, so we need to define a value which states the minimum
> > + * number of slots a backend must support.
> > + *
> > + * The minimum value derives from the older Linux kernel's
> > + * MAX_SKB_FRAGS (18), which has proved to work with most frontends.
> > + * Any new backend which doesn't negotiate with the frontend should
> > + * expect the frontend to send a valid packet using slots up to this
> > + * value.
> > + */
> > +#define XEN_NETIF_NR_SLOTS_MIN 18
> > +
> > +/*
> >   * Notifications after enqueuing any type of message should be conditional on
> >   * the appropriate req_event or rsp_event field in the shared ring.
> >   * If the client sends notification for rx requests then it should specify
> 
> Just like with the other public header change in this series - care
> to submit a patch against xen-unstable, to have the master copy
> of the header updated?
> 

Re all the header changes, I will post a separate patch for Xen.


Wei.

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
