
Re: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread model



On Mon, 2012-01-16 at 10:14 +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-
> > bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Wei Liu
> > Sent: 13 January 2012 16:59
> > To: Ian Campbell; konrad.wilk@xxxxxxxxxx; xen-
> > devel@xxxxxxxxxxxxxxxxxxx; netdev@xxxxxxxxxxxxxxx
> > Cc: Wei Liu (Intern)
> > Subject: [Xen-devel] [RFC PATCH 3/6] netback: switch to NAPI + kthread
> > model
> > 
> > This patch implements a 1:1 model netback. It uses NAPI and a kthread
> > to do the heavy lifting:
> > 
> >   - NAPI is used for guest side TX (host side RX)
> >   - kthread is used for guest side RX (host side TX)
> > 
> > This model provides better scheduling fairness among vifs. It also lays the
> > foundation for future work.
> > 
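For illustration only, the 1:1 model roughly implies a per-vif shape like
the sketch below. All of the names here (xenvif, xenvif_tx_action,
xenvif_rx_action, xenvif_rx_work_todo) are placeholders, not necessarily
what the patch uses:

        /* One NAPI instance and one kernel thread per vif. */
        struct xenvif {
                struct net_device *dev;
                struct napi_struct napi;   /* guest TX, i.e. host RX */
                struct task_struct *task;  /* guest RX, i.e. host TX */
                wait_queue_head_t wq;
                /* ring state, grant handles, ... */
        };

        /* NAPI poll: consume up to 'budget' guest transmit requests.
         * xenvif_tx_action() stands in for the real TX processing. */
        static int xenvif_poll(struct napi_struct *napi, int budget)
        {
                struct xenvif *vif = container_of(napi, struct xenvif, napi);
                int work_done = xenvif_tx_action(vif, budget);

                if (work_done < budget)
                        napi_complete(napi);
                return work_done;
        }

        /* Per-vif kthread: push frames into the guest RX ring whenever
         * work is queued. xenvif_rx_work_todo()/xenvif_rx_action() are
         * placeholders for the real RX processing. */
        static int xenvif_kthread(void *data)
        {
                struct xenvif *vif = data;

                while (!kthread_should_stop()) {
                        wait_event_interruptible(vif->wq,
                                        xenvif_rx_work_todo(vif) ||
                                        kthread_should_stop());
                        if (kthread_should_stop())
                                break;
                        xenvif_rx_action(vif);
                }
                return 0;
        }

Since each vif then gets its own NAPI context and its own thread, both
scheduled by the kernel rather than funnelled through shared tasklets, the
"better scheduling fairness among vifs" falls out of ordinary kernel
scheduling.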
> > The major defect of the current implementation is that the NAPI poll
> > handler does not actually disable the interrupt. Xen differs from real
> > hardware here and requires some additional tuning of the ring macros.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> > ---
> [snip]
> > 
> >     struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
> >     struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
> > @@ -100,42 +91,14 @@ struct xen_netbk {
> >     struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
> > };
> > 
> 
> Keeping these big inline arrays might cause scalability issues.
> pending_tx_info should arguably be tied more closely to, and possibly
> implemented within, your page pool code.

For pending_tx_info that probably makes sense, since there is a 1:1
mapping between page pool entries and pending_tx_info entries.
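If the page pool ends up owning it, that could be as simple as something
like the following (untested, and page_pool_entry here is a hypothetical
name for whatever per-page struct the page pool keeps):

        /* Hypothetical page pool entry: the per-request TX bookkeeping
         * lives alongside the page it refers to, instead of in a big
         * per-netbk pending_tx_info[] array. */
        struct page_pool_entry {
                struct page *page;
                struct pending_tx_info tx_info;
        };

        static inline struct pending_tx_info *
        entry_tx_info(struct page_pool_entry *ent)
        {
                return &ent->tx_info;
        }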

For some of the others, the arrays are runtime scratch space used by a
netback during each processing pass. Since, regardless of the number of
VIFs, there can only ever be nr_online_cpus netbacks active at once,
perhaps per-CPU scratch space (with appropriate locking etc.) is the way
to go.
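
That is, something along these lines (untested sketch with invented
names; the arrays are the ones currently embedded in struct xen_netbk,
and <linux/percpu.h> provides the per-CPU helpers):

        /* Scratch space for a single processing pass; only valid while
         * the pass runs with preemption disabled on this CPU. */
        struct netbk_scratch {
                struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
                struct netbk_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
                /* ... any other per-pass arrays ... */
        };

        static DEFINE_PER_CPU(struct netbk_scratch, netbk_scratch);

        /* In the TX/RX processing paths: */
        struct netbk_scratch *scratch = get_cpu_ptr(&netbk_scratch);
        /* ... run one complete pass using scratch->tx_copy_ops etc. ... */
        put_cpu_ptr(&netbk_scratch);

That keeps the footprint proportional to the number of CPUs rather than
the number of VIFs.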

Ian.


