Re: [Xen-devel] [PATCH V5 net-next 4/5] xen-netfront: Add support for multiple queues
On Mon, Feb 24, 2014 at 02:33:06PM +0000, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@xxxxxxxxxx>
>
> Build on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
>
> Check XenStore for multi-queue support, and set up the rings and event
> channels accordingly.
>
> Write ring references and event channels to XenStore in a queue
> hierarchy if appropriate, or flat when using only one queue.
>
> Update the xennet_select_queue() function to choose the queue on which
> to transmit a packet based on the skb hash result.
>
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@xxxxxxxxxx>
> ---
>  drivers/net/xen-netfront.c | 178 ++++++++++++++++++++++++++++++++++----------
>  1 file changed, 140 insertions(+), 38 deletions(-)
>
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 4f5a431..470d6ed 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -57,6 +57,12 @@
>  #include <xen/interface/memory.h>
>  #include <xen/interface/grant_table.h>
>
> +/* Module parameters */
> +unsigned int xennet_max_queues;
> +module_param(xennet_max_queues, uint, 0644);
> +MODULE_PARM_DESC(xennet_max_queues,
> +	"Maximum number of queues per virtual interface");
> +

Maybe I'm nit-picking here, but exposing xennet_max_queues as a sysfs knob
in the frontend versus xenvif_max_queues in the backend doesn't look very
good to me -- userspace tools would need to query different knobs for the
frontend and the backend. I think it makes sense to use a unified name on
both sides. You could either use xenvif_max_queues as the backend does, or
simply max_queues for both frontend and backend.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
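(For illustration only: one way to satisfy Wei's suggestion without renaming
the internal variables would be module_param_named(), which decouples the
C symbol from the sysfs knob name. The "max_queues" name below is the
reviewer's proposal, not code from the patch under review.)

```c
/* Hypothetical sketch, not part of this patch: both drivers keep their
 * existing internal symbols but export the same sysfs parameter name,
 * so tools can read the same knob on either side:
 *   /sys/module/xen_netfront/parameters/max_queues
 *   /sys/module/xen_netback/parameters/max_queues
 */
#include <linux/moduleparam.h>

/* drivers/net/xen-netfront.c */
static unsigned int xennet_max_queues;
module_param_named(max_queues, xennet_max_queues, uint, 0644);
MODULE_PARM_DESC(max_queues,
		 "Maximum number of queues per virtual interface");

/* drivers/net/xen-netback/netback.c */
unsigned int xenvif_max_queues;
module_param_named(max_queues, xenvif_max_queues, uint, 0644);
MODULE_PARM_DESC(max_queues,
		 "Maximum number of queues per virtual interface");
```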