Re: [Xen-devel] [PATCH] public/io/netif.h: change semantics of "request-multicast-control" flag
> -----Original Message-----
> From: Ian Campbell [mailto:ian.campbell@xxxxxxxxxx]
> Sent: 21 January 2016 11:59
> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Ian Jackson; Keir (Xen.org); Jan Beulich; Tim (Xen.org)
> Subject: Re: [Xen-devel] [PATCH] public/io/netif.h: change semantics of
> "request-multicast-control" flag
>
> On Thu, 2016-01-21 at 11:48 +0000, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
> > > bounces@xxxxxxxxxxxxx] On Behalf Of Paul Durrant
> > > Sent: 20 January 2016 13:14
> > > To: Ian Campbell; xen-devel@xxxxxxxxxxxxxxxxxxxx
> > > Cc: Ian Jackson; Keir (Xen.org); Jan Beulich; Tim (Xen.org)
> > > Subject: Re: [Xen-devel] [PATCH] public/io/netif.h: change semantics of
> > > "request-multicast-control" flag
> > >
> > > > -----Original Message-----
> > > > From: Ian Campbell [mailto:ian.campbell@xxxxxxxxxx]
> > > > Sent: 20 January 2016 13:06
> > > > To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
> > > > Cc: Ian Jackson; Jan Beulich; Keir (Xen.org); Tim (Xen.org)
> > > > Subject: Re: [PATCH] public/io/netif.h: change semantics of "request-
> > > > multicast-control" flag
> > > >
> > > > On Wed, 2016-01-20 at 12:50 +0000, Paul Durrant wrote:
> > > > > My patch b2700877 "move and amend multicast control documentation"
> > > > > clarified use of the multicast control protocol between frontend
> > > > > and backend. However, it transpires that the restrictions that
> > > > > documentation placed on the "request-multicast-control" flag make
> > > > > it hard for a frontend to enable 'all multicast' promiscuous mode,
> > > > > in that to do so would require the frontend and backend to
> > > > > disconnect and re-connect.
> > > >
> > > > Do we therefore think that this document reflected reality, i.e.
> > > > might this not be "just" a documentation bug?
> > > >
> > > > (Or maybe we can't tell because the only previous implementation was
> > > > years ago in Solaris or something)
> > >
> > > That's my concern. I hope it's just a documentation bug, but I don't
> > > know. Also I've already done an implementation in Linux netback
> > > according to the restricted semantics.
> > >
> > > > > This patch adds a new "feature-dynamic-multicast-control" flag to
> > > > > allow a backend to advertise that it will watch
> > > > > "request-multicast-control", hence allowing it to be meaningfully
> > > > > modified by the frontend at any time rather than only when the
> > > > > frontend and backend are disconnected.
> > > >
> > > > Would allowing XEN_NETIF_EXTRA_TYPE_MCAST_{ADD,DEL} to take a bcast
> > > > address be easier on the backend, in that it would just need to be a
> > > > static feature rather than watching stuff on the fly?
> > >
> > > The documented semantics of the list are 'exact match', so sending a
> > > bcast address doesn't do much good with a backend that doesn't know to
> > > treat it specially; hence a frontend can't tell whether 'all multicast'
> > > mode is going to work without the extra feature flag. As for watching
> > > "request-multicast-control" vs. add/remove of bcast, the complexity of
> > > implementation is cheaper for the latter, but I think the former is
> > > 'nicer'.
> >
> > Are you ok with the xenstore watch approach (and leaving the patch as is)
> > or would you prefer to spec. the bcast address as a wildcard and submit a
> > new patch?
>
> I'm fine with the watch approach, was just suggesting the alternative in
> case it turned out to be much easier.
>

I already have an implementation of the watch approach which is now allowing
Windows logo testing to pass :-)

  Paul

> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
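
To make the proposal concrete, here is a minimal frontend-side sketch in C of
how the two xenstore keys discussed in the thread might be used under the new
semantics. It is illustrative only: it uses the Linux xenbus helpers
xenbus_scanf() and xenbus_printf(), and the function name and the policy
around it are assumptions made for the example, not code from netfront or
from the patch itself.

/*
 * Illustrative only: a frontend-side helper showing how
 * "request-multicast-control" could be toggled at runtime when the
 * backend advertises "feature-dynamic-multicast-control".  The key
 * names come from the thread; the function itself is hypothetical.
 */
#include <linux/errno.h>
#include <linux/types.h>
#include <xen/xenbus.h>

static int set_multicast_control(struct xenbus_device *dev, bool on)
{
	unsigned int dynamic = 0;

	/* Did the backend promise to watch the key while connected? */
	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "feature-dynamic-multicast-control",
			 "%u", &dynamic) != 1)
		dynamic = 0;

	/*
	 * Without the dynamic feature, the original documentation only
	 * allows the flag to change across a disconnect/re-connect, so
	 * refuse to flip it on a live connection.
	 */
	if (!dynamic && dev->state == XenbusStateConnected)
		return -EOPNOTSUPP;

	return xenbus_printf(XBT_NIL, dev->nodename,
			     "request-multicast-control", "%u", on ? 1 : 0);
}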
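
The 'exact match' point is easier to see against the data structure itself.
Below is a sketch of how a frontend populates the extra-info segment for
XEN_NETIF_EXTRA_TYPE_MCAST_ADD, using the structure and constant names as
they appear in Linux's copy of the header (include/xen/interface/io/netif.h);
the helper function is hypothetical, not taken from netfront. Because the
backend compares each stored address exactly against a frame's destination,
handing it the broadcast address only acts as an 'all multicast' wildcard if
the backend has been taught that convention, and without a feature flag the
frontend cannot tell whether it has been.

/*
 * Illustrative only: filling in the extra-info segment that adds one
 * exact-match entry to the backend's multicast filter.  Structure and
 * constant names follow Linux's include/xen/interface/io/netif.h; the
 * helper function is hypothetical.
 */
#include <linux/string.h>
#include <linux/types.h>
#include <xen/interface/io/netif.h>

static void fill_mcast_add(struct xen_netif_extra_info *extra,
			   const uint8_t mac[6])
{
	memset(extra, 0, sizeof(*extra));
	extra->type = XEN_NETIF_EXTRA_TYPE_MCAST_ADD;
	extra->flags = 0;	/* or XEN_NETIF_EXTRA_FLAG_MORE if more follow */

	/*
	 * The backend matches this address exactly against each frame's
	 * destination, so ff:ff:ff:ff:ff:ff is just another filter entry
	 * unless the backend has been taught to treat it as a wildcard,
	 * which a frontend cannot detect without a feature flag.
	 */
	memcpy(extra->u.mcast.addr, mac, 6);
}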