
Re: [Xen-devel] Xen 4.9 Development Update



On Thu, Jan 26, 2017 at 12:27:15PM +0000, Julien Grall wrote:
> Hi,
> 
> On 09/12/2016 19:09, Andrew Cooper wrote:
> > On 09/12/16 19:01, Stefano Stabellini wrote:
> > > On Fri, 9 Dec 2016, Oleksandr Andrushchenko wrote:
> > > > On 12/09/2016 03:57 PM, Pasi Kärkkäinen wrote:
> > > > > On Fri, Dec 09, 2016 at 02:57:04PM +0200, Oleksandr Andrushchenko 
> > > > > wrote:
> > > > > > > > Should we have a section on new PV drivers? If so, I suggest
> > > > > > > > adding:
> > > > > > > > - Xen transport for 9pfs
> > > > > > > > - PV Calls
> > > > > > > Good idea. We could also include DRM and PV Sound (CC Oleksandr).
> > > > > > > 
> > > > > > This is a great idea. Let me explain what we have and what the 
> > > > > > direction
> > > > > > is:
> > > > > > 1. Frontends which we already have working, but which need
> > > > > > refactoring/cleanup:
> > > > > > 1.1. PV sound
> > > > > > 1.2. PV DRM
> > > > > > 1.3. DISPL protocol; I will push v1 for review right after sndif is
> > > > > > done
> > > > > > 1.4. PV DRM mapper (Dom0 generic DRM driver to implement DRM zero
> > > > > > copy via DRM Prime buffer sharing)
> > > > > > 1.5. PV events: not done yet, but we are considering [1]. If it fits
> > > > > > and is maintained, then we'll probably stick to it; otherwise a new
> > > > > > PV protocol will be created
> > > > > > 
> > > > > > 2. Backends already implemented for the above frontends:
> > > > > > 2.1. A unified library for Xen backends (libxenbe)
> > > > > > 2.2. DRM + Wayland
> > > > > > 2.3. ALSA
> > > > > > 2.4. Events not implemented yet
> > > > > > 
> > > > > > All the above sources are available on *public* GitHub repos
> > > > > > (I can provide links on request) and the intention is to
> > > > > > upstream them.
> > > > > > 
> > > > > Please do post the links..
> > > > Please note these are all WIP:
> > > > 1. Frontends
> > > > https://github.com/andr2000?tab=repositories
> > > > 2. Backends
> > > > https://github.com/al1img?tab=repositories
> > > Now, I don't want to sound pessimistic, but I thought I was being
> > > audacious when I wrote both PV Calls and 9pfs for 4.9 - do you really
> > > think it is feasible to complete upstreaming of PV sound, PV DRM, DISPL,
> > > PV DRM frontends and backends, all by April? I would probably reduce the
> > > list a bit.
> > 
> > I think it would be good to maintain two lists: one of "the stuff people
> > are working on overall", and one of "the subset of it intended/expected
> > for the forthcoming release".
> > 
> > Stuff will invariably slip, but even if the work isn't intended for the
> > forthcoming release, it is still useful to see if anyone in the
> > community is working on a related topic.
> 
> I am thinking of re-introducing the state of a series (none, fair, ok, good,
> done), as was done in the past [1].
> 
> "The states are: none -> fair -> ok -> good -> done
> 
> none - nothing yet
> fair - still working on it, patches are prototypes or RFC
> ok   - patches posted, acting on review
> good - some last minute pieces
> done - all done, might have bugs"
> 

We (I?) ditched them because they weren't that useful.  "None" is just
signalling of intent, which doesn't carry enough meaningful
information.  "Fair", "ok" and "good" are all subjective -- we've seen
"good" series slip due to last-minute issues.

If we really want a status field, I would suggest more concrete states:

  design / design.vX -> rfc / rfc.vX -> wip / wip.vX -> committed

These are more tangible and objective.
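
For illustration (the state tags shown here are hypothetical), an entry in
the development update could then read something like:

  *  PV Calls frontend/backend (rfc.v2)
    -  Stefano Stabellini

  *  Xen transport for 9pfs (wip.v1)
    -  Stefano Stabellini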

> Also, rather than having two separate lists as you suggested, I would
> combine the two and differentiate the items planned for the next release with
> a tag (let's say [next]).
> 

No opinion on this.

Wei.

> Any opinions?
> 
> [1]
> https://lists.xenproject.org/archives/html/xen-devel/2015-02/msg01816.html
> 
> -- 
> Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

