
Re: [Xen-devel] Xen 4.2 Release Plan / TODO

Someone pointed out that some people may not know that their name was on
this list, so I've CC'd everyone who is mentioned by name. Please let me
know if there is a problem with your entry.

On Mon, 2012-03-19 at 10:57 +0000, Ian Campbell wrote:
> So as discussed last week we now have a plan for a 4.2 release:
> http://lists.xen.org/archives/html/xen-devel/2012-03/msg00793.html
> The time line is as follows:
> 19 March      -- TODO list locked down        << WE ARE HERE
> 2 April               -- Feature Freeze
> Mid/Late April        -- First release candidate
> Weekly               -- RC N+1 until it is ready
> The updated TODO list follows. Per the release plan a strong case will
> now have to be made as to why new items should be added to the list,
> especially as a blocker, rather than deferred to 4.3.
> Things look pretty good on the hypervisor side of things.
> The tools list is quite long but most stuff is known to be in progress
> and in many cases preliminary patches are available. I think there is a
> name next to everything.
> I have gathered various items under a top-level "xl compatibility with xm"
> heading under both blocker and nice-to-have. I expect this is the area
> where most bugs will be reported once we hit -rc and users start testing
> this stuff in anger.
> Ian.
> hypervisor, blockers:
>       * NONE?
> tools, blockers:
>       * libxl stable API -- we would like 4.2 to define a stable API
>         which downstreams can start to rely on not changing. Aspects
>         of this are:
>               * Safe fork vs. fd handling hooks. Involves API changes
>                 (Ian J, patches posted)
>               * locking/serialization, e.g., for domain creation. As
>                 of now, nothing is provided for this purpose, and
>                 downstream toolstacks have to put their own mechanisms
>                 in place. E.g., xl uses an fcntl() lock around the
>                 whole process of domain creation. It should, OTOH, be
>                 libxl's job to offer a proper set of hooks -- placed
>                 at the proper spots during the domain creation
>                 process -- for downstreams to fill with the proper
>                 callbacks. (Dario Faggioli)
>               * agree & document compatibility guarantees + associated
>                 technical measures (Ian C)
>       * xl compatibility with xm:
>               * feature parity wrt driver domain support (George Dunlap)
>               * xl support for "rtc_timeoffset" and "localtime" (Lin
>                 Ming, Patches posted)
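[For anyone testing the xm compatibility side: under xm these knobs
appear in the guest config roughly as below, and the xl patches are
expected to accept the same syntax. The values shown are illustrative.]

```
# keep the emulated RTC in guest-local time rather than UTC
localtime = 1

# additional guest RTC offset from the host clock, in seconds
rtc_timeoffset = 3600
```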
>       * More formally deprecate xm/xend. Manpage patches are already
>         in tree. Needs a release note and communication around -rc1
>         to remind people to test xl.
>       * Domain 0 block attach & general hotplug when using qdisk backend
>         (need to start qemu as necessary etc) (Stefano S)
>       * file:// backend performance. qemu-xen-traditional's qdisk is
>         quite slow & blktap2 is not available in upstream kernels.
>         Need to consider our options:
>               * qemu-xen's qdisk is thought to perform well but
>                 qemu-xen is not yet the default. Complexity arises
>                 from splitting qemu-for-qdisk out from qemu-for-dm
>                 and running N qemu processes.
>               * potentially fully userspace blktap could be ready for
>                 4.2
>               * use /dev/loop+blkback. This requires loop driver AIO and
>                 O_DIRECT patches which are not (AFAIK) yet upstream.
>               * Leverage XCP's blktap2 DKMS work.
>               * Other ideas?
>       * Improved Hotplug script support (Roger Pau Monné, patches
>         posted)
>       * Block script support -- follows on from hotplug script (Roger
>         Pau Monné)
> hypervisor, nice to have:
>       * solid implementation of sharing/paging/mem-events (using work
>         queues) (Tim Deegan, Olaf Hering et al -- patches posted)
>               * "The last patch to use a waitqueue in
>                 __get_gfn_type_access() from Tim works.  However, there
>                 are a few users who call __get_gfn_type_access with the
>                 domain_lock held. This part needs to be addressed in
>                 some way."
>       * Sharing support for AMD (Tim, Andres).
>       * PoD performance improvements (George Dunlap)
> tools, nice to have:
>       * Configure/control paging via xl/libxl (Olaf Hering, lots of
>         discussion around interface, general consensus reached on what
>         it should look like)
>       * Upstream qemu feature patches:
>               * Upstream qemu PCI passthrough support (Anthony Perard,
>                 patches sent)
>               * Upstream qemu save restore (Anthony Perard, Stefano
>                 Stabellini, patches sent, waiting for upstream ack)
>       * Nested-virtualisation. Currently "experimental". Likely to
>         release that way.
>               * Nested SVM. Tested in a variety of configurations but
>                 still some issues with the most important use case (w7
>                 XP mode) [0]  (Christoph Egger)
>               * Nested VMX. Needs nested EPT to be genuinely useful.
>                 Need more data on testedness etc (Intel)
>       * Initial xl support for Remus (memory checkpoint, blackholing)
>         (Shriram, patches posted, blocked behind qemu save restore
>         patches)
>       * xl compatibility with xm:
>               * xl support for autospawning vncviewer (vncviewer=1 or
>                 otherwise) (Goncalo Gomes)
>               * support for vif "rate" parameter (Mathieu Gagné)
> [0] http://lists.xen.org/archives/html/xen-devel/2012-03/msg00883.html
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
