
Re: [Xen-devel] Xen 4.4 development update



George,
I guess we need to start drafting a press release and securing Advisory Board approval for the funds. Assuming there are no major delays, targeting the week before FOSDEM (the week of Jan 23) would be ideal for a release.
Lars

On 26/11/2013 12:14, George Dunlap wrote:
This information will be mirrored on the Xen 4.4 Roadmap wiki page:
  http://wiki.xen.org/wiki/Xen_Roadmap/4.4

We're one week into the "code freezing point", which means that every
patch that introduces new functionality needs a freeze exception.

Remember our goal for the release:
  1. A bug-free release
  2. An awesome release
  3. An on-time release

Accepting a new feature may make Xen more awesome; but it also
carries the risk of introducing more bugs.  A bug may be found
before the release (threatening #3), or it may not be found
until after the release (threatening #1).

So when posting a patch requesting a freeze exception, please
consider the following questions:

  1. What is the benefit of this feature?  Why should we accept this
  now instead of waiting for 4.5?

  2. What is the risk that this may introduce bugs that may slip the
  release, or bugs that may not be discovered and end up in the
  release?

The more skeptical you are in your evaluation, the more generous I can
afford to be.

We will become progressively more conservative until the first RC,
which is scheduled for 2 weeks' time (6 December).  After that, we
will only accept bug fixes.

Bug fixes can be checked in without a freeze exception throughout the
code freeze, unless the maintainer thinks they are particularly high
risk.  In later RCs, we may even begin rejecting bug fixes if the
broken functionality is small and the risk to other functionality is
high.

Features which are currently marked "experimental", or which do not
work at all at the moment, cannot really be broken; so changes to code
only used by those features should be able to get a freeze exception
easily.  (Tianocore is something which would probably fall under
this.)

Features which change or add new interfaces which will need to be
supported in a backwards-compatible way (for instance, vNUMA) will
need freeze exceptions to make sure that the interface itself has
enough time to be considered stable.

These are guidelines and principles to give you an idea where we're
coming from; if you think there's a good reason why making an
exception for you will help us achieve goals 1-3 above better than not
doing so, feel free to make your case.

= Timeline =

Here is our current timeline based on a 6-month release:

* Feature freeze: 18 October 2013
* Code freezing point: 18 November 2013 <== WE ARE HERE
* First RC: 6 December 2013
* Release: 21 January 2014

Last updated: 25 November 2013

== Completed ==

* Event channel scalability (FIFO event channels)

* Non-udev scripts for driver domains (non-Linux driver domains)

* Multi-vector PCI MSI (Hypervisor side)

* Improved Spice support on libxl
  - Added Spice vdagent support
  - Added Spice clipboard sharing support

* PVH domU (experimental only)

* Guest EFI booting (tianocore)

* kexec

* Testing: Xen on ARM

* Update to SeaBIOS 1.7.3.1

* pvgrub2 checked into grub upstream

* SWIOTLB (in Linux 3.13)

* Disk: indirect descriptors (in 3.11)

== Resolved since last update ==

* credit scheduler doesn't update other fields when tslice updated from sysctl

== Open ==

* qemu-upstream not freeing pirq
  > http://www.gossamer-threads.com/lists/xen/devel/281498
  status: patches posted; latest patches need testing

* Race in PV shutdown between tool detection and shutdown watch
  > http://www.gossamer-threads.com/lists/xen/devel/282467
  > Nothing to do with ACPI
  status: Patches posted

* Supposed regression from a3513737 ("x86: allow guest to set/clear
  > MSI-X mask bit (try 2)"), as per
  > http://lists.xenproject.org/archives/html/xen-devel/2013-09/msg01589.html.

* qemu-traditional mis-parses host bus 8 as 0
  > http://bugs.xenproject.org/xen/bug/15

* xen_platform_pci=0 doesn't work with qemu-xen
  > http://bugs.xenproject.org/xen/bug/20
  status: Patches posted

* xl does not support specifying virtual function for passthrough device
  > http://bugs.xenproject.org/xen/bug/22

* xl does not handle migrate interruption gracefully
   > If you start a localhost migrate, and press "Ctrl-C" in the middle,
   > you get two hung domains

* libxl / xl does not handle failure of remote qemu gracefully
   > Easiest way to reproduce:
   >  - set "vncunused=0" and do a local migrate
   >  - The "remote" qemu will fail because the vnc port is in use
   > The failure isn't the problem, but everything being stuck afterwards is

* HPET interrupt stack overflow (when using hpet_broadcast mode and MSI
capable HPETs)
   owner: andyh@citrix
   status: patches posted, undergoing review iteration.

* xl needs to disallow PoD with PCI passthrough
   > see
   > http://xen.1045712.n5.nabble.com/PATCH-VT-d-Dis-allow-PCI-device-assignment-if-PoD-is-enabled-td2547788.html
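For context, PoD (Populate-on-Demand) is in effect whenever a guest config sets memory below maxmem; the offending combination looks like the following sketch of an xl config (the BDF address is a placeholder, not from the thread above):

```
# HVM guest booting with PoD: memory < maxmem
builder = "hvm"
memory  = 1024
maxmem  = 2048
# PCI passthrough of a host device -- this is the combination
# that xl should reject when PoD is enabled
pci = [ '0000:03:00.0' ]
```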

* PCI hole resize support hvmloader/qemu-traditional/qemu-upstream
with PCI/GPU passthrough
   > http://lists.xen.org/archives/html/xen-devel/2013-05/msg02813.html
   > Where Stefano writes:
   > 2) for Xen 4.4 rework the two patches above and improve
   > i440fx_update_pci_mem_hole: resizing the pci_hole subregion is not
   > enough, it also needs to be able to resize the system memory region
   > (xen.ram) to make room for the bigger pci_hole

* qemu memory leak?
   > http://lists.xen.org/archives/html/xen-users/2013-03/msg00276.html

== Backlog ==

=== Testing coverage ===

* new libxl w/ previous versions of xl
  @IanJ

* Host S3 suspend
  @bguthro, @dariof

* Default [example] XSM policy
  @Stefano to ask Daniel D

* Storage driver domains
  @roger

* HVM pci passthrough
  @anthony

* PV pci passthrough
  @konrad (or @george if he gets to it first)

* Network driver domains
  @George

* Nested virt?
  @intel (chased by George)

* Fix SRIOV test (chase intel)
  @ianj

* Fix bisector to e-mail blame-worthy parties
  @ianj

* Fix xl shutdown
   @ianj

* stub domains
   @anthony

* performance benchmarks
   @dario

=== Meta-items (composed of other items) ===

* Meta: PVIO NUMA improvements
  - soft affinity for vcpus (4.4 possible)
  - PV guest NUMA interface (4.4 possible)
  - Sensible dom0 NUMA layout
  - Toolstack pinning backend thread / virq to appropriate dom0 vcpu
  - NUMA-aware ballooning

* xend still in tree (x)
  - xl list -l on a dom0-only system
  - xl list -l doesn't contain tty console port
  - xl Alternate transport support for migration*
  - xl PVSCSI support
  - xl PVUSB support

=== Big ticket items ===

* PVH mode (w/ Linux)
   owner: mukesh@oracle, george@citrix
   status (Linux): Acked, waiting for ABI to be nailed down
   status (Xen): Initial version checked in, still nailing down interface

* Update to qemu 1.6
   owner: Anthonyper
   status: In staging, still working out a bug in the VMX code

* ARM Live Migration Support
   owner: Jaeyong Yoo <jaeyong.yoo@xxxxxxxxxxx>
   status: v5 posted, looking good for code freeze

* ARM64 guest
   owner: IanC
   status: v3 posted, v4 in progress; looking good.

* soft affinity for vcpus (was NUMA affinity for vcpus)
     owner: Dario
     status: v2 posted

* PV guest NUMA interface
     owner: Elena
     status: v3 posted

* libvirt/libxl integration (external)
  - owner: jfehlig@suse, dario@citrix
  - patches posted (should be released before 4.4)
   - migration
   - PCI pass-through
  - In progress
   - integration w/ libvirt's lock manager
   - improved concurrency

* libxl: Spice usbredirection support for upstream qemu
  owner: fabio@M2R
  status: I'll post new patch version shortly

* libxl: usb2 and usb3 controller support for upstream qemu
  owner: fabio@M2R
  status: patch v5 posted, tested and working, awaiting reviews

* libxl network buffering support for Remus
    @shriram
    status: patches posted
    prognosis: fair

* xencrashd
    owner: don@verizon
    status: v2 posted
   > http://lists.xen.org/archives/html/xen-devel/2013-11/msg02569.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

