
[Xen-devel] Xen 4.4 development update: Feature freeze has started



This information will be mirrored on the Xen 4.4 Roadmap wiki page:
 http://wiki.xen.org/wiki/Xen_Roadmap/4.4

It's been over a month since the last mail, so I expect there are a
number of updates -- please take a look through.

I've tried to put the "big-ticket" items I think close to being done
before the code freeze near the top.  I've moved "clean-ups" and
big-ticket items which have missed the feature freeze lower, to be a
bit out of the way.  This is not meant to be a definitive judgement
-- if you think your feature is in the wrong place (for example,
because you in fact posted a series before the feature freeze on 18
October), then please let me know.

= Timeline =

Here is our current timeline based on a 6-month release:

* Feature freeze: 18 October 2013 <== WE ARE HERE
* Code freezing point: 18 November 2013 (Note date change)
* First RC: 6 December 2013
* Release: 21 January 2014

Last updated: 11 November 2013

== Completed  ==

* Multi-vector PCI MSI (Hypervisor side)

* Improved Spice support on libxl
 - Added Spice vdagent support
 - Added Spice clipboard sharing support

* Event channel scalability (FIFO event channels)

* Update to SeaBIOS 1.7.3.1

* pvgrub2 checked into grub upstream

* Guest EFI booting (tianocore)

== Open ==

* qemu-upstream not freeing pirq
 > http://www.gossamer-threads.com/lists/xen/devel/281498
 status: patches posted, tested-and-acked

* Race in PV shutdown between tool detection and shutdown watch
 > http://www.gossamer-threads.com/lists/xen/devel/282467
 > Nothing to do with ACPI
 status: Patches posted

* credit scheduler doesn't update other fields when tslice updated from sysctl
 > Reported by Luwei Cheng <lwcheng@xxxxxxxxx>
 > http://bugs.xenproject.org/xen/bug/16

* Supposed regression from a3513737 ("x86: allow guest to set/clear
 > MSI-X mask bit (try 2)"), as per
 > http://lists.xenproject.org/archives/html/xen-devel/2013-09/msg01589.html.

* qemu-traditional mis-parses host bus 8 as 0
 > http://bugs.xenproject.org/xen/bug/15

* xen_platform_pci=0 doesn't work with qemu-xen
 > http://bugs.xenproject.org/xen/bug/20

* xl does not support specifying virtual function for passthrough device
 > http://bugs.xenproject.org/xen/bug/22

* xl does not handle migrate interruption gracefully
  > If you start a localhost migrate, and press "Ctrl-C" in the middle,
  > you get two hung domains

* libxl / xl does not handle failure of remote qemu gracefully
  > Easiest way to reproduce:
  >  - set "vncunused=0" and do a local migrate
  >  - The "remote" qemu will fail because the vnc port is in use
  > The failure isn't the problem, but everything being stuck afterwards is

== Backlog ==

=== Testing coverage ===

* new libxl w/ previous versions of xl
 @IanJ

* Host S3 suspend
 @bguthro, @dariof

* Default [example] XSM policy
 @Stefano to ask Daniel D

* Xen on ARM
 # hardware
  @ianc
   emulator: @stefano to think about it

* Storage driver domains
 @roger

* HVM pci passthrough
 @anthony

* PV pci passthrough
 @konrad (or @george if he gets to it first)

* Network driver domains
 @George

* Nested virt?
 @intel (chased by George)

* Fix SRIOV test (chase intel)
 @ianj

* Fix bisector to e-mail blame-worthy parties
 @ianj

* Fix xl shutdown
  @ianj

* stub domains
  @anthony

* performance benchmarks
  @dario

=== Big ticket items ===

* Update to qemu 1.6
  owner: Anthony Perard
  status: In staging, still working out a bug in the VMX code

* Rationalized backend scripts
  owner: roger@citrix
  status: patches posted

* Scripts for driver domains (depends on backend scripts)
  owner: roger@citrix
  status: Patches posted (?)
  prognosis: good

* PVH mode (w/ Linux)
  owner: mukesh@oracle, george@citrix
  status (Linux): Acked, waiting for ABI to be nailed down
  status (Xen): v15 posted

* ARM stuff: ??

* Meta: PVIO NUMA improvements
 - NUMA affinity for vcpus
    owner: Dario
    status: Patches posted
 - PV guest NUMA interface
    owner: Elena
    status: v2 posted
 - Sensible dom0 NUMA layout
 - Toolstack pinning backend thread / virq to appropriate d0 vcpu
 - NUMA-aware ballooning
    owner: Li Yechen
    status: in progress

* libvirt/libxl integration (external)
 - owner: jfehlig@suse, dario@citrix
 - patches posted (should be released before 4.4)
  - migration
  - PCI pass-through
 - In progress
  - integration w/ libvirt's lock manager
  - improved concurrency

* xend still in tree (x)
 - xl list -l on a dom0-only system
 - xl list -l doesn't contain tty console port
 - xl Alternate transport support for migration
 - xl PVSCSI support
 - xl PVUSB support

* NUMA Memory migration
  owner: dario@citrix
  status: In progress

* xl USB pass-through for HVM guests using Qemu USB emulation
  prognosis: Good if extended
  owner: George
  status: v6 patch series posted

* libxl: Spice usbredirection support for upstream qemu
 owner: fabio@M2R
 status: new patch version to be posted shortly

* libxl: usb2 and usb3 controller support for upstream qemu
 owner: fabio@M2R
 status: patch v5 posted, tested and working, awaiting reviews

== Missed the feature freeze ==

* xl migrate transport improvements
 owner: None
 > See discussion here: http://bugs.xenproject.org/xen/bug/19
 - Option to connect over a plain TCP socket rather than ssh
 - xl migrate-receive suitable for running in inetd
 - option for above to redirect log output somewhere useful
 - Documentation for setting up alternate transports
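 As a sketch of the inetd item above (the port number and the exact
 invocation below are illustrative assumptions, not an agreed design):

```shell
# Hypothetical /etc/inetd.conf entry running the migration receiver
# under inetd.  The port (8002) and the xl invocation are assumptions
# for illustration only.
8002 stream tcp nowait root /usr/sbin/xl xl migrate-receive
```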

* HVM guest NUMA
  owner: Matt Wilson@amazon
  status: in progress (?)

* qemu-upstream stubdom, Linux
   owner: anthony@citrix
   status: in progress
   prognosis: ?
   qemu-upstream needs a more fully-featured libc than exists in
   mini-os.  Either work on a minimalist Linux-based stubdom with
   glibc, or port one of the BSD libcs to mini-os.

* qemu-upstream stubdom, BSD libc
  prognosis: ?
  owner: ianj@citrix

* Network performance improvements
  owner: wei@citrix

* Disk performance improvements

* Xen EFI feature: Xen can boot from grub.efi
 owner: Daniel Kiper
 status: in progress

* Default to credit2
 status: Probably not for 4.4
 - cpu pinning
 - NUMA affinity
 - cpu "reservation"

* xenperf
  prognosis: ?
  Owner: Boris Ostrovsky
  status: v2 patches posted

* Nested virtualization on Intel

* Nested virtualization on AMD

* Multi-vector PCI MSI (upstream Linux)
  owner: konrad@oracle

* xl: passing more defaults in configuration in xl.conf
  owner: ?
  There are a number of options for which it might be useful to pass a
  default in xl.conf.  For example, if we could have a default
  "backend" parameter for vifs, then it would be easy to switch back
  and forth between a backend in a driver domain and a backend in dom0.
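  For illustration, such a default might look like this in xl.conf
  (the key name below is hypothetical; no such option exists today):

```shell
# Hypothetical xl.conf entries; "vif.default.backend" is an invented
# key name used only to illustrate the idea.
vif.default.backend = "netback-domain"    # backend in a driver domain
#vif.default.backend = "Domain-0"         # or switch back to dom0
```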

* xl PVUSB pass-through for PV guests
* xl PVUSB pass-through for HVM guests
  owner: George
  status: ?
  xm/xend supports PVUSB pass-through to guests with PVUSB drivers
(both PV and HVM guests).
  - port the xm/xend functionality to xl.
  - this PVUSB feature does not require support or emulation from Qemu.
  - upstream the Linux frontend/backend drivers. Current
work-in-progress versions are in Konrad's git tree.
  - James Harper's GPLPV drivers for Windows include PVUSB frontend drivers.

* Xen EFI feature: pvops dom0 able to make use of EFI run-time
services (external)
 owner: Daniel Kiper
 status: Just begun

* New kexec implementation
  owner: dvrabel@citrix
  status: v7 posted, only minor updates expected

* libxl support for Remus
   @shriram
   status: memory checkpointing - done.
           network buffering - patches posted.
           disk replication support using DRBD - TODO

=== Clean-ups ===

* mac address changes on reboot if not specified in config file
  > Needs a robust way to "add" to the config

* ACPI WAET table vs RTC emulation mode
  owner: jan@suse
  prognosis: ?
 > An overly simplified fix was posted a while ago
 > (http://lists.xenproject.org/archives/html/xen-devel/2013-07/msg00122.html),
 > but Tim's objection is rather valid. I can't, however, estimate
 > if/when I would find time to learn what tools side changes are
 > necessary to accommodate a new HVM param, and hence this is
 > currently stalled.  The current solution (as of 3fa7fb8b ["x86/HVM:
 > RTC code must be in line with WAET flags passed by hvmloader"])
 > isn't desirable to be kept for 4.4.

* Polish up xenbugtool
  owner: wei.liu2@xxxxxxxxxx

* Sort out better memory / ballooning / dom0 autoballooning thing
 > Don't forget NUMA angle
 - Inaccurate / incomplete info from HV

* Implement Xen hypervisor dmesg log entry timestamps
 > https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3924048-implement-xen-hypervisor-dmesg-log-entry-timestamp
 > Request seems to be for a shorter stamp (seconds-only, rather than full date)

* Make network driver domains easier to set up / more useful
 - Make it easy to make a device assignable (in discussion)
 - Automatically start/shutdown (xendomains?)
 - Pause booting of other domains until network driver domain is up (necessary?)

* libxl: More fine-grained control over when to pass through a device
 > Some IOMMUs are secure; some are merely functional; some are not present.
 > Allow the administrator to set the default

* qxl
  > http://bugs.xenproject.org/xen/bug/11
  - Uninitialized struct element in qemu
  - Revert 5479961 to re-enable qxl in xl,libxl
  - Option in Xen top-level to enable qxl support in qemu tree
  - Fix sse2 MMIO issue
   - make word size arbitrary

* libxl config file

* libxl: Don't use RAW format for "URL"-based qdisks (e.g., rbd:rbd/foo.img)
  - Figure out whether to use a generic URL or have a specific type for each one
  - Check existence of disk file for all RAW

* acpi-related xenstore entries not propagated on migrate
 > http://www.gossamer-threads.com/lists/xen/devel/282466
 > Only used by hvmloader; only a clean-up, not a bug.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

