
Re: [Xen-devel] Xen 4.2 TODO List Update

2012/1/17 Ian Campbell <Ian.Campbell@xxxxxxxxxx>:
> An updated version of the list is below. Some items are marked "nice to
> have" meaning that we don't think they should block the release. I'll
> try and post an update on a semi-regular basis as and when things are
> completed.
> I think I have incorporated all the updates made in the last thread as
> well as correctly reflected the status of each item. Please let me know
> if something is out of date e.g. if progress has been made which I
> haven't reflected below.
> hypervisor, blockers:
>       * round-up of the closing of the security hole in MSI-X
>         passthrough (uniformly - i.e. even for Dom0 - disallowing write
>         access to MSI-X table pages). (Jan Beulich, after upstream qemu
>         patch series also incorporates the respective recent qemu-xen
>         change)
>       * domctls / sysctls set up to modify scheduler parameters, like
>         the credit1 timeslice (and schedule rate, if that ever makes it
>         in). (George Dunlap)
>       * get the interface changes for sharing/paging/mem-events done and
>         dusted so that 4.2 is a stable API that we hold to. (Tim Deegan,
>         Andres Lagar-Cavilla et al)
>               * mem event ring management posted, seems close to going
>                 in.
>               * sharing patches posted
> tools, blockers:
>       * libxl stable API -- we would like 4.2 to define a stable API
>         which downstreams can start to rely on not changing. Aspects of
>         this are:
>               * event handling (Ian Jackson, posted several rounds,
>                 nearing completion?)
>               * drop libxl_device_model_info (move bits to build_info or
>                 elsewhere as appropriate) (Ian Campbell, first RFC sent)
>               * add libxl_defbool and generally try and arrange that
>                 memset(foo,0,...) requests the defaults (Ian Campbell,
>                 first RFC sent)
>               * The topologyinfo datastructure should be a list of
>                 tuples, not a tuple of lists. (nobody currently looking
>                 at this, not 100% sure this makes sense, could possibly
>                 defer and change after 4.2 in a compatible way)
>               * Block script support -- can be done post 4.2? Should be
>                 nice to have, not a blocker? Somewhere on IanJ's todo?

This is quite closely related to the xl hotplug work; once the hotplug
support is in, it should be trivial to add this.
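On the libxl_defbool item above: the idea, as I understand the RFC, is a tri-state boolean whose all-zeroes representation means "use the library default", so that memset(foo, 0, sizeof(*foo)) on a config structure requests defaults throughout. A minimal sketch of that pattern (the demo_* names are illustrative, not the actual libxl API):

```c
#include <string.h>

typedef struct {
    /* 0 = use default, negative = false, positive = true.
     * The zero state is deliberate: a memset-to-zero structure
     * leaves every defbool asking for the default. */
    int val;
} demo_defbool;

static void demo_defbool_set(demo_defbool *db, int b)
{
    db->val = b ? 1 : -1;
}

static int demo_defbool_is_default(const demo_defbool *db)
{
    return db->val == 0;
}

/* Resolve to a concrete value, substituting `def` if unset. */
static int demo_defbool_val(const demo_defbool *db, int def)
{
    if (demo_defbool_is_default(db))
        return def;
    return db->val > 0;
}
```

The point of encoding "default" as zero rather than as a separate flag is exactly the memset property mentioned in the TODO entry.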

>       * xl to use json for machine readable output instead of sexp by
>         default (Ian Campbell to revisit existing patch)
>       * xl feature parity with xend wrt driver domain support (George
>         Dunlap)
>       * Integrate qemu+seabios upstream into the build (Stefano to
>         repost). No change in default qemu for 4.2.
>       * More formally deprecate xm/xend. Manpage patches already in
>         tree. Needs a release note and communication around -rc1 to
>         remind people to test xl.
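On the topologyinfo item further up: the shapes being discussed are, roughly, today's "tuple of lists" (parallel arrays, entry i of each array describing cpu i) versus a "list of tuples" (one record per cpu). A hypothetical sketch of the two layouts and a conversion between them (these demo_* types are illustrative, not the real libxl declarations):

```c
#include <stdint.h>
#include <stdlib.h>

/* Current shape: a tuple of lists. Entry i of each parallel
 * array describes cpu i. */
typedef struct {
    uint32_t *coremap;
    uint32_t *socketmap;
    uint32_t *nodemap;
    int max_cpu_index;
} demo_topology_parallel;

/* Proposed shape: a list of tuples, one record per cpu. */
typedef struct {
    uint32_t core, socket, node;
} demo_cputopology;

typedef struct {
    demo_cputopology *cpus;
    int num_cpus;
} demo_topology_list;

/* Convert the parallel-array form to the list-of-tuples form;
 * returns NULL on allocation failure. */
static demo_topology_list *
demo_topology_convert(const demo_topology_parallel *in)
{
    int i, n = in->max_cpu_index + 1;
    demo_topology_list *out = malloc(sizeof(*out));

    if (!out)
        return NULL;
    out->cpus = malloc(n * sizeof(*out->cpus));
    if (!out->cpus) {
        free(out);
        return NULL;
    }
    out->num_cpus = n;
    for (i = 0; i < n; i++) {
        out->cpus[i].core   = in->coremap[i];
        out->cpus[i].socket = in->socketmap[i];
        out->cpus[i].node   = in->nodemap[i];
    }
    return out;
}
```

The list-of-tuples form keeps each cpu's fields together, which is why it can plausibly be added after 4.2 in a compatible way alongside the old layout.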
> hypervisor, nice to have:
>       * solid implementation of sharing/paging/mem-events (using work
>         queues) (Tim Deegan, Olaf Hering et al)
>       * A long standing issue is a fully synchronized p2m (locking
>         lookups) (Andres Lagar-Cavilla)
> tools, nice to have:
>       * Hotplug script stuff -- internal to libxl (I think, therefore I
>         didn't put this under stable API above) but still good to have
>         for 4.2? Roger Pau Monné was looking at this but it's looking
>         like a big can-of-worms. (discussion on-going)
>       * Upstream qemu feature patches:
>               * Upstream qemu PCI passthrough support (Anthony Perard)
>               * Upstream qemu save restore (Anthony Perard)
>       * Nested-virtualisation (currently should be marked experimental,
>         likely to release that way? Consider nested-svm separately from
>         nested-vmx. Nested-svm is in better shape)
