Re: [Xen-devel] Migration v2 and related work for 4.6
On Wed, Apr 01, 2015 at 12:03:51PM +0100, Andrew Cooper wrote:
> Hello,
>
> I believe I have CC'd all interested parties. If I have missed anyone,
> please shout.
>
> There are several large series:
>  * Cancellable async operations
>  * COLO
>
> and a wish to start experimenting with migration on ARM, all of which
> interact with my large series for migration v2 support in libxc and libxl.
>
> I am working on the libxl side of things, but realise that I am being a
> blockage to the other areas waiting on migration v2.
>
> The current state of the libxc bit is functionally complete. It is
> shipping in two versions of XenServer, has not changed substantially in
> 9 months, and has not changed in a backwards-incompatible way since v1.
>
> I propose that the libxc series be accepted independently of the libxl
> series. The libxc series is still implemented as an API-compatible
> alternative to legacy migration, and is not enabled by default, meaning
> no change for current users.
>
> However, it will unblock development of ARM migration and COLO, and will
> allow people in the wider community to experiment with migration v2. An
> existing `xl migrate` can be switched from legacy to migration v2 simply
> by setting an environment variable (for PV guests; HVM requires the
> libxl changes, as the handling of the Qemu save record is changing).
>
> I feel that this is in the best interest of the community, to get some
> testing available earlier in the development cycle, rather than attempting
> to splice 3 large series in a related area together towards the code freeze.
>
> Thoughts? (especially from a release management point of view)
>

I've looked at your libxc series. It looks complete. I agree it would be
good to get it in as early as possible.

Wei.

> ~Andrew
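
For readers wanting to experiment, a minimal sketch of the switch-over
described above, for a PV guest. The environment variable name
XG_MIGRATION_V2 and the guest/host names are assumptions for illustration,
not confirmed by this thread; check the posted libxc series for the exact
variable it keys off.

    # Sketch only: select the migration v2 code path for a PV guest by
    # setting the selector variable for this invocation of xl.
    # XG_MIGRATION_V2 is an assumed name; consult the actual patches.
    XG_MIGRATION_V2=1 xl migrate my-pv-guest target-host

    # Without the variable set, the same command uses the legacy path:
    xl migrate my-pv-guest target-host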