[Xen-devel] Request a freeze exception for COLO in v4.6
Hello,

I would like to request a freeze exception for COLO.

1. The benefits

* COLO is a VM fault-tolerance solution. Remus currently provides this
  function, but it buffers all output packets, so the latency is
  unacceptable. COLO addresses that problem and can greatly improve VM
  availability.
* Third parties are interested in this feature; for example, Huawei has
  already shipped Xen/COLO in its products with offline patches.
* If this goes into Xen 4.6, it will greatly speed up both production use
  of the feature and further development, which would be a huge benefit
  to end users who want a VM fault-tolerance solution providing
  "non-stop services".

2. The risks

I would say there is no risk in including this feature in Xen 4.6,
because it is only a bolt-on: it sits in its own corner and does no harm
to the existing code. On the contrary, it improves the existing code
with quite a lot of refactoring.

3. Further maintenance

Intel and Fujitsu will maintain the code in the future. Merging the
feature upstream does not mean the end of our development; rather, it is
the very start. We will continue to improve the feature, fix bugs, and
so on.

4. Current status

The libxl migration v2 series, which we depend on, should be merged
soon, maybe today. There are two series on the list. One is the
preparation patchset, mostly refactoring and bug fixes; 6 of its 25
patches have been acked, and I would say most of the patchset is ready
to be merged. The other is the main COLO series, which still needs
careful review, but given that the feature is only a bolt-on and we will
continue to improve the code, it should be safe to merge in the near
future.

I'm confident that we can get this ready in the next 1-2 weeks.

--
Thanks,
Yang.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel