Re: [Xen-devel] Request a freeze exception for COLO in v4.6
> If I can take one example, 11/25 "tools/libxc: support to resume uncooperative
> HVM guests". Based on my current understanding this is even a bugfix. Sadly it
> is not quite ready (or at least, wasn't last night). But with a few more days
> we can probably get much of this cleanup and preparatory work in.

I solicited Wei
to grant a 2-3 week exception so that the patch could be brought into much better
shape; however, requests that do not come from Citrix are simply ignored, without
any retrospection to see where the problem lies. I have already heard many
"beliefs" and "commitments" from people wearing the maintainer's or manager's hat,
saying during the Xen Hackathon that they should be able to complete the
dependency patch a couple of weeks before the deadline, and agreeing to raise the
priority of reviewing the patches. But were any of those commitments actually
kept? Be aware that this pushing effort has been going on for 2 years already; it
is not just this week, this month, or this quarter, and not only this year.
Reducing this to one or two technical issues simply does not work. People wearing
the maintainer's hat can arbitrarily re-version existing APIs at their whim,
blocking all the follow-up patches for entire release cycles, which is one entire
year. In the XEN/COLO case, it is 2 years for similar reasons.

> Assuming
> nothing goes horribly wrong, I am very optimistic about COLO making it upstream
> soon, just not in 4.6.

I heard similar words one year ago too; everything is repeating without any
retrospection: everything Citrix likes can happen at any time, anywhere ...
Everything else has to wait for the Citrix maintainers to have spare bandwidth.

From a former Xen community developer with 11+ years of experience, once in the
Xen spotlight, and an active Xen participant.

Eddie Dong

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel