Re: [Xen-devel] [Notes for xen summit 2018 design session] Process changes: is the 6 monthly release Cadence too short, Security Process, ...
On 02/07/18 20:03, Lars Kurth wrote:
> Non-html version: apologies
> Lars
>
> From: Lars Kurth <lars.kurth@xxxxxxxxxx>
> Date: Monday, 2 July 2018 at 19:00
> To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Cc: "committers@xxxxxxxxxxxxxx" <committers@xxxxxxxxxxxxxx>, Rich Persaud
>     <persaur@xxxxxxxxx>, Doug Goldstein <cardoe@xxxxxxxxxx>,
>     "advisory-board@xxxxxxxxxxxxxxxxxxxx" <advisory-board@xxxxxxxxxxxxxxxxxxxx>
> Subject: [Notes for xen summit 2018 design session] Process changes: is the 6
>     monthly release Cadence too short, Security Process, ...
>
> # Topics to discuss
>
> ### Release Cadence
> Two years ago we moved to a 6-monthly release cadence. The idea was to help
> companies get features into Xen in a more predictable way. This appears not
> to have worked. At the same time, the number of releases is creating problems
> for the security team and some downstreams. I wanted to collect views to
> kick-start an e-mail discussion.
>
> ### Security Process
> See https://lists.xenproject.org/archives/html/xen-devel/2018-05/msg01127.html
>
> ### Other changes that may be worth highlighting ...
>
> # Discussed topics
>
> ### Release Cadence: 6 months vs 9 months
> We compared the pros and cons of both models:
>
> | 6 months                                       | 9 months                                     |
> | ---------------------------------------------- | -------------------------------------------- |
> | Large features have to be broken up into parts | Less of an issue with 9 months               |
> | 4 months development, 2-3 months hardening     | 6 months development, 3-4 months hardening   |
> | Fixed release cycle, fairly predictable        | In the past we slipped these more than today |
> | Security overhead                              | Less of a security overhead                  |
> | Positive benefits did not materialize          |                                              |
> | Distros use a wider range of Xen versions      | Distros 1.5 years out of date                |
>
> In terms of positive benefits: we have mainly been releasing on time, but
> have less predictability on what makes it into a release. Also, contributors
> frequently miss their targeted releases.
>
> We then had a discussion around why the positive benefits didn't materialize:
> * Andrew and a few others believe that the model isn't broken, but that the
>   issue is with how we develop. In other words, moving to a 9-month model
>   will *not* fix the underlying issues, but merely provide an incentive not
>   to fix them.

Keeping the 6-month schedule still leads to a substantial burden for managing
the stable releases.

> * Issues highlighted were:
>   * The 2-3 month stabilizing period is too long.

I absolutely agree.

>   * Too much start/stop of development - we should branch earlier (we mainly
>     do this on the last RC now). The serial period of development has
>     essentially become too short. *Everyone* in the room agreed that fixing
>     this is the *most important issue*.

While I'm really in favor of branching early, I fear that this will raise the
burden even further on the few developers who need to backport fixes to the
just-branched-off release candidate. One approach to solving this would be to
accept a development patch only if it is accompanied by the corresponding
backport for the release branch.

>   * Testing (aka OSSTEST) and CI automation is too fragile and we are too
>     slow. Do we need another way of smoke testing which acts as a filter in
>     front of OSSTEST and reduces the number of issues that block everyone -
>     see also the GitLab discussion
>     (https://lists.xenproject.org/archives/html/xen-devel/2018-07/threads.html#00126).
>     Also, we could do testing using QEMU for most cases which are not
>     hardware specific, for faster turnaround, as ViryaOS is planning on
>     doing. This is slow, but can be offloaded into the cloud and could be
>     triggered before a patch hits staging. Doug indicated he could help get
>     the needed capacity for free.

I really like automatic testing, but OSSTEST has its problems. A major source
of the pain seems to be the hardware: about half of the cases where I looked
into the test reports to find the reason for a failing test flight were
related to hardware failures. I am not sure how to solve that. (A rough sketch
of the kind of QEMU boot smoke test suggested above is attached at the end of
this mail.)

Another potential problem showed up last week: OSSTEST uses the Debian servers
for doing the basic installation. A change there (e.g. a new point release)
will block tests. I'd prefer to have a local cache of the last known good set
of *.deb files, used especially for the branched Xen versions. This would rule
out remote problems for releases.
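To make that a bit more concrete, here is a minimal sketch of how such a
pinned, known-good cache could be populated. This is only an illustration of
the idea, not anything OSSTEST does today; the cache directory, URL and
checksum below are made-up placeholders.

```python
#!/usr/bin/env python3
"""Populate a local cache with a pinned, known-good set of .deb files, so that
installs for release branches do not depend on the state of the remote Debian
mirrors. All paths, URLs and hashes are placeholders for illustration only."""

import hashlib
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path("/var/cache/xen-test-debs")   # hypothetical location

# "Last known good" set, frozen when a release branch is created.
# The single entry here is a placeholder, not a real pinned package.
PINNED_DEBS = [
    ("https://deb.debian.org/debian/pool/main/e/example/example_1.0_amd64.deb",
     "0000000000000000000000000000000000000000000000000000000000000000"),
]

def fetch(url: str, sha256: str) -> None:
    """Download one .deb into the cache, verifying it against the pinned hash."""
    dest = CACHE_DIR / url.rsplit("/", 1)[-1]
    if dest.exists():
        return                                   # already cached
    data = urllib.request.urlopen(url).read()
    if hashlib.sha256(data).hexdigest() != sha256:
        raise RuntimeError(f"checksum mismatch for {url}")
    dest.write_bytes(data)

if __name__ == "__main__":
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    for url, digest in PINNED_DEBS:
        fetch(url, digest)
```

The installer for a release-branch test would then point at CACHE_DIR (or a
local mirror built from it) instead of the live Debian servers.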
>   * Testing for end-users is too hard: we should not require them to build
>     Xen. Again, with GitLab as per the previous discussion, we could deliver
>     unsigned RPMs and debug builds for testing. If we were able to do this,
>     it should help.
>
> Then there was a discussion on how we could make more downstreams coalesce
> around the same Xen releases. This discussion was driven by Doug G, but we
> didn't come to a conclusion. If we were able to do this, we could introduce
> a model similar to Debian's, where downstreams could pull together and fund
> a longer support period for a Xen version which the majority of downstreams
> use, without impacting the stable release maintainer.
>
> Doug mentioned to me afterwards that there is a Xen Slack channel for
> distros which is very active, where we could test this idea.
>
> Another approach would be to follow a model like the one the Wayland and X11
> communities use, which have different support periods for odd and even
> releases (the odd-unstable/even-stable style of versioning). This may be
> worth investigating further.

I believe just leaving out every second release (i.e. doing one release per
year) is a rough equivalent of that, with less overhead.

> Lars noted that it may make sense to arrange a community call to make more
> progress on this discussion.

Yes, please. I'd like to know how to do the next release. :-)


Juergen
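As mentioned further up, here is a minimal sketch of the kind of QEMU boot
smoke test that could act as a fast filter in front of OSSTEST. This is only
an illustration under assumptions: the disk image name, serial-console marker
and timeout are invented placeholders and do not correspond to anything
OSSTEST or ViryaOS actually use.

```python
#!/usr/bin/env python3
"""Boot a (hypothetical) pre-built Xen+dom0 disk image under QEMU and wait for
a marker on the serial console. Exit 0 on success so a CI job can gate on it.
Image name, marker and timeout are placeholders for illustration only."""

import subprocess
import sys
import time

IMAGE = "xen-smoke.qcow2"    # hypothetical pre-built test image
BOOT_MARKER = "login:"       # string that indicates dom0 came up
TIMEOUT = 600                # seconds

def smoke_test() -> bool:
    qemu = subprocess.Popen(
        ["qemu-system-x86_64",
         "-machine", "q35,accel=kvm:tcg",   # fall back to TCG on cloud runners
         "-m", "2048",
         "-drive", f"file={IMAGE},format=qcow2,if=virtio",
         "-serial", "stdio",                # serial console on stdout
         "-display", "none",
         "-no-reboot"],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    deadline = time.time() + TIMEOUT
    try:
        # Note: this only checks the deadline when a new console line arrives;
        # a completely silent hang would need an external timeout as well.
        for line in qemu.stdout:
            sys.stdout.write(line)
            if BOOT_MARKER in line:
                return True
            if time.time() > deadline:
                return False
        return False
    finally:
        qemu.kill()
        qemu.wait()

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```

A CI job (e.g. in GitLab, as discussed above) could run this on every push and
only let OSSTEST pick up the branch if the script exits successfully.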