# Topics to discuss
### Release Cadence
Two years ago, we moved to a six-month release cadence. The idea was to help companies get
features into Xen in a more predictable way. This appears not to have worked. At the same time,
the number of releases is creating problems for the security team and some downstreams. I
wanted to collect views to kick-start an e-mail discussion.
### Security Process
See https://lists.xenproject.org/archives/html/xen-devel/2018-05/msg01127.html
### Other changes that may be worth highlighting ...
# Discussed topics
### Release Cadence: 6 months vs 9 months
We compared the pros and cons of both models:
| 6 months | 9 months |
| ------ | ------ |
| Large features have to be broken up into parts | Less of an issue |
| 4 months of development, 2-3 months of hardening | 6 months of development, 3-4 months of hardening |
| Fixed release cycle, fairly predictable | In the past, we slipped releases more than we do today |
| Higher security-support overhead | Lower security-support overhead |
| Expected positive benefits did not materialize | |
| Distros use a wider range of Xen versions | Distros can be 1.5 years out of date |
In terms of the expected benefits: we have mostly been releasing on time, but there is less
predictability about what makes it into a release, and contributors frequently miss their
targeted releases.
We then discussed why the expected benefits did not materialize:
* Andrew and a few others believe that the model isn't broken, but that the issue is with how we
develop. In other words, moving to a 9-month model will *not* fix the underlying issues, but
merely provide an incentive not to fix them.
* Issues highlighted were:
  * The 2-3 month stabilization period is too long.
  * Too much start/stop of development: we should branch earlier (currently we mainly do this at
the last RC). The serial period of development has essentially become too short. *Everyone* in
the room agreed that fixing this is the *most important issue*.
  * Testing (aka OSSTEST) and CI automation are too fragile, and we are too slow. Do we need
another form of smoke testing which acts as a filter in front of OSSTEST and reduces the number
of issues that block everyone? See also the GitLab discussion
(https://lists.xenproject.org/archives/html/xen-devel/2018-07/threads.html#00126).
We could also test under QEMU for the majority of cases that are not hardware-specific, for
faster turnaround, as ViryaOS is planning to do. Each run is slow, but the work can be offloaded
into the cloud and could be triggered before a patch hits staging (a minimal sketch of such a
smoke test follows this list). Doug indicated he could help get the needed capacity for free.
  * Testing is too hard for end-users: we should not require them to build Xen. Again, with
GitLab as per the previous discussion, we could deliver unsigned RPMs and debug builds for
testing. If we were able to do this, it should help.
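To make the QEMU idea above concrete, here is a minimal sketch of what such a pre-staging smoke
test could look like. It is purely illustrative: the artifact paths, the `smoke-test-ok` marker,
and the assumption that a trivial init script in the initrd prints that marker are hypothetical,
not existing Xen tooling.

```python
#!/usr/bin/env python3
"""Hypothetical pre-staging smoke test: boot Xen under QEMU and scan the
serial console for a success marker. All paths and markers below are
illustrative assumptions, not part of any existing Xen CI."""

import subprocess
import sys
import time

# Hypothetical build artifacts produced by an earlier CI stage.
XEN = "dist/install/boot/xen.gz"       # hypervisor (multiboot kernel)
DOM0_KERNEL = "artifacts/vmlinuz"      # dom0 kernel, passed as a module
DOM0_INITRD = "artifacts/initrd.img"   # initrd whose init prints the marker
MARKER = "smoke-test-ok"               # hypothetical marker printed by init
TIMEOUT = 300                          # seconds before we declare a hang

def smoke_test() -> bool:
    # QEMU can multiboot Xen directly via -kernel; -initrd then takes a
    # comma-separated list of modules (dom0 kernel plus its initrd).
    cmd = [
        "qemu-system-x86_64",
        "-m", "2048",
        "-nographic",                  # serial console on stdio
        "-accel", "tcg",               # plain emulation: runs in any cloud VM
        "-kernel", XEN,
        "-append", "console=com1",
        "-initrd", f"{DOM0_KERNEL} console=hvc0,{DOM0_INITRD}",
    ]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    deadline = time.monotonic() + TIMEOUT
    try:
        for line in proc.stdout:       # note: blocks between lines, so a
            sys.stdout.write(line)     # fully silent hang still needs an
            if MARKER in line:         # outer CI-level timeout as backstop
                return True
            if time.monotonic() > deadline:
                return False
        return False                   # QEMU exited without the marker
    finally:
        proc.kill()

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```

A job along these lines could run as a GitLab CI stage before a patch reaches staging; the same
pipeline could then publish the unsigned RPMs and debug builds mentioned above as downloadable
artifacts.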
Then there was a discussion on how we could make more downstreams coalesce around the same Xen
releases. This discussion was driven by Doug G, but we didn't come to a conclusion. If we were
able to do this, we could introduce a model similar to Debian's, where downstreams pull
together and fund a longer support period for the Xen version which the majority of downstreams
use, without impacting the stable release maintainer.
Doug mentioned to me afterwards that there is a very active Xen Slack channel for distros,
where we could test this idea.
Another approach would be to follow the model of the Wayland and X11 communities, which have
different support periods for odd and even releases: they use odd-unstable/even-stable
versioning, where an odd minor version number marks a development release and an even one a
stable release. This may be worth investigating further.
There was also a discussion about Ubuntu, which struggled with its release model for 4 years
but stuck with it and essentially fixed the underlying issues.
*ACTION:* Lars volunteered to put together a document covering the release cycle, similar to
the one we did for Security. The summit discussion provided some interesting pointers and
insights. As a first step, we ought to get a clearer picture of the pain points we experience
in the release cycle today.
### Security Process
*Batches and timing:* Everyone present (with the exception of Doug G) felt that informal
batching is good, but that we should not move to a Patch Tuesday model. For that to work, we
would need significantly more man-power on the security team than we have now. In addition, it
would imply deferring *critical issues* merely to hit an arbitrary date, which was perceived as
bad, in particular by the downstream reps at the meeting. However, as in the xen-devel@
discussion, there was no disagreement with codifying batching as an option in our security
policy, as long as it remains flexible enough.
Again, there was a sense that some of the issues we are seeing could be resolved by:
* Better CI capability, as suggested in the Release Cadence discussion
* Improving some of the internal working practices of the security team
* Trying changes (such as improved batching) informally before committing to them; e.g. the
security team could work towards more predictable dates for batches rather than making a
concrete process change
Note that we did not get to the stable baseline discussion, but it was highlighted that several
members of the security team also wear the hat of distro packager for Debian and CentOS and
are starting to feel pain.
Lars noted that it may make sense to arrange a community call to make further progress on this discussion.