Re: [PATCH] RFC: Version support policy
On 14.02.2022 22:50, George Dunlap wrote:
>> On Aug 19, 2021, at 10:18 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>> On 13.08.2021 13:37, Ian Jackson wrote:
>>> The current policy for minimum supported versions of tools,
>>> compilers, etc. is unsatisfactory: For many dependencies no
>>> minimum version is specified. For those where a version is stated,
>>> updating it is a decision that has to be explicitly taken for that
>>> tool.
>>
>> Considering your submission of this having been close to a glibc
>> version issue you and I have been discussing, I wonder whether
>> "etc" above includes library dependencies as well.
>>
>> In any event the precise scope of what is meant to be covered is
>> quite important to me: There are affected entities that I'm happy
>> to replace on older distros (binutils, gcc). There are potentially
>> affected entities that I'm less happy to replace, but at the time
>> I did work my way through it for example for Python (to still be
>> able to build qemu, the community of which doesn't appear to care
>> at all to have their stuff buildable in older environments). The
>> point where I'd be really in trouble would be when base platform
>> libraries like glibc are required to be a certain minimum version:
>> I'd then be (potentially severely) restricted in what systems I
>> can actually test stuff on.
>
> The question here is, why would someone running a 10-year-old distro
> that's been out of support for 6 years want to run a bleeding edge
> version of Xen? I understand wanting to run Xen 4.16 on (say)
> Ubuntu 18.04, but who on earth would want to run Xen 4.16 on
> Ubuntu 14.04, and why? If such people exist, is it really worth the
> effort to try to support them?

I do this, for the very simple reason of wanting (needing) to be able
to test a large range of Xen versions all on the same small set of
hardware. Internally we're still maintaining versions back to at
least 4.4; upon customer request we (I) may end up needing to even
play with 4.0.

>> In addition I see a difference between actively breaking e.g.
>> building with older tool chains vs (like you have it in your
>> README adjustment) merely a statement about what we believe
>> things may work with, leaving room for people to fix issues with
>> their (older) environments, and such changes then not getting
>> rejected simply because of policy.
>
> Yes; I think the principle should be that we *promise* to keep it
> working on the currently-supported releases of a specific set of
> distros (e.g., Debian, Ubuntu, Fedora, SUSE, RHEL). Working on older
> versions can be best-effort; if simple changes make it compatible
> with older versions, and aren't too burdensome from a code
> complexity point of view, they can be accepted.
>
> One of the issues however is build-time checks. If we have a
> build-time check for version X, but only test it on X+10 or later,
> then the build may break in strange ways when someone tries it on
> something in between.

Well, because most people only test on "X+10 or later", it has
frequently been me running into issues with, in particular, old gcc
versions. And I've been making fixes / workarounds for those. Hence
I wouldn't consider the full range entirely untested. Obviously not
every version in the range would see testing, unless we specifically
arranged for doing so in, say, CI.
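[Editorial note: to make the build-time check point concrete, here is
a minimal, hypothetical sketch of the two kinds of compile-time guard
being discussed. It is not Xen's actual code: the version floors, the
choice of zlib as the example dependency, and the
HAVE_FANCY_COMPRESSION macro are all invented for illustration.]

/* Hard floor: fail the build outright below a stated compiler
 * baseline.  __GNUC__ and __GNUC_MINOR__ are predefined by gcc. */
#if defined(__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__) < 409
# error "gcc >= 4.9 is required"
#endif

/* Soft floor: compile out optional functionality, rather than
 * failing the build, when a library dependency is too old.
 * ZLIB_VERNUM is defined by zlib.h; 0x1280 corresponds to
 * zlib 1.2.8. */
#include <zlib.h>
#if ZLIB_VERNUM >= 0x1280
# define HAVE_FANCY_COMPRESSION 1
#endif

[The failure mode described above arises with guards of the first
kind: the check names 4.9 as the floor, but if routine testing only
ever uses much newer compilers, every version between the floor and
the oldest compiler anyone actually builds with passes the check
while remaining effectively unexercised.]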
> I think it's too much effort to ask developers to try to find the
> actual minimum version of each individual dependency as things
> evolve.

Hmm. On one hand I agree that it may be a lot to ask for. Otoh I
generally take the position that it is okay for advanced
functionality to be unavailable unless certain dependencies are met,
but that base functionality should be provided (almost) indefinitely
far backwards. I did add "(almost)" because I think it is fair for a
project to draw a baseline at the time it is founded. No-one would
expect it to be possible to build Xen with K&R C compilers. Beyond
that, raising the baseline for any component needed for building
needs to consider how difficult it is for people to meet that new
requirement. Speaking for myself, I find it acceptable to build
certain leaf components (binutils, gcc, make, etc., and I've even
worked my way through building newer Python), but things get more
hairy when e.g. a shared library needs replacement. (A prime example
of the latter would be libelf, a halfway recent version of which
Linux's objtool depends upon.)

>> While generally I find Marek's proposal (to tie the baseline to
>> distros of interest) better, in a way it only shifts the issue,
>> I'm afraid.
>
> What do you mean "shifts the issue"? You mean shifts it from
> versions of individual components to versions of distros?

(Half a year later, I first had to go back and check Marek's reply.)
Yes. Individual component versions would then be inferred from
distro versions. While I don't know how other distros handle this,
in ours parts of the tool chain also used to get updated during the
lifetime of a distro version (where I already mean to limit
"version" to e.g. service packs). For binutils this was typically
done by simply replacing the older version, while for gcc it was
typically done by making a newer major version available as an
option. In such cases it then of course becomes fuzzy what the
"distro => component" mapping would be.

> That's why I think we should support only currently-supported
> distros. If the distro's maintainers don't consider the distro
> worth supporting any more, I don't see why we should make the
> effort to do so.

And "currently supported" ends when? After "normal" EOL, or at the
end of what's often called LTS / LTSS? For the latter case: I've
recently learned that we've gained further sub-classes of LTSS, with
different lifetimes.

Jan