Re: [Minios-devel] automation: Creating a patchwork instance to improve pre-commit build testing
On 24/07/2018, 10:06, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> >>> On 23.07.18 at 18:40, <lars.kurth@xxxxxxxxxx> wrote:
> > This does mean though that series which do not build or show other
> > issues will likely not be reviewed until the tests pass. This would
> > lessen the burden on reviewers, as they will know whether the code
> > submitted builds on a wide array of environments.
>
> So how are dependencies between series intended to be dealt with? It is
> not uncommon for someone to say "applies only on top of xyz". The
> implication of "will likely not be reviewed until the tests pass" seems
> unsuitable to me in such a case.

We should look at how this is done in communities which have systems in
place that do some off-list verification of patches, such as QEMU and
Linux (the 0-day test service). Obviously in such cases the test bot would
return results for a failure. The sensible thing to do would be the
following:
* For the submitter of the patch to notify the reviewer(s) to highlight
  the test failure/dependency
* For the reviewer to spot the dependency

In any case, the reviewer would have to decide whether to review a series
which cannot be automatically build-tested off-list at that stage.

Thinking about it a bit more, there are also two places at which things
can go wrong:
a) Failure to apply the patch => this would probably be the most likely
   outcome with a dependency
b) Failure to build => if there is a missing dependency, the series would
   probably fail in ALL build environments

In other words, there should be some tell-tales for this case, which can
probably be highlighted in the bot results.

Regards
Lars

_______________________________________________
Minios-devel mailing list
Minios-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/minios-devel
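[Editorial note appended to the archived message: the apply/build tell-tales
described above could be sketched roughly as follows. This is a hypothetical
illustration, not code from any actual patchwork or test-bot instance; the
function and result names are invented for the example.]

```python
# Hypothetical classifier for pre-commit bot results. It encodes the two
# tell-tales from the mail: (a) a patch that fails to apply is most likely
# missing a dependency; (b) a patch that applies but fails to build in ALL
# environments also suggests a missing dependency, whereas a failure in
# only some environments points at a genuine bug in the series.

def classify(applied: bool, build_results: dict) -> str:
    """applied: whether the series applied to the target branch;
    build_results: mapping of environment name -> build succeeded (bool)."""
    if not applied:
        # a) Failure to apply the patch.
        return "apply-failure: possible missing dependency"
    if build_results and not any(build_results.values()):
        # b) Failure to build in every environment.
        return "build-failure-all: possible missing dependency"
    if all(build_results.values()):
        return "pass"
    return "build-failure-some: likely a genuine bug"

print(classify(False, {}))
print(classify(True, {"x86": False, "arm": False}))
print(classify(True, {"x86": True, "arm": False}))
print(classify(True, {"x86": True, "arm": True}))
```

A bot could attach such a label to its report so that both the submitter and
the reviewers see at a glance whether a red result is worth investigating or
probably just means "applies only on top of xyz".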