Re: [RFC PATCH 22/30] Code tagging based fault injection
On Wed, 31 Aug 2022 at 19:30, Kent Overstreet <kent.overstreet@xxxxxxxxx> wrote:
> > > From: Kent Overstreet <kent.overstreet@xxxxxxxxx>
> > >
> > > This adds a new fault injection capability, based on code tagging.
> > >
> > > To use, simply insert somewhere in your code
> > >
> > >   dynamic_fault("fault_class_name")
> > >
> > > and check whether it returns true - if so, inject the error.
> > > For example
> > >
> > >   if (dynamic_fault("init"))
> > >           return -EINVAL;
> >
> > Hi Suren,
> >
> > If this is going to be used by mainline kernel, it would be good to
> > integrate this with fail_nth systematic fault injection:
> > https://elixir.bootlin.com/linux/latest/source/lib/fault-inject.c#L109
> >
> > Otherwise these dynamic sites won't be tested by testing systems doing
> > systematic fault injection testing.
>
> That's a discussion we need to have, yeah. We don't want two distinct
> fault injection frameworks, we'll have to have a discussion as to
> whether this is (or can be) better enough to make a switch worthwhile,
> and whether a compatibility interface is needed - or maybe there's
> enough distinct interesting bits in both to make merging plausible?
>
> The debugfs interface for this fault injection code is necessarily
> different from our existing fault injection - this gives you a fault
> injection point _per callsite_, which is huge - e.g. for filesystem
> testing what I need is to be able to enable fault injection points
> within a given module. I can do that easily with this, not with our
> current fault injection.
>
> I think the per-callsite fault injection points would also be pretty
> valuable for CONFIG_FAULT_INJECTION_USERCOPY, too.
>
> OTOH, existing kernel fault injection can filter based on task - this
> fault injection framework doesn't have that. Easy enough to add,
> though. Similar for the interval/probability/ratelimit stuff.
>
> fail_function is the odd one out, I'm not sure how that would fit into
> this model. Everything else I've seen I think fits into this model.
>
> Also, it sounds like you're more familiar with our existing fault
> injection than I am, so if I've misunderstood anything about what it
> can do please do correct me.

What you are saying makes sense, but I can't say whether we want to do a
global switch or not. I don't know how many existing users there are (by
users I mean automated testing, because humans can switch for one-off
manual testing).

However, the fail_nth mechanism I mentioned is orthogonal to this. It's a
different mechanism for selecting the fault site that needs to be failed
(similar to what you mentioned as the "interval/probability/ratelimit
stuff"). fail_nth fails the specified n-th call site in the specified
task, and that's the only mechanism we use in syzkaller/syzbot. I think
it can be supported relatively easily (copy a few lines into the "does
this site need to fail" check).

I don't know how exactly you want to use this new mechanism, but I found
fail_nth much better than any of the existing selection mechanisms,
including what this will add for failing a specific site. fail_nth makes
it possible to fail every site reached by a given test/syscall one by
one, systematically. E.g. we could even have an strace-like utility that
repeats the given test, failing all sites in it systematically:

$ fail_all ./a_unit_test

This can be integrated into any CI system, e.g. running all LTP tests
with this.
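To make that concrete, here is a minimal sketch of the per-task loop such
a fail_all wrapper could run around each operation under test. It roughly
follows the example in Documentation/fault-injection/fault-injection.rst;
the socketpair() call and the ENOMEM check are only placeholders for
whatever operation and expected error a real test cares about:

/* Sketch: fail the 1st, 2nd, 3rd, ... fault site reached by one call. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        char buf[32];
        int fds[2], fail_nth, i, res;

        fail_nth = open("/proc/self/fail-nth", O_RDWR);
        for (i = 1; ; i++) {
                /* Arm injection: fail the i-th fault site hit by this task. */
                sprintf(buf, "%d", i);
                write(fail_nth, buf, strlen(buf));

                /* Placeholder for the operation under test. */
                res = socketpair(AF_LOCAL, SOCK_STREAM, 0, fds);
                if (res == 0) {
                        close(fds[0]);
                        close(fds[1]);
                }

                /* Reading back 0 means the i-th site was reached and failed. */
                read(fail_nth, buf, sizeof(buf));
                if (res == 0 || errno != ENOMEM)
                        break;
                printf("%d-th fault: res=%d/%d\n", i, res, errno);
        }
        close(fail_nth);
        return 0;
}

Because fail-nth is per-task, a loop like this only perturbs the calling
thread, which is what makes it usable from a generic wrapper in CI.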
For file:line-based selection: first, we need to get these file:line
pairs from somewhere; second, line numbers change over time, so they
can't be hardcoded in tests; third, it still needs to be per-task, since
unrelated processes can execute the same code.

One downside of fail_nth, though, is that it does not cover background
threads/async work. But we found that there are so many untested
synchronous error paths that moving to background threads is not
necessary at this point.

> Interestingly: I just discovered from reading the code that
> CONFIG_FAULT_INJECTION_STACKTRACE_FILTER is a thing (hadn't before
> because it depends on !X86_64 - what?). That's cool, though.