
Re: [RFC PATCH] xen: add libafl-qemu fuzzer support



On Tue, 19 Nov 2024, Volodymyr Babchuk wrote:
> Hi Stefano,
> 
> Stefano Stabellini <sstabellini@xxxxxxxxxx> writes:
> 
> > On Thu, 14 Nov 2024, Volodymyr Babchuk wrote:
> 
> [...]
> 
> >> +Building LibAFL-QEMU based fuzzer
> >> +---------------------------------
> >> +
> >> +The fuzzer is written in Rust, so you need the Rust toolchain and
> >> +the `cargo` tool on your system. Please refer to your distro
> >> +documentation on how to obtain them.
> >> +
> >> +Once Rust is ready, fetch and build the fuzzer::
> >> +
> >> +  # git clone https://github.com/xen-troops/xen-fuzzer-rs
> >> +  # cd xen-fuzzer-rs
> >> +  # cargo build
> >
> > Is this the only way to trigger the fuzzer? Are there other ways (e.g.
> > other languages or scripts)? If this is the only way, do we expect it to
> > grow much over time, or is it just a minimal shim only to invoke the
> > fuzzer (so basically we need an x86 version of it but that's pretty much
> > it in terms of growth)?
> 
> Well, the original AFL++ is written in C, and I planned to use it
> initially. I wanted to make a QEMU plugin to do basically the same
> thing that LibAFL does: use QEMU to emulate the target platform,
> create a snapshot before running a test, and restore it afterwards.
> 
> But then I found LibAFL. It is a framework for creating fuzzers; it
> implements the same algorithms as the original AFL++ but is more
> flexible, and it already had QEMU support. Also, it seems quite easy
> to reconfigure it for x86 support. I haven't tested this yet, but it
> looks like I only need to change one option in Cargo.toml.
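> 
> Something like this (a hypothetical sketch; the exact crate version
> and feature names would need to be checked against the LibAFL release
> in use):
> 
>   [dependencies]
>   # Select the QEMU target architecture via cargo features, e.g.
>   # switching from "aarch64" to "x86_64". The feature names here are
>   # assumptions; see the libafl_qemu documentation for the real ones.
>   libafl_qemu = { version = "*", default-features = false,
>                   features = ["x86_64", "systemmode"] }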
> 
> This particular fuzzer is based on a LibAFL example, but I am going to
> tailor it to Xen Project-specific needs, like the CI integration you
> mentioned below.

Is my understanding correct that we only need to invoke LibAFL as you
are doing already, and that's pretty much it? We need a better,
Xen-specific configuration, and we need one more way to invoke it to
cover x86, but that's it? So, the expectation is that the code currently
under https://github.com/xen-troops/xen-fuzzer-rs will not grow much?


> As for the test harness, I am using Zephyr currently. My first
> intention was to use XTF, but it is x86-only... I am still considering
> using XTF for x86 runs.
> 
> Zephyr was just the easiest and fastest way to trigger hypercalls. At
> first I tried to use the Linux kernel, but it was hard to cover all
> possible failure paths. Zephyr is much simpler in this regard. Even
> better would be to use MiniOS or XTF, but ARM support in MiniOS is in
> a sorry state and XTF does not work on ARM at all.

There is a not-yet-upstream XTF branch that works on ARM here:
https://gitlab.com/xen-project/fusa/xtf/-/tree/xtf-arm?ref_type=heads


> [...]
> 
> >>  void call_psci_cpu_off(void)
> >>  {
> >> +#ifdef CONFIG_LIBAFL_QEMU_FUZZER_PASS_BLOCKING
> >> +    libafl_qemu_end(LIBAFL_QEMU_END_OK);
> >> +#endif
> >
> > I think we should add a wrapper with an empty implementation in the
> > regular case and the call to libafl_qemu_end when the fuzzer is enabled.
> > So that here it becomes just something like:
> >
> >   fuzzer_success();
> 
> I considered this. In the next version I'll add fuzzer.h with inline wrappers.
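> 
> Something along these lines (just a sketch of what I have in mind;
> final names may differ, and I'm assuming libafl_qemu_end() ends up
> declared in a header that is visible here):
> 
>   /* xen/include/xen/fuzzer.h */
>   #ifndef XEN_FUZZER_H
>   #define XEN_FUZZER_H
> 
>   #ifdef CONFIG_LIBAFL_QEMU_FUZZER_PASS_BLOCKING
>   /* Report a successful fuzzing run to the LibAFL-QEMU harness. */
>   static inline void fuzzer_success(void)
>   {
>       libafl_qemu_end(LIBAFL_QEMU_END_OK);
>   }
>   #else
>   /* Compiles away entirely when the fuzzer is disabled. */
>   static inline void fuzzer_success(void) {}
>   #endif
> 
>   #endif /* XEN_FUZZER_H */
> 
> With that, the call sites in call_psci_cpu_off() and do_poll() become
> just fuzzer_success(), as you suggested.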
> 
> 
> [...]
> 
> >> @@ -1452,6 +1456,10 @@ static long do_poll(const struct sched_poll *sched_poll)
> >>      if ( !guest_handle_okay(sched_poll->ports, sched_poll->nr_ports) )
> >>          return -EFAULT;
> >>
> >> +#ifdef CONFIG_LIBAFL_QEMU_FUZZER_PASS_BLOCKING
> >> +    libafl_qemu_end(LIBAFL_QEMU_END_OK);
> >> +#endif
> >
> > I am not sure about this one: why is this a success?
> 
> The vCPU gets blocked here basically forever, so the test harness
> can't call libafl_qemu_end(LIBAFL_QEMU_END_OK) from its side: it is
> never scheduled again after this point.
> 
> > Honestly, aside from these two comments, this looks quite good. I would
> > suggest adding a GitLab CI job to exercise this, if nothing else, to
> > serve as an integration point since multiple components are required for
> > this to work.
> 
> I was considering this as well. The problem is that fuzzing should run
> for a prolonged period of time. There is no clear consensus on "how
> long", but the most widely accepted period is 24 hours. So it looks
> like it should be something like a "nightly build" task. The fuzzer
> code needs to be extended to support some runtime restriction, because
> right now it runs indefinitely, until the user stops it.

We can let it run for 48 hours continuously every weekend using the
GitLab runners.


> I am certainly going to implement this, but this is a separate topic,
> because it requires changes in the fuzzer app. Speaking of which...
> Right now both the fuzzer and the test harness reside in our GitHub
> repo, as you noticed. I believe it would be better to host them on
> xenbits as an official part of the Xen Project.

Yes, we can create repos under gitlab.com/xen-project for this, maybe a
new subgroup gitlab.com/xen-project/fuzzer.



 

