Re: [RFC PATCH] xen: add libafl-qemu fuzzer support
Hi Stefano,

Stefano Stabellini <sstabellini@xxxxxxxxxx> writes:

> On Thu, 14 Nov 2024, Volodymyr Babchuk wrote:

[...]

>> +Building LibAFL-QEMU based fuzzer
>> +---------------------------------
>> +
>> +The fuzzer is written in Rust, so you need the Rust toolchain and the
>> +`cargo` tool on your system. Please refer to your distro's
>> +documentation on how to obtain them.
>> +
>> +Once Rust is ready, fetch and build the fuzzer::
>> +
>> +    # git clone https://github.com/xen-troops/xen-fuzzer-rs
>> +    # cd xen-fuzzer-rs
>> +    # cargo build
>
> Is this the only way to trigger the fuzzer? Are there other ways (e.g.
> other languages or scripts)? If this is the only way, do we expect it to
> grow much over time, or is it just a minimal shim only to invoke the
> fuzzer (so basically we need an x86 version of it but that's pretty much
> it in terms of growth)?

Well, the original AFL++ is written in C, and I planned to use it
initially. I wanted to write a QEMU plugin doing basically the same
thing LibAFL does: use QEMU to emulate the target platform, create a
snapshot before running a test, and restore it afterwards. But then I
found LibAFL. It is a framework for creating fuzzers; it implements the
same algorithms as the original AFL++ but is more flexible, and it
already has QEMU support.

It also seems quite easy to reconfigure it for x86 support. I haven't
tested this yet, but it looks like I only need to change one option in
Cargo.toml (a sketch follows below).

This particular fuzzer is based on a LibAFL example, but I am going to
tailor it to Xen Project-specific needs, like the CI integration you
mention below.

As for the test harness, I am currently using Zephyr. My first
intention was to use XTF, but it is x86-only; I am still considering
XTF for x86 runs. Zephyr was just the easiest and fastest way to
trigger hypercalls. At first I tried to use the Linux kernel, but it
was hard to cover all possible failure paths; Zephyr is much simpler in
this regard. Even better would be MiniOS or XTF, but ARM support in
MiniOS is in a sorry state and XTF does not work on ARM at all.

[...]

>>  void call_psci_cpu_off(void)
>>  {
>> +#ifdef CONFIG_LIBAFL_QEMU_FUZZER_PASS_BLOCKING
>> +    libafl_qemu_end(LIBAFL_QEMU_END_OK);
>> +#endif
>
> I think we should add a wrapper with an empty implementation in the
> regular case and the call to libafl_qemu_end when the fuzzer is enabled.
> So that here it becomes just something like:
>
> fuzzer_success();

I considered this. In the next version I'll add a fuzzer.h with inline
wrappers, along the lines of the sketch below.
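A minimal sketch, assuming the header ends up somewhere like
xen/include/xen/fuzzer.h and that the libafl_qemu_end() /
LIBAFL_QEMU_END_OK declarations from this patch are reachable via an
<asm/libafl_qemu.h> header (both the paths and the guard name are my
assumptions, not the final patch):

    /* xen/include/xen/fuzzer.h -- hypothetical sketch, not final code */
    #ifndef __XEN_FUZZER_H__
    #define __XEN_FUZZER_H__

    #ifdef CONFIG_LIBAFL_QEMU_FUZZER_PASS_BLOCKING

    #include <asm/libafl_qemu.h> /* assumed location of libafl_qemu_end() */

    /* Tell the LibAFL-QEMU harness that this run ended successfully. */
    static inline void fuzzer_success(void)
    {
        libafl_qemu_end(LIBAFL_QEMU_END_OK);
    }

    #else /* !CONFIG_LIBAFL_QEMU_FUZZER_PASS_BLOCKING */

    /* Compiles away entirely in regular (non-fuzzer) builds. */
    static inline void fuzzer_success(void) {}

    #endif

    #endif /* __XEN_FUZZER_H__ */

With that, the hunks in call_psci_cpu_off() and do_poll() shrink to a
single unconditional fuzzer_success() call, as you suggested.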
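And regarding the x86 switch I mentioned above: LibAFL's QEMU backend
selects the emulated architecture through cargo features, so if I
remember correctly the change amounts to something like the following
in the fuzzer's Cargo.toml (the exact feature names depend on the
libafl_qemu version in use, so treat this as an assumption rather than
a recipe):

    [dependencies.libafl_qemu]
    version = "..."
    default-features = false
    # Replace "aarch64" with "x86_64" to fuzz an x86 target instead of
    # ARM; "systemmode" keeps full-system emulation (vs. usermode).
    features = ["aarch64", "systemmode"]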
[...]

>> @@ -1452,6 +1456,10 @@ static long do_poll(const struct sched_poll *sched_poll)
>>      if ( !guest_handle_okay(sched_poll->ports, sched_poll->nr_ports) )
>>          return -EFAULT;
>>
>> +#ifdef CONFIG_LIBAFL_QEMU_FUZZER_PASS_BLOCKING
>> +    libafl_qemu_end(LIBAFL_QEMU_END_OK);
>> +#endif
>
> I am not sure about this one, why is this a success?

The vCPU gets blocked here basically forever, so the test harness
cannot call libafl_qemu_end(LIBAFL_QEMU_END_OK) from its side: it is
never scheduled again after this point.

> Honestly, aside from these two comments, this looks quite good. I would
> suggest adding a GitLab CI job to exercise this, if nothing else, to
> serve as an integration point since multiple components are required for
> this to work.

I was considering this as well. The problem is that fuzzing should run
for prolonged periods of time. There is no clear consensus on "how
long", but the most widely accepted period is 24 hours. So it looks
like it should be something like a "nightly build" task.

The fuzzer code also needs to be extended to support some runtime
restriction, because right now it runs indefinitely until the user
stops it. I am certainly going to implement this, but it is a separate
topic, because it requires changes in the fuzzer app.

Speaking of which: right now both the fuzzer and the test harness
reside in our GitHub repo, as you noticed. I believe it would be better
to host them on xenbits as an official part of the Xen Project.

-- 
WBR, Volodymyr