
Re: [RFC PATCH] xen: add libafl-qemu fuzzer support


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Tue, 19 Nov 2024 15:16:56 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, George Dunlap <gwd@xxxxxxxxxxxxxx>
  • Delivery-date: Tue, 19 Nov 2024 15:17:26 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHbNucXRmUX3Y7lRUGcS7d55ZSXhw==
  • Thread-topic: [RFC PATCH] xen: add libafl-qemu fuzzer support

Hi Stefano,

Stefano Stabellini <sstabellini@xxxxxxxxxx> writes:

> On Thu, 14 Nov 2024, Volodymyr Babchuk wrote:

[...]

>> +Building LibAFL-QEMU based fuzzer
>> +---------------------------------
>> +
>> +Fuzzer is written in Rust, so you need Rust toolchain and `cargo` tool
>> +in your system. Please refer to your distro documentation on how to
>> +obtain them.
>> +
>> +Once Rust is ready, fetch and build the fuzzer::
>> +
>> +  # git clone https://github.com/xen-troops/xen-fuzzer-rs
>> +  # cd xen-fuzzer-rs
>> +  # cargo build
>
> Is this the only way to trigger the fuzzer? Are there other ways (e.g.
> other languages or scripts)? If this is the only way, do we expect it to
> grow much over time, or is it just a minimal shim only to invoke the
> fuzzer (so basically we need an x86 version of it but that's pretty much
> it in terms of growth)?

Well, the original AFL++ is written in C, and I planned to use it
initially. I wanted to make a QEMU plugin to do basically the same
thing that LibAFL does: use QEMU to emulate the target platform, create
a snapshot before running a test, and restore it afterwards.

But then I found LibAFL. It is a framework for creating fuzzers; it
implements the same algorithms as the original AFL++ but is more
flexible, and it already had QEMU support. Also, it seems quite easy to
reconfigure it for x86 support. I haven't tested this yet, but it looks
like I only need to change one option in Cargo.toml.
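To illustrate, the kind of change I have in mind is a one-line feature
switch in the fuzzer's Cargo.toml. This is only a sketch: the exact
feature names and version depend on the libafl_qemu release actually
used, so treat them as assumptions.

```toml
# Hypothetical excerpt -- feature names depend on the libafl_qemu
# release in use, so these are assumptions, not the actual config.
[dependencies.libafl_qemu]
version = "*"
features = ["x86_64"]   # e.g. instead of "aarch64" for the Arm build
```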

This particular fuzzer is based on a LibAFL example, but I am going to
tailor it to Xen Project-specific needs, such as the CI integration you
mentioned below.

As for the test harness, I am currently using Zephyr. My first
intention was to use XTF, but it is x86-only... I am still considering
using XTF for x86 runs.

Zephyr was just the easiest and fastest way to trigger hypercalls. At
first I tried to use the Linux kernel, but it was hard to cover all
possible failure paths; Zephyr is much simpler in this regard. MiniOS
or XTF would be even better, but ARM support in MiniOS is in a sorry
state and XTF does not work on ARM at all.

[...]

>>  void call_psci_cpu_off(void)
>>  {
>> +#ifdef CONFIG_LIBAFL_QEMU_FUZZER_PASS_BLOCKING
>> +    libafl_qemu_end(LIBAFL_QEMU_END_OK);
>> +#endif
>
> I think we should add a wrapper with an empty implementation in the
> regular case and the call to libafl_qemu_end when the fuzzer is enabled.
> So that here it becomes just something like:
>
>   fuzzer_success();

I considered this. In the next version I'll add fuzzer.h with inline wrappers.


[...]

>> @@ -1452,6 +1456,10 @@ static long do_poll(const struct sched_poll *sched_poll)
>>      if ( !guest_handle_okay(sched_poll->ports, sched_poll->nr_ports) )
>>          return -EFAULT;
>>
>> +#ifdef CONFIG_LIBAFL_QEMU_FUZZER_PASS_BLOCKING
>> +    libafl_qemu_end(LIBAFL_QEMU_END_OK);
>> +#endif
>
> I am not sure about this one, why is this a success?

The vCPU gets blocked here basically forever, so the test harness can't
call libafl_qemu_end(LIBAFL_QEMU_END_OK) from its side: it is never
scheduled again after this point.

> Honestly, aside from these two comments, this looks quite good. I would
> suggest adding a GitLab CI job to exercise this, if nothing else, to
> serve as an integration point since multiple components are required for
> this to work.

I was considering this as well. The problem is that fuzzing should run
for prolonged periods of time. There is no clear consensus on "how
long", but the most widely accepted period is 24 hours, so it looks
like it should be something like a "nightly build" task. The fuzzer
code needs to be extended to support some runtime limit, because right
now it runs indefinitely, until the user stops it.

I am certainly going to implement this, but it is a separate topic,
because it requires changes in the fuzzer app. Speaking of which...
Right now both the fuzzer and the test harness reside in our GitHub
repo, as you noticed. I believe it would be better to host them on
xenbits as an official part of the Xen Project.

--
WBR, Volodymyr
