Fuzzing101 with LibAFL - Part IV: Fuzzing LibTIFF

Nov 26, 2021 | 33 minutes read

Tags: fuzzing, libafl, rust, libtiff, qemu

Twitter user Antonio Morales created the Fuzzing101 repository in August of 2021. In the repo, he has created exercises and solutions meant to teach the basics of fuzzing to anyone who wants to learn how to find vulnerabilities in real software projects. The repo focuses on AFL++ usage, but this series of posts aims to solve the exercises using LibAFL instead. We’ll be exploring the library and writing fuzzers in Rust in order to solve the challenges in a way that closely aligns with the suggested AFL++ usage.

Since this series will be looking at Rust source code and building fuzzers, I’m going to assume a certain level of knowledge in both fields for the sake of brevity. If you need a brief introduction/refresher to/on coverage-guided fuzzing, please take a look here. As always, if you have any questions, please don’t hesitate to reach out.

This post will cover fuzzing libtiff in order to solve Exercise 4. The companion code for this exercise can be found at my fuzzing-101-solutions repository

Previous posts:

Quick Reference

This is just a summary of the different components used in the upcoming post. It’s meant to be used later as an easy way of determining which components are used in which posts.

{
  "Fuzzer": {
    "type": "StdFuzzer",
    "Corpora": {
      "Input": "OnDiskCorpus",
      "Output": "OnDiskCorpus"
    },
    "Input": "BytesInput",
    "Observers": {
      "VariableMapObserver": {
        "coverage map": "EDGES_MAP"
      },
      "TimeObserver": {}
    },
    "Feedbacks": {
      "Pure": ["MaxMapFeedback", "TimeFeedback"],
      "Objectives": ["MaxMapFeedback", "CrashFeedback"]
    },
    "State": {
      "StdState": {
        "metadata": ["Tokens"]
      }
    },
    "Launcher": {
      "Monitor": "MultiMonitor",
      "EventManager": "LlmpRestartingEventManager"
    },
    "Scheduler": "IndexesLenTimeMinimizerScheduler",
    "Executors": {
      "QemuExecutor": {
        "QemuHelpers": [
          "QemuEdgeCoverageHelper",
          "QemuFilesystemBytesHelper",
          "QemuGPRegisterHelper",
          "QemuAsanHelper"
        ]
      }
    },
    "Mutators": {
      "StdScheduledMutator": {
        "mutations": ["havoc_mutations", "token_mutations"]
      }
    },
    "Stages": ["StdMutationalStage"]
  }
}


Before anything, I just want to thank all the awesome folks in the fuzzing discord. They’re incredibly knowledgeable and helped me immensely while working through this series of posts.

Welcome back! This post will cover fuzzing libtiff in the hopes of finding CVE-2016-9297 in version 4.0.6.

According to Mitre regarding CVE-2016-9297, the TIFFFetchNormalTag function in LibTiff 4.0.6 allows remote attackers to cause a denial of service (out-of-bounds read) via crafted TIFF_SETGET_C16_ASCII or TIFF_SETGET_C32_ASCII tag values.

We’re going to switch it up this time and arbitrarily enforce some constraints on our session. We’re going to fuzz the tiffinfo binary, but we’re going to treat it as a blackbox binary, i.e. pretend we don’t have source code and only have the binary itself to work with. But wait, there’s more! We’re also going to compile it for a different architecture than our host machine. This will allow us to explore LibAFL from a binary-only fuzzing perspective.

Now that our goal is clear, let’s jump in!

Exercise 4 Setup

Just like our other exercises, we’ll start with overall project setup.


First, we’ll modify our top-level Cargo.toml to include the new project.


members = [
    "exercise-4",
]

And then create the project itself.

cargo new exercise-4

Created binary (application) `exercise-4` package


Next, let’s grab our target library: libtiff


wget https://download.osgeo.org/libtiff/tiff-4.0.6.tar.gz
tar xf tiff-4.0.6.tar.gz
mv tiff-4.0.6 tiff
rm tiff-4.0.6.tar.gz

Once complete, our directory structure should look similar to what’s below.

├── Cargo.toml                                                   
├── src                                      
│   └── main.rs
└── tiff                                  
    ├── aclocal.m4                           

Like we’ve done in the past, let’s make sure we can build everything normally. We’ll start with creating our build directory.


mkdir build

Recall from the intro that we’re going to cross-compile for a different architecture. Specifically, we’ll be cross-compiling for the 64-bit arm architecture, aka aarch64. In order to do that, we’ll need an alternate gcc toolchain. The command to install the toolchain (for apt-based distros) is below.

sudo apt install gcc-aarch64-linux-gnu

After that, we use our aarch64 toolchain to compile libtiff.

cd tiff/
./configure --prefix="$(pwd)/../build/" --target aarch64-unknown-linux-gnu --disable-cxx --host x86_64-unknown-linux-gnu CC=aarch64-linux-gnu-gcc
make install

Once complete, our build directory will look like this:

ls -al ../build/

drwxrwxr-x 2 epi epi 4096 Nov 26 14:32 include
drwxrwxr-x 2 epi epi 4096 Nov 26 14:32 bin
drwxrwxr-x 4 epi epi 4096 Nov 26 14:32 share
drwxrwxr-x 3 epi epi 4096 Nov 26 14:32 lib

We can confirm that our build succeeded by checking for the architecture of our target binary in the bin folder.

file ../build/bin/tiffinfo

../build/bin/tiffinfo: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/, BuildID[sha1]=d10ed7cea8959c9f50fe97c2b552b093eec9fb57, for GNU/Linux 3.7.0, with debug_info, not stripped

That will do as confirmation that we're properly set up.


Once again, we’ll solidify all of our currently known build steps, along with a few standard ones, into our Makefile.toml.

# composite tasks
[tasks.clean]
dependencies = ["clean-cargo", "clean-tiff", "clean-build-dir"]

[tasks.build]
command = "true"
args = []
dependencies = [
    "build-fuzzer",
    "make-directories",
    "configure-tiff",
    "build-tiff",
    "copy-project-to-build",
]

# clean up tasks
# note: task names other than copy-project-to-build are reconstructions
[tasks.clean-cargo]
command = "cargo"
args = ["clean"]

[tasks.clean-tiff]
command = "make"
args = ["-C", "tiff", "clean"]

[tasks.clean-build-dir]
command = "rm"
args = ["-rf", "build/"]

# build tasks
[tasks.build-fuzzer]
command = "cargo"
args = ["build", "--release"]

[tasks.make-directories]
command = "mkdir"
args = ["-p", "corpus", "crashes", "build"]

[tasks.copy-project-to-build]
command = "cp"
args = ["../target/release/exercise-4", "build/"]

[tasks.configure-tiff]
cwd = "tiff"
script = """
./configure --prefix="${CARGO_MAKE_WORKING_DIRECTORY}/../build/" --target aarch64-unknown-linux-gnu --disable-cxx --host x86_64-unknown-linux-gnu CC=aarch64-linux-gnu-gcc
"""

[tasks.build-tiff]
cwd = "tiff"
script = """
make install
"""

We can perform a test run of our build task

cargo make build

And then see that we’re still building our targets correctly.

file ./build/bin/tiffinfo

./build/bin/tiffinfo: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/, BuildID[sha1]=d10ed7cea8959c9f50fe97c2b552b093eec9fb57, for GNU/Linux 3.7.0, with debug_info, not stripped

QEMU Setup

As noted previously, we’ll be treating tiffinfo as if we can’t compile it from source. That means we’ll need a way to inject instrumentation into the target. Additionally, we’re dealing with a 64-bit ARM target, which means we’ll need a way to execute non-native cpu instructions. In order to solve both of these problems, we’ll turn to QEMU! More specifically, we’re going to use LibAFL’s QEMU bindings, which recently got a very nice overhaul from @andreafioraldi.

According to the QEMU wiki, “QEMU is a generic and open source machine emulator and virtualizer”. We’re interested in QEMU’s user-mode emulation capability, which we’ll leverage to run our aarch64 binary on an x86_64 host machine. QEMU is able to run non-native binaries by executing the target (ARM) instructions using an emulated CPU. During emulated execution, QEMU captures the syscalls made by the target program and forwards them to our host’s kernel. The LibAFL bindings go a step further and, in addition to execution, use QEMU to insert instrumentation at (emulated) runtime. Knowing how we plan to solve our execution and instrumentation problems, let’s check out setting up QEMU.


One common issue when running non-native binaries via qemu-user is that of missing library dependencies. When emulating a dynamically linked aarch64 binary, the binary will expect a dynamic linker and shared libraries (glibc and friends), and it will expect those dependencies to have been compiled for the same architecture for which it was compiled. Therein lies the crux of the issue: the binary expects ARM libraries that our x86_64 host doesn't provide. We'll fix this problem by using debootstrap. debootstrap is a tool which will install a Debian-based filesystem into a given subdirectory on an already running/installed operating system. Essentially, it creates an entire root filesystem. More importantly, it can build that filesystem with a different architecture's libraries. We can easily create an aarch64 root filesystem with the following commands:

sudo apt update -y && sudo apt install debootstrap
mkdir jammy-rootfs
sudo debootstrap --arch=arm64 jammy jammy-rootfs/

The debootstrap command takes a while to run, but once complete, results in an entire Linux filesystem at the specified location, which is pretty slick.

ls -altr jammy-rootfs/
total 20484
drwxr-xr-x   2 root root     4096 Apr 19  2021 sys
drwxr-xr-x   2 root root     4096 Apr 19  2021 proc
drwxr-xr-x   2 root root     4096 Apr 19  2021 home
drwxr-xr-x   2 root root     4096 Apr 19  2021 boot
lrwxrwxrwx   1 root root        8 Dec 26 07:25 sbin -> usr/sbin

We can check a few things to make sure we have an aarch64-based rootfs.

file jammy-rootfs/lib/aarch64-linux-gnu/ 
jammy-rootfs/lib/aarch64-linux-gnu/ ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV), dynamically linked, BuildID[sha1]=5c21282c155fd5993099aacf76da8a6cf9176b3c, stripped
file jammy-rootfs/usr/lib/aarch64-linux-gnu/ 
jammy-rootfs/usr/lib/aarch64-linux-gnu/ ELF 64-bit LSB shared object, ARM aarch64, version 1 (GNU/Linux), dynamically linked, interpreter /lib/, BuildID[sha1]=ad13636ad72bcdff7c0f5fe32b97a4e6bb919a11, for GNU/Linux 3.7.0, stripped

Nice, we’ve got aarch64 versions of ld and glibc! That’s all we need to do for QEMU until we’re ready to run the target binary, so let’s keep it moving.


We’ll also take a moment to update our Makefile.toml. Whenever we compile our fuzzer and use the LibAFL QEMU bindings, two architecture specific shared objects will be created by libafl_qemu. To keep everything together, we’ll want to move those shared objects into our build folder.

To codify the movement of one of them into our build process, we just need to update the copy-project-to-build key. We’ll also need to ensure that the ASAN shared object is built with the cross compiler by specifying the CROSS_CC environment variable when building with cargo.

env = {CROSS_CC = "aarch64-linux-gnu-gcc"}

command = "cp"
args = [

With the target and QEMU ready to go, we’re ready to start writing our fuzzer!

Parser Setup

There used to be a whole section about creating a wholly separate crate from the module we wrote in part 3 here, but thankfully, that code was made a lot more robust and included into LibAFL!


With that included, we've got a parser we can reuse from here on out, and integrating it is as simple as adding a line to our Cargo.toml *chef's kiss*. Let's keep it moving.

Fuzzer Setup

Ok, we have an aarch64 rootfs and a blackbox binary of the same architecture; now we can start gathering the requisite pieces of our fuzzer. Thankfully, the style of fuzzer we’ll be writing is mostly self-contained. We’ll be using LibAFL’s Launcher, which does essentially the same thing we’ve done with LlmpRestartingEventManager in previous fuzzers, but wrapped in a nicer interface. Also, we’ll be using libafl_qemu to deal with the QEMU related bits of the fuzzer.


We should take a moment to lay out our overall strategy. In order to figure out how to proceed, we need to do some analysis on the target.

We know that the CVE advisory cites the TIFFFetchNormalTag function as the cause of the issue. In my experience, that kind of information may or may not be accurate. If we dig a little deeper, the comment in the libtiff repo for the commit that fixes the problem reads:

in TIFFFetchNormalTag(), make sure that values of tags with TIFF_SETGET_C16_ASCII / TIFF_SETGET_C32_ASCII access are null terminated, to avoid potential read outside buffer in _TIFFPrintField().

So, the fix was applied in TIFFFetchNormalTag, but the problem is actually in _TIFFPrintField. Looking in binary ninja, using cross references, we can learn that in order to reach _TIFFPrintField, our code needs to take a path similar to what’s shown below.

└──❯ main
    └──❯ tiffinfo
        └──❯ TIFFPrintDirectory
            └──❯ _TIFFPrintField

The tiffinfo function is only called from main.


We can see in main that the value returned from TIFFOpen is eventually passed into tiffinfo.


It’s relatively safe to assume that TIFFOpen is a wrapper around fopen (or similar), and that the opened file is passed into tiffinfo.

Looking at TIFFClientOpen, which is called by TIFFOpen, we can start at the return value to see if we can figure out what’s being returned. It looks like we’re interested in the x23 variable.


If we go back up to the start, we see a malloc call’s return value populating our variable of interest. The very next thing that happens is a call to memset on the same value. After memset, initial values are set at offsets into the malloc’d memory.


It appears as though TIFFClientOpen is not only going to read in the file, but will also parse it into a data structure. All of this gathered information will assist us in choosing how we go about fuzzing the target.

Fuzzing Strategy

Ok, we need to determine how we want to get input into tiffinfo. We know that it accepts some parsed data structure as its first argument. We have a few options for how we proceed.

We could:

  • run the target in gdb under normal conditions, set a breakpoint on tiffinfo, and dump the memory of the first arg to disk. The memdump could then be our starting point for mutation.
  • blindly throw data at tiffinfo’s first argument, and let coverage guidance figure out what the data should look like
  • execute a large chunk of main, allowing TIFFOpen to be called, and hooking the read syscall to pass mutated data down to tiffinfo

Of these three options, we'll go with the third, since it's a strategy we're likely to reuse in other fuzzing scenarios.


As usual, we’ll start by adding our dependencies. The primary difference compared to other posts is the libafl_qemu crate (and the qemu_cli feature flag to turn on the cli parser discussed earlier, of course 😁), which provides those QEMU bindings we discussed earlier. Also, since we’re targeting an aarch64 binary, we need to turn on the aarch64 feature flag for the libafl_qemu crate.

libafl = { version = "0.10.1", features = ["qemu_cli"] }
libafl_qemu = { version = "0.10.1", features = ["aarch64"] }

The aarch64 feature flag exposes the libafl_qemu::aarch64 module and brings its public items into the top-level libafl_qemu namespace. As a result, items like the Regs enum become aarch64-specific (snippet shown below).

// libafl_qemu/
// ════════════════════════════

#[cfg(cpu_target = "aarch64")]
pub mod aarch64;
#[cfg(all(cpu_target = "aarch64", not(feature = "clippy")))]
pub use aarch64::*;

That’s it for Cargo.toml, let’s move on.


Checking the LibTiff repo, we see that there are images provided under tiff/test/images/. Since our goal is to find CVE-2016-9297, which cites the TIFFFetchNormalTag function as its entrypoint, we'll want to grab a few .tiff files for our corpus.

mkdir corpus
cp tiff/test/images/*.tiff corpus

After which, our corpus directory should look similar to what’s below.

-rw-r--r--  1 epi epi    166 Dec 27 07:19 logluv-3c-16b.tiff
-rw-r--r--  1 epi epi  12322 Dec 27 07:19 palette-1c-4b.tiff
-rw-r--r--  1 epi epi   3312 Dec 27 07:19 palette-1c-1b.tiff
-rw-r--r--  1 epi epi   3289 Dec 27 07:19 miniswhite-1c-1b.tiff
-rw-r--r--  1 epi epi   4068 Dec 27 07:19 minisblack-2c-8b-alpha.tiff
-rw-r--r--  1 epi epi  24001 Dec 27 07:19 minisblack-1c-8b.tiff
-rw-r--r--  1 epi epi  47733 Dec 27 07:19 minisblack-1c-16b.tiff
-rw-r--r--  1 epi epi  71470 Dec 27 07:19 rgb-3c-8b.tiff
-rw-r--r--  1 epi epi 142670 Dec 27 07:19 rgb-3c-16b.tiff
-rw-r--r--  1 epi epi  27576 Dec 27 07:19 quad-tile.jpg.tiff
-rw-r--r--  1 epi epi  25548 Dec 27 07:19 palette-1c-8b.tiff

As stated earlier, there aren't many external components for us this time around (no compiler, no harness.c, etc…). As a result, we're ready to start writing the fuzzer (for real this time), so let's get after it!

Writing the Fuzzer

For the following sections, keep in mind that we’re still examining each component, but will only cover new material in-depth. Components/code seen in previous posts will have a quick-reference description and a link to the original discourse.

Components: Corpus + Input


OnDiskCorpus (input):

  • first-seen: Part 1
  • purpose: holds all of our current testcases, backed by files on disk
  • why: an on-disk corpus keeps the queue available for inspection and reuse across runs


OnDiskCorpus (solutions):

  • first-seen: Part 1
  • purpose: location at which fuzzer solutions are stored
  • why: solutions on disk can be used for crash triage

let fuzzer_options = cli::parse_args();

let corpus_dirs = fuzzer_options.input.as_slice();

let input_corpus = OnDiskCorpus::new(fuzzer_options.output.join("queue"))?;

let solutions_corpus = OnDiskCorpus::new(fuzzer_options.output)?;

Component: Emulator


An Emulator provides the methods necessary to interact with the emulated target binary. We’ll use the init_with_asan helper function to add ASAN to our fuzzer.

let mut env: Vec<(String, String)> = env::vars().collect();

let emu = libafl_qemu::init_with_asan(&mut fuzzer_options.qemu_args, &mut env)?;

Once we have an instantiated Emulator, we’ll want to get it into the proper state before handing it off to the QemuExecutor. We’ll start the process by loading our fuzz target from disk using libafl_qemu’s EasyElf struct and then getting a pointer to the target’s main function.

let mut buffer = Vec::new();
let elf = EasyElf::from_file(emu.binary_path(), &mut buffer)?;

let main_ptr = elf.resolve_symbol("main", emu.load_addr()).unwrap();

Since we’re not interested in parsing command line arguments every time we execute the target with new input, we’ll run until we hit main, and then set our entrypoint to be past the getopt code by adding a static offset to our main pointer. The offset can be found using a disassembler (binary ninja shown below).


While we’re at it, we’ll grab an address near the end of main that will mark the end of our emulated execution.

While choosing a stopping point, we need to pay special attention to the optind variable. The optind variable is the index of the next argument that should be handled by the getopt function.

The for loop that we’re inserting ourselves into is trying to run a bunch of code for each file passed on the command line. If we allow optind to increment each time our fuzzer runs a testcase, the access into the argv array (argv[optind]) will happily walk into our environment variables and then eventually off into the wild blue yonder, causing a segfault (not the good kind).
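The hazard can be sketched in plain Rust (a toy model, not the target's actual C code): if the loop index keeps growing across persistent-mode iterations, the argv lookup eventually walks past the array. In C, that out-of-bounds index would read into the environment and beyond; Rust's checked indexing models the same situation as `None`.

```rust
// Toy model of the getopt loop hazard: `optind` is the index of the next
// command-line argument to process. In persistent mode we re-run the loop body
// many times; if optind kept incrementing, argv[optind] would walk off the end.
fn fetch_testcase<'a>(argv: &[&'a str], optind: usize) -> Option<&'a str> {
    // checked indexing: an out-of-bounds optind yields None instead of
    // reading adjacent memory like the C code would
    argv.get(optind).copied()
}
```

This is exactly why we stop emulation before the increment: every iteration then re-reads the same `argv[optind]`.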

If we look closely at the disassembly, we can see that before the return, the compiler has placed the increment/branch logic at the bottom of the loop. This means, we simply need to choose an offset prior to optind getting incremented.


In case you need a refresher on ARM assembly, the first three instructions in the basic block are as follows:

  • ldr x0, [x27, #0xf88] - load the optind variable's address into register x0
  • ldr w1, [x0] - dereference the optind variable's address, loading its value into w1
  • add w1, w1, #0x1 - increment the optind variable

There we go, main+0x144 will work for the end address.

Armed with those two offsets from main, we’ll set a breakpoint on the start address and emulate execution until we arrive there.

let adjusted_main_ptr = main_ptr + 0x178;
let ret_addr = main_ptr + 0x144;

emu.set_breakpoint(adjusted_main_ptr);
unsafe { emu.run() };

At this point, the emulator is paused, and won’t continue until we call .run in the harness (shown later). The state of the registers as they are now will be what’s captured in our QemuGPRegisterHelper (also shown later) as the ‘known good’ state. The QemuGPRegisterHelper will allow us to reset registers to these values from within the harness, effectively making this a persistent mode fuzzer (similar to using AFL_ENTRYPOINT).

Now that we’ve allowed the emulator to hit our first breakpoint, we’ll remove that breakpoint and place a new one at the address where we want execution to stop.

emu.remove_breakpoint(adjusted_main_ptr);
emu.set_breakpoint(ret_addr);

Finally, we’ll reserve some space for our BytesInput in memory. Reserving memory like this will allow us to manage it during calls to mmap and munmap.

let input_addr = emu.map_private(0, MMAP_SIZE, MmapPerms::ReadWrite).unwrap();

Component: Harness

Harness as a closure:

  • first-seen: Part 1.5
  • purpose: accepts bytes that have been mutated by the fuzzer and runs the emulated binary via the Emulator
  • why: allows us to capture outer scope and is what the QemuExecutor expects as its first argument (FnMut(Input) -> ExitKind)

Even though we’ve used a closure as our harness before, this one is a little different, in that we’re not just calling LLVMFuzzerTestOneInput with the BytesInput.

Thankfully, all our harness really needs to do is call Emulator::run() and allow execution to flow until it hits the ret_addr breakpoint we set earlier.

Unlike previous harnesses, the BytesInput value is taken care of by a QemuHelper that we’ll examine shortly, so there’s no need to do anything with it here. Additionally, another QemuHelper will take care of resetting registers to the same state they were in at the adjusted_main breakpoint. These two helpers greatly simplify our harness.

let mut harness = |_: &BytesInput| {
    unsafe { emu.run() };
    ExitKind::Ok
};

Component: Client Runner

The Client Runner is essentially the ‘main’ function for each client to run. The core code will look the same as our other fuzzers, but this time, it will be wrapped in a closure that will be passed to the Launcher for actual execution. The majority of the remaining components will be contained within this closure.

The parameters for the closure are Option<StdState>, LlmpRestartingEventManager, and usize. Those parameters are mostly managed by the Launcher and not really something we need to worry about.

let mut run_client = |state: Option<_>, mut mgr, _core_id| {
    // Component: Observer
    // ... component creation elided; each piece is covered in the sections below ...

    fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
    Ok(())
};


Component: Observer


HitcountsMapObserver:

  • first-seen: Part 1
  • purpose: augments the edge coverage provided by the underlying map observer with a bucketized branch-taken counter
  • why: can distinguish between interesting control flow changes, like a block executing twice when it normally happens once


TimeObserver:

  • first-seen: Part 1
  • purpose: provides information about the current testcase to the fuzzer
  • why: track the start time and how long it took the last testcase to execute


The VariableMapObserver is similar to other MapObservers we’ve seen before, but uses a variable map size. The libafl_qemu::edges module re-exports the same EDGES_MAP and MAX_EDGES_NUM from libafl_targets, which means we’re using the sancov backend for instrumentation. The edges_map_mut_slice function is simply a convenience wrapper around the raw EDGES_MAP pointer.

let var_map_observer = unsafe {
    HitcountsMapObserver::new(VariableMapObserver::from_mut_slice(
        "edges",
        edges_map_mut_slice(),
        addr_of_mut!(MAX_EDGES_NUM),
    ))
};

let time_observer = TimeObserver::new("time");

Component: Feedback


MaxMapFeedback:

  • first-seen: Part 1
  • purpose: determines if there is a value in the coverage map that is greater than the current maximum value for the same entry
  • why: decides whether a new input is interesting based on its coverage map


TimeFeedback:

  • first-seen: Part 1
  • purpose: keeps track of testcase execution time
  • why: decides if the value of its TimeObserver is interesting, but can’t mark a testcase as interesting on its own


CrashFeedback:

  • first-seen: Part 2
  • purpose: examines the ExitKind of the current harness’s run
  • why: decides if the current testcase is interesting based on whether the testcase resulted in an ExitKind::Crash or not

let mut feedback = feedback_or!(
    MaxMapFeedback::tracking(&var_map_observer, true, false),
    TimeFeedback::with_observer(&time_observer)
);

let mut objective = feedback_and_fast!(
    MaxMapFeedback::tracking(&var_map_observer, true, false),
    CrashFeedback::new()
);

Component: State


StdState:

  • first-seen: Part 1
  • purpose: stores the current state of the fuzzer
  • why: it’s basically our only choice at the moment


let mut state = state.unwrap_or_else(|| {
    StdState::new(
        StdRand::with_seed(current_nanos()),
        input_corpus.clone(),
        solutions_corpus.clone(),
        &mut feedback,
        &mut objective,
    )
    .unwrap()
});

We’ve covered StdState before, but this time, we’re adding some metadata to our state in the form of Tokens. If you’re familiar with AFL’s idea of dictionaries, then you’re in luck! Tokens cover the same concept, just with a new name. The new nomenclature was selected because the KEYs are ignored by fuzzers (AFL included) and can be omitted. The only part that ever mattered to the fuzzer was the VALUE, thus the name change to a token.

if state.metadata_map().get::<Tokens>().is_none() && !fuzzer_options.tokens.is_empty() {
    let tokens = Tokens::new().add_from_files(&fuzzer_options.tokens)?;
    state.add_metadata(tokens);
}
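As a toy illustration of the dictionary format (not LibAFL's actual parser), each line of an AFL-style dict looks like `name="value"`, and only the quoted VALUE ever reaches the fuzzer (real dictionaries also support escapes like \x00, ignored here):

```rust
// Hypothetical, simplified extraction of a token's VALUE from an AFL-style
// dictionary line; the KEY before '=' is documentation for humans only.
fn token_value(line: &str) -> Option<&str> {
    let start = line.find('"')? + 1; // first quote opens the value
    let end = line.rfind('"')?;      // last quote closes it
    (start <= end).then(|| &line[start..end])
}
```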

While we’re on the subject of tokens, let’s figure out what our token file will contain. There are plenty of resources out there for generating your own set of tokens, based on your target. Additionally, if we were using afl-clang-lto to compile our binary, we’d get a set of tokens extracted and integrated into our fuzzer for free!

All that’s cool, but we’re going to take the path of least resistance, based on our current set of circumstances. Instead of generating our own tokens, we’ll use a set that’s already available for the tiff file format.

This unofficial AFL repo’s dictionaries folder contains a few ready-made dictionaries (i.e. sets of tokens). We’ll grab the tiff.dict file and save it to disk in our tiff/ directory.

wget -O tiff/tiff.dict

Now, our fuzzer will have a set of tokens available during its mutation stages, which is pretty choice.

Component: Scheduler


QueueScheduler:

  • first-seen: Part 1
  • purpose: contains corpus testcases
  • why: provides the backing queue for a corpus minimizer


IndexesLenTimeMinimizerScheduler:

  • first-seen: Part 1
  • purpose: the minimization policy applied to the corpus
  • why: prioritizes quick/small testcases that exercise all of the entries registered in the coverage map’s metadata

let scheduler = IndexesLenTimeMinimizerScheduler::new(QueueScheduler::new());

Component: Fuzzer


StdFuzzer:

  • first-seen: Part 1
  • purpose: houses our other components
  • why: it’s basically our only choice at the moment

let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);

Component: QemuHelper

The QemuFilesystemBytesHelper and syscall hook (discussed below) were derived from code provided by @andreafioraldi, who was instrumental in this post seeing the light of day!



We saw earlier that we’re essentially ignoring the mutated BytesInput that’s coming into our harness. That’s because our mutated input will be handled by our custom QemuHelper and our hooked syscalls (discussed next).

Our QemuHelper’s main purpose is to assist us with passing information/performing tasks that cross the divide between our harness closure and other parts of our code (the syscall hook, for instance). If the LibAFL authors didn’t provide this kind of solution, we’d be stuck using lazy_static or global static muts in order to achieve the same result. QemuHelpers can be thought of as plugins for the QemuExecutor.

Our helper/plugin will store the buffer generated by calling BytesInput::target_bytes() and the address of our managed memory.

#[derive(Default, Debug)]
struct QemuFilesystemBytesHelper {
    bytes: Vec<u8>,
    mmap_addr: u64,
}

Next, we’ll implement the QemuHelper trait.

For every QemuHelper passed to QemuHooks, QemuHelper::init is called by QemuHooks::new.

We’ll use the call to QemuHelper::init to pass our syscall hook into QemuHooks::syscalls, which is the proper place to pass our hook. The hooks on the Emulator are ‘raw’ C hooks, and not what we’re looking for in this particular case.

Similar to QemuHelper::init, QemuHelper::pre_exec is called via a QemuHelperTuple. Each QemuHelper in the tuple can expect to have its pre_exec called on every fuzz iteration. The flow for QemuExecutor is (basically) as follows:

├──❯ pre_exec_all
├──❯ inner.run_target
└──❯ post_exec_all
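A toy model of that flow (plain Rust, not LibAFL's real traits) shows why helpers are a convenient home for per-iteration setup and teardown:

```rust
// Toy model of the QemuExecutor flow: every helper's pre_exec runs before the
// target, and every helper's post_exec runs after it.
trait Helper {
    fn pre_exec(&mut self, log: &mut Vec<String>);
    fn post_exec(&mut self, log: &mut Vec<String>);
}

struct Executor {
    helpers: Vec<Box<dyn Helper>>,
}

impl Executor {
    fn run_target(&mut self, log: &mut Vec<String>) {
        self.helpers.iter_mut().for_each(|h| h.pre_exec(log)); // pre_exec_all
        log.push("run".into()); // inner.run_target
        self.helpers.iter_mut().for_each(|h| h.post_exec(log)); // post_exec_all
    }
}

// hypothetical helper standing in for something like QemuGPRegisterHelper
struct ResetRegs;
impl Helper for ResetRegs {
    fn pre_exec(&mut self, log: &mut Vec<String>) {
        log.push("pre".into());
    }
    fn post_exec(&mut self, log: &mut Vec<String>) {
        log.push("post".into());
    }
}
```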

We’ll use the pre_exec call to perform what would normally be placed at the beginning of the harness code. We’ll save off the buffer for use later in the syscall hook as well as ensure its length is within the size we specified when creating our managed memory.

impl<UI> QemuHelper<UI> for QemuFilesystemBytesHelper
where
    UI: UsesInput<Input = BytesInput>,
{
    fn init_hooks<QT>(&self, hooks: &QemuHooks<'_, QT, UI>)
    where
        QT: QemuHelperTuple<UI>,
    {
        hooks.syscalls(syscall_hook::<QT, UI>);
    }

    fn pre_exec(&mut self, _emulator: &Emulator, input: &<UI as UsesInput>::Input) {
        let target = input.target_bytes();
        let mut buf = target.as_slice();

        // enforce the limit we chose when reserving the managed memory
        if buf.len() > MMAP_SIZE {
            buf = &buf[0..MMAP_SIZE];
        }

        // stash the (possibly truncated) mutated input for the read hook
        self.bytes.clear();
        self.bytes.extend_from_slice(buf);
    }
}


Now that we’ve seen one QemuHelper, the next shouldn’t be too difficult to step through. As noted earlier when looking at the harness and emulator, the QemuGPRegisterHelper is responsible for resetting registers to a known good state in its pre_exec method. Since we’ve already looked at how a QemuHelper works, this time we’ll just examine the implementation.

The register_state member is a vector of values representing each register’s saved value.

#[derive(Default, Debug)]
struct QemuGPRegisterHelper {
    register_state: Vec<u64>,
}

QemuGPRegisterHelper::new is responsible for creating a new instance and saving off all of the current registers. On the other hand, QemuGPRegisterHelper::restore will attempt to overwrite the emulator’s current register values with the values it saved off in new.

impl QemuGPRegisterHelper {
    /// save off the value of every register at creation time
    fn new(emulator: &Emulator) -> Self {
        let register_state = (0..emulator.num_regs())
            .map(|reg_idx| emulator.read_reg(reg_idx).unwrap_or(0))
            .collect::<Vec<_>>();

        Self { register_state }
    }

    /// restore the emulator's registers to the values saved off in `new`
    fn restore(&self, emulator: &Emulator) {
        self.register_state
            .iter()
            .enumerate()
            .for_each(|(reg_idx, reg_val)| {
                if let Err(e) = emulator.write_reg(reg_idx as i32, *reg_val) {
                    println!(
                        "[ERR] Couldn't set register x{} ({}), skipping...",
                        reg_idx, e
                    );
                }
            });
    }
}
Inside pre_exec we’ll simply call .restore(), which completes our QemuGPRegisterHelper logic.

impl<UI> QemuHelper<UI> for QemuGPRegisterHelper
where
    UI: UsesInput<Input = BytesInput>,
{
    fn pre_exec(&mut self, emulator: &Emulator, _input: &<UI as UsesInput>::Input) {
        self.restore(emulator);
    }
}
That’s it for our QemuHelpers, next we’ll look at our syscall hook.

Component: Syscall Hook

We’ve already covered registering the syscall hook in our helper’s init function, so now we can look at the implementation.

First up, we have the hook’s function signature. The syscall hook accepts a QemuHooks containing all of our QemuHelpers along with the Emulator, and the fuzzer’s State. In addition to those objects, it accepts an i32 representing the syscall number, and 8 u64’s that will be populated with the values in the corresponding registers.

fn syscall_hook<QT, UI>(
    hooks: &mut QemuHooks<QT, UI>, // our instantiated QemuHooks
    _state: Option<&mut UI>,
    syscall: i32,
    x0: u64,
    x1: u64,
    x2: u64,
    _: u64,
    _: u64,
    _: u64,
    _: u64,
    _: u64,
) -> SyscallHookResult
where
    QT: QemuHelperTuple<UI>,
    UI: UsesInput,
{

Once execution flows into the syscall hook, we’ll need to determine if the hooked syscall is one that we’re interested in.

For our purposes, we want to hook read for the reasons already discussed, but we also want to hook exit, exit_group, mmap, and munmap.

Since there are a few branches to look at, we’ll take them one at a time.
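Before walking the branches, it helps to model the hook's contract: returning a value overrides the syscall's result and skips the real syscall, while returning "no result" lets QEMU forward it to the host. A toy sketch in plain Rust (the syscall number is the aarch64 value, included as an assumption for illustration; `SyscallHookResult` is modeled as a plain `Option`):

```rust
// Toy model of SyscallHookResult semantics: Some(v) = fake the syscall and
// return v to the target; None = let the real syscall execute on the host.
const SYS_MUNMAP: i64 = 215; // munmap on aarch64 (assumed for this sketch)

fn hook(syscall: i64, x0: u64, managed_addr: u64) -> Option<u64> {
    match syscall {
        // pretend to unmap our managed memory, but leave it in place
        SYS_MUNMAP if x0 == managed_addr => Some(0),
        // everything else: passthrough to the host kernel
        _ => None,
    }
}
```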

mmap hook

In the mmap hook, instead of allowing mmap to allocate memory, we want to return the address of the memory we created with the emu.map_private call during Emulator setup.

let syscall = syscall as i64;

if syscall == SYS_mmap {
    // man mmap
    //   void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);
    //   The address of the new mapping is returned as the result of the call.
    let fs_helper = hooks
        .helpers_mut()
        .match_first_type_mut::<QemuFilesystemBytesHelper>()
        .unwrap();

    // hand back our pre-reserved allocation instead of letting mmap run
    SyscallHookResult::new(Some(fs_helper.mmap_addr))
}


munmap hook

In our munmap hook, we’re simply checking to see if the address being unmapped is our managed memory location. If it is, we’ll return success, but leave the memory as-is. If it’s any other address, we’ll let the real munmap handle things.

else if syscall == SYS_munmap {
    // man munmap
    //   int munmap(void *addr, size_t length);
    //   On success, munmap() returns 0.  On failure, it returns -1, and errno is set
    let fs_helper = hooks
        .helpers_mut()
        .match_first_type_mut::<QemuFilesystemBytesHelper>()
        .expect("QemuFilesystemBytesHelper not found in helper tuple");

    if x0 == fs_helper.mmap_addr {
        // our managed memory; report success, but leave the mapping intact
        SyscallHookResult::new(Some(0))
    } else {
        // any other address: let the real munmap handle things
        SyscallHookResult::new(None)
    }
}

read hook

The read syscall hook is the most complex, but it’s not too crazy. Even so, we’ll chunk it up a bit.

Just like the others, we’ll start by getting our QemuFilesystemBytesHelper instance.

else if syscall == SYS_read {
  // man read:
  //   ssize_t read(int fd, void *buf, size_t count);
  //   On  success, the number of bytes read is returned (zero indicates end of file)
  //   On error, -1 is returned, and errno is set appropriately.
  let fs_helper = hooks
      .helpers_mut()
      .match_first_type_mut::<QemuFilesystemBytesHelper>()
      .expect("QemuFilesystemBytesHelper not found in helper tuple");

Then, we’ll determine up to what offset into QemuFilesystemBytesHelper.bytes we’ll read.

  let current_len = fs_helper.bytes.len();

  let offset: usize = if x2 == 0 {
      // ask for nothing, get nothing
      0
  } else if x2 as usize <= current_len {
      // normal non-negative read that's less than the current mutated buffer's total
      // length
      x2 as usize
  } else {
      // length requested is more than what our buffer holds, so we can read up to the
      // end of the buffer
      current_len
  };
Next, we’ll remove the bytes from the buffer using drain.

  let drained = fs_helper.bytes.drain(..offset).as_slice().to_owned();
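The drain(..offset).as_slice().to_owned() dance is worth a standalone look; it copies the first offset bytes out and removes them from the buffer in one shot:

```rust
fn main() {
    let mut bytes = vec![0x49u8, 0x49, 0x2a, 0x00, 0xff];

    // Drain the first three bytes: `as_slice` views the yet-to-be-removed
    // elements, and `to_owned` copies them out before the Drain iterator is
    // dropped (the drop is what actually removes them from `bytes`).
    let drained = bytes.drain(..3).as_slice().to_owned();

    assert_eq!(drained, vec![0x49, 0x49, 0x2a]);
    assert_eq!(bytes, vec![0x00, 0xff]);
}
```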

After that, we’ll write the contents that we removed from the buffer into the address with which read was called.

  unsafe {
      hooks.emulator().write_mem(x1, &drained);
  }

Finally, we’ll return the number of bytes we read from the buffer.

  SyscallHookResult::new(Some(drained.len() as u64))

exit* hook

For our final hook, we have the exit and exit_group syscalls. When either of the exit syscalls is called, we’ll call abort instead. Calling abort allows the fuzzer to catch the crash and restart, whereas a call to exit would simply bork our efforts.

else if syscall == SYS_exit || syscall == SYS_exit_group {
    // trade the clean exit for libc's abort, so the fuzzer observes a crash
    unsafe { abort() };
}

All Other Syscalls

For any other syscall, we return SyscallHookResult::new(None). When we pass None to SyscallHookResult::new, it sets the skip_syscall member to false, meaning the original syscall will be allowed to execute normally.
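If the Some/None split feels opaque, here’s a sketch of how SyscallHookResult::new plausibly behaves; treat the field names and layout as an illustration, not the libafl_qemu source:

```rust
/// Illustrative model of libafl_qemu's SyscallHookResult.
struct SyscallHookResult {
    retval: u64,
    skip_syscall: bool,
}

impl SyscallHookResult {
    fn new(value: Option<u64>) -> Self {
        match value {
            // Some(v): skip the real syscall and hand `v` back as its return value
            Some(v) => Self { retval: v, skip_syscall: true },
            // None: run the real syscall unaltered
            None => Self { retval: 0, skip_syscall: false },
        }
    }
}

fn main() {
    let faked = SyscallHookResult::new(Some(42));
    assert!(faked.skip_syscall && faked.retval == 42);

    let passthrough = SyscallHookResult::new(None);
    assert!(!passthrough.skip_syscall);
}
```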

else {
    // all other syscalls pass through to the kernel untouched
    SyscallHookResult::new(None)
}

That’s all for syscalls, let’s press!

Component: Executor


TimeoutExecutor

  • first-seen: Part 1.5
  • purpose: sets a timeout before each target run
  • why: protects against slow testcases and can be used w/ other components to tag timeouts/hangs as interesting


In order to create a QemuExecutor, we first need to create a QemuHooks struct. The QemuHooks struct wraps all of the QemuHelpers and the Emulator as well as provides an api for performing different operations on the emulator via a plethora of hooks.

Notice that we’re passing our QemuFilesystemBytesHelper and QemuGPRegisterHelper into QemuHooks::new. The QemuEdgeCoverageHelper is also passed in to our QemuHooks; it handles the hooks for instrumentation and hitmap tracing. Similarly, the QemuAsanHelper manages the moving parts around the actual ASAN implementation.

let mut hooks = QemuHooks::new(
    &emu,
    tuple_list!(
        // ... QemuEdgeCoverageHelper, QemuFilesystemBytesHelper, QemuGPRegisterHelper ...
        QemuAsanHelper::new(QemuInstrumentationFilter::None, QemuAsanOptions::None),
    ),
);

The QemuExecutor is an in-process executor backed by QEMU. The QemuExecutor wraps the InProcessExecutor, the QemuHooks struct created above, and all of the normal wrapped components we’d expect in an Executor. This gives us an executor that will execute a bunch of testcases within the same process, eliminating a lot of the overhead associated with a fork/exec or forkserver execution model.

We’ll wrap the QemuExecutor with a TimeoutExecutor in order to set a timeout before each run.

let executor = QemuExecutor::new(
    &mut hooks,
    &mut harness,
    tuple_list!(edges_observer, time_observer),
    &mut fuzzer,
    &mut state,
    &mut mgr,
)?;

let mut executor = TimeoutExecutor::new(executor, fuzzer_options.timeout);

Component: Mutator + Stage


StdScheduledMutator

  • first-seen: Part 1
  • purpose: schedules mutations internally
  • why: schedules one of the embedded mutations on each call

StdMutationalStage

  • first-seen: Part 1
  • purpose: one step in the fuzzing process, operates on a single testcase
  • why: default mutational stage; pairs with a range of mutations that will be applied one-by-one (i.e. havoc)


The only difference in the code below, when compared to our first look at these components, is the addition of tokens_mutations. When calling .merge, we’re simply adding two additional Mutators to our normal havoc_mutations.

  • TokenInsert - inserts a random token at a random position in the Input
  • TokenReplace - replaces a random part of the input with one of the tokens we loaded earlier

let mutator = StdScheduledMutator::new(havoc_mutations().merge(tokens_mutations()));

let mut stages = tuple_list!(StdMutationalStage::new(mutator));

Component: Monitor


MultiMonitor

  • first-seen: Part 1.5
  • purpose: displays cumulative and per-client fuzzer statistics
  • why: handles fuzzer introspection reporting for us

let monitor = MultiMonitor::new(|s| {
    println!("{}", s);
});

Component: Launcher


Our last component is the Launcher. A Launcher is responsible for spawning one or more fuzzer instances in parallel. The Launcher struct follows the builder pattern we saw when using ForkserverBytesCoverageSugar in part 3. Under the hood, Launcher is using our old friend LlmpRestartingEventManager.

Creating a Launcher is fairly simple, and shown below.

match Launcher::builder()
    .run_client(&mut run_client)
    // ... remaining builder options elided ...
    .build()
    .launch()
{
    Ok(()) => Ok(()),
    Err(Error::ShuttingDown) => {
        println!("Fuzzing stopped by user. Good bye.");
        Ok(())
    }
    Err(err) => panic!("Failed to run launcher: {:?}", err),
}

That’s our last component! If you’ve actually read all of this post, I’m happy for you, or I’m sorry, whichever makes more sense. Either way, thanks for sticking with me, we’re almost done.

Running the Fuzzer

At this point, we’ve wrapped up everything we need to run our fuzzer, so let’s get going!

Build the Fuzzer

Note: In upgrading from 0.8.1 to 0.10.1, I needed to add a build script with the line println!("cargo:rustc-link-arg=-ldw");. I don’t know if this is something everyone will need to do now, or if it’s just an oddity that my machine picked up sometime in the last year or so.
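For reference, that kind of linker directive lives in a Cargo build script; a minimal build.rs carrying the line might look like the following (assuming nothing else is needed in the script):

```rust
// build.rs - ask rustc to pass -ldw to the linker when building the fuzzer
fn main() {
    println!("cargo:rustc-link-arg=-ldw");
}
```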

First, we’ll build everything using our cargo make build task.

cargo make build

Next, we need to grab the cross-compiled, architecture-specific libqemu that we alluded to earlier.

find ../target | grep


We’re interested in the last one.

cp ../target/debug/build/libafl_qemu-5be2e7fdb1fcf3f3/out/qemu-libafl-bridge/build/ build/

After building everything and copying the wayward .so, we’re left with our build directory looking something like this:

ls -al build/

drwxrwxr-x  2 epi epi     4096 Jan  9 06:00 bin
-rwxrwxr-x  1 epi epi 27712640 Jan  9 05:59 exercise-4
drwxrwxr-x  2 epi epi     4096 Jan  9 06:00 include
drwxrwxr-x  3 epi epi     4096 Jan  9 06:00 lib
-rwxrwxr-x  1 epi epi    82472 Jan  9 05:59
-rwxrwxr-x  1 epi epi 52204808 Jan  9 05:59
drwxrwxr-x  4 epi epi     4096 Jan  8 08:48 share

Commence Fuzzing!

Even with everything built, there’s still one thing we need to cover before we can kick off our fuzzer.

Since we’re hooking some of the syscalls related to filesystem operations, it would behoove us to have an input file that’s reasonable. For instance, the maximum size of our managed memory region is 2**20 or 1048576. Whenever our target calls glibc’s stat, we’d like it to return values that mostly make sense.

Also, the way we’ve structured the persistent loop in the fuzzer means that the target will continually call fopen, read, mmap, fstatat, etc., all against the same file, over and over. Since we’re hooking read, the contents of the file don’t matter, but we can at least provide a file of the same size so that our input won’t be truncated by the target. To do that, we’ll just create a file of an appropriate size.

python3 -c "import pathlib; pathlib.Path('infile').write_bytes(b'\x00' * 2**20)"

ls -al infile
-rw-rw-r-- 1 epi epi 1048576 Jan  9 06:39 infile

Ok, now we’re ready to begin.

LD_LIBRARY_PATH=$(pwd)/build ./build/exercise-4 --cores 1-7 --tokens tiff/tiff.dict -- ./build/exercise-4 -L ../jammy-rootfs ./build/bin/tiffinfo -Dcjrsw infile
spawning on cores: Cores { cmdline: "1-7", ids: [CoreId { id: 1 }, CoreId { id: 2 }, CoreId { id: 3 }, CoreId { id: 4 }, CoreId { id: 5 }, CoreId { id: 6 }, CoreId { id: 7 }] }
child spawned and bound to core 1
child spawned and bound to core 2
child spawned and bound to core 3
child spawned and bound to core 4
child spawned and bound to core 5
child spawned and bound to core 6
child spawned and bound to core 7
[Stats       #3]  (GLOBAL) run time: 0h-0m-30s, clients: 8, corpus: 14, objectives: 0, executions: 3455573, exec/sec: 115.2k
                  (CLIENT) corpus: 2, objectives: 0, executions: 694845, exec/sec: 23.15k, edges: 160/220 (72%)
[Stats       #4]  (GLOBAL) run time: 0h-0m-30s, clients: 8, corpus: 14, objectives: 0, executions: 3798789, exec/sec: 126.6k
                  (CLIENT) corpus: 2, objectives: 0, executions: 689365, exec/sec: 22.97k, edges: 160/220 (72%)
[Stats       #5]  (GLOBAL) run time: 0h-0m-30s, clients: 8, corpus: 14, objectives: 0, executions: 4142467, exec/sec: 138.0k
                  (CLIENT) corpus: 2, objectives: 0, executions: 685901, exec/sec: 22.86k, edges: 160/220 (72%)
[Stats       #6]  (GLOBAL) run time: 0h-0m-30s, clients: 8, corpus: 14, objectives: 0, executions: 4486438, exec/sec: 149.3k
                  (CLIENT) corpus: 2, objectives: 0, executions: 689003, exec/sec: 22.95k, edges: 160/220 (72%)
[Stats       #7]  (GLOBAL) run time: 0h-0m-30s, clients: 8, corpus: 14, objectives: 0, executions: 4827201, exec/sec: 160.7k
                  (CLIENT) corpus: 2, objectives: 0, executions: 683297, exec/sec: 22.77k, edges: 160/220 (72%)
[Stats       #1]  (GLOBAL) run time: 0h-0m-45s, clients: 8, corpus: 14, objectives: 0, executions: 5171249, exec/sec: 115.0k
                  (CLIENT) corpus: 2, objectives: 0, executions: 1025229, exec/sec: 22.78k, edges: 160/220 (72%)
[Stats       #2]  (GLOBAL) run time: 0h-0m-45s, clients: 8, corpus: 14, objectives: 0, executions: 5522303, exec/sec: 122.8k
                  (CLIENT) corpus: 2, objectives: 0, executions: 1054663, exec/sec: 23.44k, edges: 160/220 (72%)
[Stats       #3]  (GLOBAL) run time: 0h-0m-45s, clients: 8, corpus: 14, objectives: 0, executions: 5874553, exec/sec: 130.5k
                  (CLIENT) corpus: 2, objectives: 0, executions: 1047095, exec/sec: 23.26k, edges: 160/220 (72%)
[Stats       #4]  (GLOBAL) run time: 0h-0m-45s, clients: 8, corpus: 14, objectives: 0, executions: 6218737, exec/sec: 138.2k
                  (CLIENT) corpus: 2, objectives: 0, executions: 1033549, exec/sec: 22.96k, edges: 160/220 (72%)
[Stats       #5]  (GLOBAL) run time: 0h-0m-45s, clients: 8, corpus: 14, objectives: 0, executions: 6560568, exec/sec: 145.7k
                  (CLIENT) corpus: 2, objectives: 0, executions: 1027732, exec/sec: 22.83k, edges: 160/220 (72%)


After letting the fuzzer churn a while, we can confirm that we’ve found a bug. Normally, the crash output would be in our log or printed to the terminal. Unfortunately, the target spews a ton of warning/error output during fuzzing, so I chose to send all that junk to /dev/null and cheat a bit on confirmation. I just compiled the target as x86_64 with ASAN… ¯\_(ツ)_/¯

TIFFReadDirectoryCheckOrder: Warning, Invalid TIFF directory; tags are not sorted in ascending order.                                      
TIFFReadDirectory: Warning, Unknown field with tag 28 (0x1c) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 347 (0x15b) encountered.
TIFFReadDirectory: Warning, Wrong "StripByteCounts" field, ignoring and calculating from imagelength.
TIFF Directory at offset 0x67f4 (26612)                                                          
  Image Width: 512 Image Length: 384                                                             
  Tile Width: 128 Tile Length: 128                                                               
  Bits/Sample: 8                                                                                 
  Sample Format: unsigned integer                                                                
  Compression Scheme: None                                                                       
  Photometric Interpretation: YCbCr                                                              
  YCbCr Subsampling: 2, 2                                                                        
  Orientation: row 0 top, col 0 lhs                                                              
  Samples/Pixel: 3                                                                               
  Min Sample Value: 0      
  Max Sample Value: 255                       
  Planar Configuration: single image plane
  Reference Black/White:     
     0:     0   255          
     1:   128   255          
     2:   128   255          
==1092821==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6020000000b1 at pc 0x0000002afe32 bp 0x7ffdaf1de3d0 sp 0x7ffdaf1ddb90
READ of size 2 at 0x6020000000b1 thread T0
    #0 0x2afe31 in fputs (/home/epi/PycharmProjects/fuzzing-101-solutions/exercise-4/tiffinfo-x86+0x2afe31)
    #1 0x472eff in _TIFFPrintField /home/epi/PycharmProjects/fuzzing-101-solutions/exercise-4/tiff/libtiff/tif_print.c:127:4
    #2 0x472eff in TIFFPrintDirectory /home/epi/PycharmProjects/fuzzing-101-solutions/exercise-4/tiff/libtiff/tif_print.c:641:5
    #3 0x347b2f in tiffinfo /home/epi/PycharmProjects/fuzzing-101-solutions/exercise-4/tiff/tools/tiffinfo.c:449:2
    #4 0x3451fa in main /home/epi/PycharmProjects/fuzzing-101-solutions/exercise-4/tiff/tools/tiffinfo.c:152:6
    #5 0x7f0738ba0564 in __libc_start_main csu/../csu/libc-start.c:332:16
    #6 0x29669d in _start (/home/epi/PycharmProjects/fuzzing-101-solutions/exercise-4/tiffinfo-x86+0x29669d)


There we have it; we learned a lot about libafl_qemu, fuzzed an aarch64 target, wrote a cli parsing crate, and probably some other stuff. Go us! In the next post we’ll solve Exercise 5. I’m leaning toward exploring the python bindings next. If you have a strong preference for the next focus area, drop me a message (unless you’re that guy that asks for windows stuff… you know who you are 🙃)

Additional Resources

  1. Fuzzing101
  2. LibAFL
  3. fuzzing-101-solutions repository
  4. libtiff
  5. QEMU
