Fuzz-testing libguestfs inspection code

There are a lot of security issues in dealing with untrusted disk images, especially since, for historical reasons, a lot of the code used to parse filesystems sits in the kernel. Libguestfs avoids these by wrapping the kernel code inside a VM (and that VM inside an sVirt container if you’re using Fedora or RHEL).

However, the library side of things could still be vulnerable, especially in complicated operations like inspection. Last week we found several vulnerabilities in inspection which could allow an untrusted guest to perform a denial-of-service attack on the host.

The first vulnerability was identified by Coverity. The second was found by Olaf Hering by looking at similar code paths.

This made me wonder if we could find more inspection bugs semi-automatically. To do this I’ve written an inspection fuzz tester.

The idea is we run inspection on an empty disk image. Normally this wouldn’t find any operating systems. But we intercept certain libguestfs calls (which happen as a side-effect of inspection) and use them to create fake operating system files on the fly.

To give you an example: inspection might look for a file called /etc/redhat-release and then try to parse it. To do this it will first test whether the file exists (guestfs_is_file ("/etc/redhat-release")) and, if it does, read it. In the empty disk this file won’t exist, but we capture the is_file call, randomly create a file, and then see what happens when inspection tries to parse it.

Libguestfs has a trace mechanism, but if we decided to do this sort of thing regularly we’d probably want a cleaner way to find the arguments of a method call, and perhaps even to replace its return value.

The result is a fuzz tester which now runs as part of the ordinary test suite.

I also ran many tens of thousands of iterations over the weekend. The test found Olaf Hering’s bug, which is encouraging, but it didn’t find any other bugs, which means there is room for refinement of the test. In particular I think we could push more malformed registry hives at the inspection code to see what it does.

hivex 1.2.5 released

The latest version of hivex, the library for extracting and modifying Windows Registry hive files, has been released. You can get the source from here.

I spent a lot of time examining real hive files from Windows machines and running the library under the awesome valgrind tool, and found one or two places where a corrupt hive file could cause hivex to read uninitialized memory. It’s not clear to me if these are security issues — I think they are not — but everyone is advised to upgrade to this version anyway.

hivex would be a great candidate for fuzz testing if anyone wants to try that.

Half-baked ideas: feedback-directed fuzz testing of filesystems

For more half-baked ideas, see my ideas tag.

Fuzz-testing (a.k.a. random testing) is an automatic method of testing where you feed in random data and try to make the program crash.

You might, for example, write random bytes to a disk, then ask Linux to mount it. This is not very sophisticated: Linux is unlikely to find a filesystem signature in pure random bytes, so the test will almost always fail to exercise anything. A better approach is to take an existing disk image and write random bits to parts of it. This is what Steve Grubb’s fsfuzzer does, and although Steve found several bugs with it, it’s still not the state of the art.

Suppose we think of a filesystem as just an array of integers:

4, 5, 13, 0, 2, ...

The code in the kernel to read this filesystem might start off:

if ((fs[0] & 3) != 0) {
  printk ("invalid filesystem");
  return -EINVAL;
}
If you generate random filesystems, then only 1 in 4 filesystems will get past this first test in the code, so only 1 in 4 of your random tests is testing anything beyond the first statement.

A better approach is to use feedback from the kernel code to evolve your fuzz tests. You score each fuzz test based on how far into the kernel code it gets, then you use that score to evolve your tests using standard genetic algorithm techniques. The idea is that your fuzz tests evolve to test more and more of the kernel code you are interested in testing, instead of just randomly falling down at the first hurdle.

(This technique is well-known in research as feedback-directed fuzz testing, feedback-directed random testing, or evolutionary fuzz testing. As far as I can tell no one is using it on Linux.)

Here’s the half-baked idea: Use systemtap probes as a way to score the fuzz-tests.

We write a systemtap module which inserts probes at every possible point in the file we are interested in testing, say, fs/minix/*.c. These probes just print a simple “I am here” type message.

When we run our fuzz test, we score it according to how much output it produces (by measuring “dmesg” before and after the test). Hitting probe points scores 1. Causing a kernel oops scores a lot more. You’d probably want to do this in a VM, rather than on your host kernel …

Starting with real but small filesystems, we randomly fuzz them to generate our initial test cases, then evolve those according to how much they score. The test cases are individually tested using the same method as Steve’s fsfuzzer: mounting the filesystem and running through a few simple system calls (reading the directory, reading files, reading extended attributes, and so on).

Test cases which hit the equivalent of the 1-in-4 problem above are quickly evolved out, and the test cases we are left with hopefully explore and test more of the code.
