Half-baked ideas: verify the integrity of VMs

For more half-baked ideas, see my ideas tag.

This LWN article on OSSEC reminded me of an idea I had. We need a verify tool that can check that your VMs are not corrupted and don’t contain a rootkit. You can currently run a simple rpm -V command or one of the tools listed in that LWN article, but the problem is that you have to run those commands inside the VM, thus relying on the VM itself not having been compromised. (You can also reboot the VM into a known-good state, e.g. from a rescue ISO, but then you get downtime.)

Obviously the answer is to examine the VM from the host, using libguestfs to grab the checksums directly from the filesystem.
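A minimal sketch of what that looks like with guestfish (the command-line shell that ships with libguestfs); the image path is an assumption, and the block falls back gracefully on hosts that don’t have the tools or the image:

```shell
#!/bin/sh
# Hypothetical disk image path -- substitute your own guest.
img=/var/lib/libvirt/images/guest.img

if command -v guestfish >/dev/null 2>&1 && [ -r "$img" ]; then
    # --ro: never write to the image; -i: inspect the image and mount the
    # guest's filesystems automatically.  "checksum sha256 <path>" computes
    # the digest of a file inside the guest, without booting it.
    sum=$(guestfish --ro -a "$img" -i checksum sha256 /sbin/ldconfig)
else
    sum=unavailable   # guestfish or the image is missing on this host
fi
echo "$sum"
```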

You can get the checksums easily this way, but what do you compare them against and how do you know the checksums are good?

You would need to ask the distribution for a list of known-good checksums for the packages they publish. In fact you can do this reasonably easily. That information is available in the raw RPMs, or Red Hat’s RHN, and I’m quite sure you can get it from the Debian repos too. Windows? I don’t know specifically, but I guess either Microsoft publish this or you could derive it in some way.
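For RPM-based distros, the per-file digests can be read straight out of a package file with rpm -qp --dump; a guarded sketch (the package filename is hypothetical, and the block degrades gracefully where rpm or the package isn’t present):

```shell
#!/bin/sh
# Hypothetical local copy of the signed package from the distro.
pkg=glibc-2.11.1-4.x86_64.rpm

if command -v rpm >/dev/null 2>&1 && [ -r "$pkg" ]; then
    # --dump fields per file: path size mtime digest mode owner group
    # isconfig isdoc rdev symlink.  Emit "digest path" pairs.
    out=$(rpm -qp --dump "$pkg" | awk '{ print $4, $1 }')
else
    out="rpm or package not available on this host"
fi
echo "$out"
```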

So now if you are presented with some file from the VM and its checksum, like:

e2ed3c7d6d429716173fbd2d831d6e2855f1d20209da1238f75d1892a3074af5 /sbin/ldconfig

in theory we can verify this file was distributed in the signed package glibc-2.11.1-4.x86_64.rpm built on Thu 18 Mar 2010 04:51:51 PM GMT by a Fedora builder. (Using virt-inspector you can work out that the VM is a Fedora instance).
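The comparison step itself is simple once both sides are in hand. A self-contained sketch, with a throwaway file standing in for the guest’s /sbin/ldconfig and a simulated tampering so the mismatch actually shows up:

```shell
#!/bin/sh
# Stand-in file playing the role of the guest's /sbin/ldconfig.
tmp=$(mktemp)
printf 'original binary contents\n' > "$tmp"
expected=$(sha256sum "$tmp" | awk '{print $1}')  # the distro's published digest

# Simulate an attacker replacing the binary on the guest's disk...
printf 'trojaned binary contents\n' > "$tmp"
actual=$(sha256sum "$tmp" | awk '{print $1}')    # digest as read via libguestfs

if [ "$actual" = "$expected" ]; then verdict=OK; else verdict=MODIFIED; fi
echo "$verdict"   # prints "MODIFIED"
rm -f "$tmp"
```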

There’s still a subtle problem with this which I can’t work out. What happens if the attacker doesn’t directly install a rootkit, but instead replaces a binary with a version which has a known vulnerability: say, a known root exploit in a package which the distro had previously shipped and later found to be vulnerable. This would allow the attacker to revisit the machine, acquire root, and run any software they want entirely in memory (libguestfs only sees the disk). To catch that you’d need not just a big database of all the files that distributions have shipped, but a list of files that have since been obsoleted for security reasons.

There’s also a second problem: Although we can enumerate the goodness (all files are files that the distribution has shipped), we don’t know what to do with all the other files. Like user files, configuration files, /tmp, or files that just shouldn’t be there. To detect rootkits amongst those files, it’s starting to look like we’d have to enumerate badness and we don’t want to go there.
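The “enumerate the goodness” pass could at least surface everything that falls outside it: flag any file whose digest is not in the known-good manifest and leave the judgement to a human. A toy sketch, with made-up files standing in for a guest filesystem:

```shell
#!/bin/sh
# Build a toy "guest filesystem" and a manifest of known-good digests,
# then list every file whose digest the manifest doesn't know about.
root=$(mktemp -d)
mkdir -p "$root/bin" "$root/tmp"
printf 'shipped by distro\n'   > "$root/bin/ls"
printf 'dropped by attacker\n' > "$root/tmp/rootkit"

manifest=$(mktemp)
sha256sum "$root/bin/ls" | awk '{print $1}' > "$manifest"  # the distro's list

unknown=$(find "$root" -type f | while read -r f; do
    d=$(sha256sum "$f" | awk '{print $1}')
    grep -qx "$d" "$manifest" || echo "$f"   # not in the manifest: flag it
done)
echo "$unknown"
rm -rf "$root" "$manifest"
```

Only /tmp/rootkit gets flagged here, but on a real guest this list would also contain every legitimate user and configuration file, which is exactly the problem.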


5 responses to “Half-baked ideas: verify the integrity of VMs”

  1. Rik

    I think there’s also the problem of prelinking. You would have to undo the prelinking before creating the checksum so you can verify it. I think the rpm -V inside the VM does this when it notices that prelinking is enabled.

  2. Yaniv

    Windows – files are digitally signed (by an MS certificate). You’ll need to verify their signatures.
    Regretfully, not all files are digitally signed (but at least the OS system files hopefully all are).

    Both Linux and Windows – you’ll need to save the list of your ‘currently-known-safe-versions-of-files’ in order to verify safe files (white-listing, instead of black-listing).
    For Windows, btw – http://www.bit9.com/

  3. Geert

    /sbin/prelink has --sha and --md5 options to create a checksum that is invariant under prelinking, so that could be solved.

    The other problem, about what to do with files that exist but have no checksum, is interesting. Disallowing them all is obviously not going to work. One way that might work is to introduce a concept of “sensitive directories” and “sensitive files”. A file in a sensitive directory (e.g. /etc/profile.d) would need a valid checksum. An example of a sensitive file would be any executable that resolves in root’s $PATH and shadows another file later in root’s $PATH, e.g. a rogue /sbin/ls that overrides /bin/ls. It could be tricky creating an exhaustive list of all sensitive directories and sensitive files, but at least it’s not enumerating badness.
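Geert’s shadowing check, in particular, is mechanical enough to sketch: walk the $PATH directories in priority order and flag any name that appears again in a later directory. Toy temporary directories stand in for /sbin and /bin:

```shell
#!/bin/sh
# Two toy $PATH directories; a rogue "ls" in the higher-priority one
# shadows the real one later in the path.
d1=$(mktemp -d); d2=$(mktemp -d)
printf 'rogue\n' > "$d1/ls"; chmod +x "$d1/ls"
printf 'real\n'  > "$d2/ls"; chmod +x "$d2/ls"
PATH_DIRS="$d1 $d2"   # stand-in for root's $PATH, highest priority first

shadowed=""
seen=$(mktemp -d)     # one marker file per name already resolved
for dir in $PATH_DIRS; do
    for f in "$dir"/*; do
        name=${f##*/}
        if [ -e "$seen/$name" ]; then
            shadowed="$shadowed $name"   # same name seen earlier: suspicious
        else
            : > "$seen/$name"
        fi
    done
done
echo "shadowed:$shadowed"   # prints "shadowed: ls"
rm -rf "$d1" "$d2" "$seen"
```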
