Odd/scary RHEL 5 bug

Yesterday my colleague gave me a RHEL 5 VM disk image which failed to boot after conversion with the latest virt-v2v.  Because it booted before conversion but not afterwards, fingers naturally pointed at something we were doing during the conversion process.  That's not an unreasonable suspicion, since v2v conversion is highly complex.

The “GRUB _” prompt after conversion

The thing is that we don’t reinstall grub during conversion, but we do edit a few grub configuration files. Could editing grub configuration cause this error?

I wanted to understand what the grub-legacy “GRUB _” prompt means. There are lots and lots and lots of people reporting this bug (eg), but as is often the case I could find no coherent explanation anywhere of what grub-legacy is actually doing when it gets into this state. Lots of the blind leading the blind, and random suggestions about how people had rescued such machines (probably coincidentally), but no hard data anywhere. So I had to go back to first principles and debug qemu to find out what happens just before the message is printed.

Tip: To breakpoint qemu when the Master Boot Record (first sector) is loaded, do:

target remote tcp::1234   # attach gdb to qemu's gdbstub (start qemu with -s -S)
set architecture i8086    # the BIOS hands over control in 16-bit real mode
b *0x7c00                 # the BIOS loads the MBR at linear address 0x7c00

After an evening of debugging, I found that it’s the first sector (known in grub-legacy as “stage 1”) which prints the GRUB<space> message. (The same happens to be true of grub2.) The stage 1 boot sector has, written into it at a fixed offset, the location of the /boot/grub/stage2 file, i.e. the literal disk start sector and length of this file. It sends BIOS int $0x13 commands to load those sectors into memory at address 0x8000, and jumps there to start the stage 2 of grub. The boot sector is 512 bytes, so there’s no luxury to do anything except print 5 characters. It’s only after the stage2 file has been loaded that all the nice graphical stuff happens.
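
You can read the stage2 location straight out of the boot sector. Here is a minimal Python sketch; the offsets are what I believe grub-legacy 0.97’s stage1.S uses (stage2_address at 0x42, stage2_sector at 0x44, stage2_segment at 0x48) — treat them as assumptions if your grub version differs:

```python
import struct

def parse_stage1(sector: bytes) -> dict:
    """Extract the hardcoded stage2 blocklist fields from a grub-legacy
    stage1 boot sector.  Offsets assumed from grub 0.97's stage1.S:
      0x42  stage2_address  (2 bytes LE, load address, normally 0x8000)
      0x44  stage2_sector   (4 bytes LE, start LBA of /boot/grub/stage2)
      0x48  stage2_segment  (2 bytes LE, real-mode segment, normally 0x800)
    """
    assert len(sector) == 512
    address, first_sector, segment = struct.unpack_from("<HIH", sector, 0x42)
    return {"address": address, "first_sector": first_sector,
            "segment": segment}
```

Feeding it the first 512 bytes of a disk image (`parse_stage1(open("disk.img", "rb").read(512))`) shows you exactly which sector stage 1 will try to load.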

Unfortunately in the image after conversion, the stage2 data loaded into memory was all zeroes, and that’s why the boot fails and you see GRUB<space><cursor> and then the VM crashes.

The mystery was how conversion could be changing the location of the /boot/grub/stage2 file so that it could no longer be loaded at the fixed offset encoded in the boot sector.

This morning it dawned on me what was really happening …

The new virt-v2v tries very hard to avoid copying any unused data from the guest, just to save time. No point wasting time copying deleted files and empty space. This makes virt-v2v very fast, but it has an unusual side-effect: if a file is deleted on the source, the contents of the file are not copied over, and the corresponding blocks on the target are zeroes.

It turns out that if you take the source disk image and simply zero all of the empty space in /boot, then the source doesn’t boot either, even though virt-v2v is not involved. Yikes … this could be a bug in RHEL 5. Grub is generating a bootloader that references a deleted file.

This is where we are right now with this bug. It appears that a valid sequence of steps can make a RHEL 5 bootloader that references a deleted file, but still works as long as you never overwrite the sectors used by that file.

I have written a simple test script that you can download to find out if your RHEL ≤ 6 virtual machines could be affected by this problem. I’m interested if anyone else sees this. I ran the test over a selection of RHEL 3 – 5 guests and could not find any which had the problem, but my collection is not very extensive, and there are likely to be common patterns in how they were created.

The next steps will likely be to test a lot more RHEL 5 installs to see if this bug is really common or a strange one-off. I will also probably add a workaround to virt-v2v so it doesn’t trim the boot partition — the reason is that we cannot go back and fix old RHEL 5 installs, we have to work with them if they are broken. If it turns out to be a real bug in RHEL 5 then we will need to issue a fix for that.




3 responses to “Odd/scary RHEL 5 bug”

  1. Matt

    Here’s a long shot. Possibly at some time the original media was found to be corrupt, and the original hard drive had been repaired & surface ‘remapped’. Later, if the transcription method used was particularly low level, perhaps the VM received the data without the map? Naw. Long shot 2: I recall at some point Xen stopped halting initial boot for a subset of images. The ones I am thinking of were a subset of /Windows/ images, (stay with me) but later a specific exception workaround for a stage1/stage2 failure was added (around 2010/2011?) to look elsewhere. Obviously, we’re talking Linux. But nonetheless, consider that a “special-case windows-warped linux stage2” might succeed on the previous hypervisor (due to exception code really meant for the Windows workaround) and fail on kvm. A way to test: duplicate the image, overwrite from the thrice-removed offset (with something else that works), and see if it works on the old hypervisor.

  2. Just trying to walk through how this could happen.

    From memory, grub packages don’t put the stage 2 files in /boot; invoking `grub-install` does. However, grub-install also rewrites the MBR. There shouldn’t be a situation where they don’t match unless grub-install failed part-way through.

    I haven’t seen grub referencing a deleted stage2 while a valid stage2 actually exists on disk elsewhere. Unless grub-install was targeted at a partition, removable disk, or other device when updated.

  3. Robert

    GRUB can do two things that don’t involve normal filesystem-referenced files: embedded configurations and using blocklists for stage2 (or other files). Both are relevant when working with devices that may not have a (grub-supported) filesystem, and both work ok even if there is a filesystem (if non-filesystem-referenced blocks are left alone).

    On a system where stage1.5 can’t be used, stage 2 must be loaded directly by blocklist. Even if 1.5 can be used, a grub-unsupported filesystem can still be booted from; it just requires all the grub files, the kernel, and the initrd to be loaded by blocklist.

    This was once common in server configurations, back when drives were small enough to be fully BIOS-addressable in boot mode. It had a reputation for being more stable, so the practice probably persisted past its justification.
