Extracting filesystems from guest images, reconstructing guest images from filesystems, part 2

As discussed previously, you can use libguestfs to extract raw filesystem content from a disk image.

The second part of our LVM to non-LVM converter involves a utility called virt-implode, which takes the filesystem images and creates a new disk image with partitions containing the image content. It’s best to run this program with LIBGUESTFS_TRACE=1 set so you can easily see what it’s doing.

$ LIBGUESTFS_TRACE=1 ./virt-implode.pl \
    sda1.img VolGroup00_LogVol00.img VolGroup00_LogVol01.img \
    output.img
libguestfs: trace: add_drive "output.img" "format:raw"
libguestfs: trace: add_drive = 0
libguestfs: trace: launch
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: trace: launch = 0
libguestfs: trace: part_init "/dev/sda" "gpt"
libguestfs: trace: part_init = 0
libguestfs: trace: part_add "/dev/sda" "p" 2048 210943
libguestfs: trace: part_add = 0
libguestfs: trace: part_add "/dev/sda" "p" 210944 14694399
libguestfs: trace: part_add = 0
libguestfs: trace: part_add "/dev/sda" "p" 14694400 16726015
libguestfs: trace: part_add = 0
libguestfs: trace: upload "sda1.img" "/dev/sda1"
libguestfs: trace: upload = 0
libguestfs: trace: upload "VolGroup00_LogVol00.img" "/dev/sda2"
libguestfs: trace: upload = 0
libguestfs: trace: upload "VolGroup00_LogVol01.img" "/dev/sda3"
libguestfs: trace: upload = 0
libguestfs: trace: shutdown
libguestfs: trace: internal_autosync
libguestfs: trace: internal_autosync = 0
libguestfs: trace: shutdown = 0
libguestfs: trace: close

Use virt-filesystems to examine the result:

$ virt-filesystems -a output.img --all --long -h
Name       Type        VFS   Label  MBR  Size  Parent
/dev/sda1  filesystem  ext3  /boot  -    102M  -
/dev/sda2  filesystem  ext3  -      -    6.9G  -
/dev/sda3  filesystem  swap  -      -    992M  -
/dev/sda1  partition   -     -      -    102M  /dev/sda
/dev/sda2  partition   -     -      -    6.9G  /dev/sda
/dev/sda3  partition   -     -      -    992M  /dev/sda
/dev/sda   device      -     -      -    8.0G  -
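The part_add boundaries in the trace above can be reproduced by hand. Here is a small sketch of the same layout arithmetic (in Python rather than Perl, purely for illustration; the input sizes of 102 MiB, 7072 MiB and 992 MiB are inferred from the trace, assuming each image is a whole number of MiB):

```python
SECTOR = 512
MiB = 1024 * 1024

def round_up(n, r):
    # Round n up to the next multiple of r (r must be a power of two).
    return (n + r - 1) & ~(r - 1)

def layout(file_sizes):
    """Return (start_sector, end_sector) pairs, one per filesystem image.
    The first partition starts at 1 MiB; each one after it starts on the
    next 1 MiB boundary past the end of the previous image."""
    parts = []
    offset = MiB
    for size in file_sizes:
        start = offset // SECTOR
        offset = round_up(offset + size, MiB)
        parts.append((start, offset // SECTOR - 1))
    return parts

print(layout([102 * MiB, 7072 * MiB, 992 * MiB]))
# → [(2048, 210943), (210944, 14694399), (14694400, 16726015)]
```

These three pairs match the three part_add calls in the trace exactly.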

Here is the virt-implode script:

#!/usr/bin/perl -w

use strict;
use Sys::Guestfs;

die "$0 *.img output.img" unless @ARGV >= 2;

my $output = pop @ARGV;

# Work out how we'll partition the output image.
# Assumes that the input filesystem images are
# in raw format, and therefore size == file size.
my $fs;
my $size = 1024*1024;
my @start_sectors = ();
foreach $fs (@ARGV) {
    push @start_sectors, $size / 512;
    $size += (-s $fs);
    # Round up to next 1MB boundary.
    $size = round_up ($size, 1024*1024);
}
# Record the end of the last partition, and leave
# room at the end of the disk for the secondary GPT.
push @start_sectors, $size / 512;
$size += 1024 * 1024;

open FILE, ">$output" or die "$output: $!";
truncate FILE, $size or die "$output: truncate: $!";
close FILE or die "$output: close: $!";

my $g = Sys::Guestfs->new ();
$g->add_drive_opts ($output, format => "raw");
$g->launch ();

# Create one partition per input image.
$g->part_init ("/dev/sda", "gpt");
for (my $i = 0; $i < @start_sectors-1; ++$i) {
    $g->part_add ("/dev/sda", "p",
                  $start_sectors[$i], $start_sectors[$i+1]-1);
}

# Copy each filesystem image into its partition.
my $i = 1;
foreach $fs (@ARGV) {
    $g->upload ($fs, "/dev/sda$i");
    ++$i;
}

$g->shutdown ();

sub round_up
{
    my $n = shift;
    my $r = shift;

    # Only valid when $r is a power of two.
    $n += $r-1;
    $n &= ~($r-1);
    return $n;
}
Next time I’ll see if I can get this guest to boot …



Tip: Use libguestfs on VMware ESX guests

You can use libguestfs, guestfish and the virt tools on VMware ESX guests quite easily. However, it’s not obvious how to do it, so this post explains the steps.

You will need:

  • libguestfs tools installed on a Linux machine
  • sshfs installed on the same Linux machine
  • ssh access to the VMware ESX storage (find the root password from the administrator)
  • the name of the guest and the name of the storage volume that the guest is stored on

The guest must be shut down (more on this later).

First of all, make sure you are able to ssh as root to the VMware ESX storage. It will look something like this:

$ ssh root@vmware
root@vmware's password: ****
Last login: Wed May  4 20:47:50 2011 from [...]
[root@vmware ~]# ls -l /vmfs/
total 1
drwxr-xr-x 1 root root 512 May 10 09:22 devices
drwxr-xr-x 1 root root 512 May 10 09:22 volumes

Now you should create a temporary mount point, and mount /vmfs from the VMware ESX storage server using sshfs. The command is quite simple and you don’t need to be root on the Linux side:

$ mkdir /tmp/vmfs
$ sshfs root@vmware:/vmfs /tmp/vmfs
root@vmware's password: ****

In another window you can navigate to the guest’s directory. For example, if the guest is called “test” and lives on volume “Storage1”:

$ cd /tmp/vmfs/volumes/Storage1/test
$ ls -l
total 1718720
-rw------- 1 root root 8589934592 May 10 09:48 test-flat.vmdk
-rw------- 1 root root       8684 May 10 09:37 test.nvram
-rw------- 1 root root        469 Apr  4 08:16 test.vmdk
-rw------- 1 root root          0 May 11  2010 test.vmsd
-rwxr-xr-x 1 root root       2666 May 10 09:37 test.vmx
-rw------- 1 root root        259 May 11  2010 test.vmxf
-rw-r--r-- 1 root root      53966 May 11  2010 vmware-1.log
-rw-r--r-- 1 root root      78771 May 11  2010 vmware-2.log
-rw-r--r-- 1 root root      56483 Apr  4 08:15 vmware-3.log
-rw-r--r-- 1 root root      56305 May 10 09:37 vmware.log

The critical file is guestname-flat.vmdk which is the flat disk image. You can just open this for read or write using guestfish, virt-df, virt-filesystems or other libguestfs tools or programs.

For example:

$ guestfish --rw -i -a test-flat.vmdk

Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

Operating system: Red Hat Enterprise Linux Server release 5.5 (Tikanga)
/dev/VolGroup00/LogVol00 mounted on /
/dev/vda1 mounted on /boot

><fs> touch /tmp/hello
><fs> ll /tmp
total 20
drwxrwxrwt.  3 root root 4096 May 10 14:48 .
drwxr-xr-x. 24 root root 4096 May 10 14:36 ..
drwxrwxrwt   2 root root 4096 Apr  4 13:16 .ICE-unix
-rw-r--r--   1 root root    0 May 10 14:48 hello

Notice that guestfish detected the guest operating system and allows you to edit the disk.

$ virt-filesystems -a test-flat.vmdk --all --long -h
Name                     Type       VFS  Label Size Parent
/dev/sda1                filesystem ext3 /boot 102M -
/dev/VolGroup00/LogVol00 filesystem ext3 -     7.1G -
/dev/VolGroup00/LogVol01 filesystem swap -     768M -
/dev/VolGroup00/LogVol00 lv         -    -     7.1G /dev/VolGroup00
/dev/VolGroup00/LogVol01 lv         -    -     768M /dev/VolGroup00
/dev/VolGroup00          vg         -    -     7.9G -
/dev/sda2                pv         -    -     7.9G -
/dev/sda1                partition  -    -     102M /dev/sda
/dev/sda2                partition  -    -     7.9G /dev/sda
/dev/sda                 device     -    -     8.0G -
$ virt-df -a test-flat.vmdk -h
Filesystem                                Size       Used  Available  Use%
test-flat.vmdk:/dev/sda1                   99M        12M        81M   13%
test-flat.vmdk:/dev/VolGroup00/LogVol00   6.9G       1.1G       5.5G   16%

With libguestfs we usually allow you to read guests which are running. The results might be inconsistent at times, but it generally works. However, VMware itself doesn’t allow running guests to be read. If the guest is running you can see that VMware prevents access:

# file test-flat.vmdk
test-flat.vmdk: writable, regular file, no read permission

Whereas when the same guest is shut down, reads (and writes) are allowed:

# file test-flat.vmdk
test-flat.vmdk: x86 boot sector; partition 1: ID=0x83, active, starthead 1, startsector 63, 208782 sectors; partition 2: ID=0x8e, starthead 0, startsector 208845, 16563015 sectors, code offset 0x48

This is a limitation of VMware and nothing to do with libguestfs.

A note on performance: I run this from my home to a VMware server which is a third of the way around the planet over plain 2Mbps ADSL. It’s noticeably slower than accessing local disk images, but still very usable. sshfs appears to be very efficiently implemented. It is far faster and more convenient than copying the whole disk image around.
