Tag Archives: ceph

Use guestfish, virt tools with remote disks

New in libguestfs ≥ 1.21.30 is the ability to use guestfish and some of the virt tools with remote disks.

Currently you can use remote disks over NBD, GlusterFS, Ceph, Sheepdog and (recently upstream) SSH.

For this example I’ll use SSH because it needs no setup, although this requires absolutely the latest qemu and libguestfs (both from git).

Since we don’t have libvirt support for ssh yet, this only works with the direct backend:

$ export LIBGUESTFS_BACKEND=direct

I can use a ssh:// URI to add disks with guestfish, guestmount and most of the virt tools. For example:

$ virt-rescue -a ssh://localhost/tmp/f17x64.img
[... lots of boot messages ...]
Welcome to virt-rescue, the libguestfs rescue shell.

Note: The contents of / are the rescue appliance.
You have to mount the guest's partitions under /sysroot
before you can examine them.

><rescue> mount /dev/vg_f17x64/lv_root /sysroot
><rescue> cat /sysroot/etc/redhat-release
Fedora release 17 (Beefy Miracle)

Apart from being a tiny bit slower, it just works as if the disk were local:

$ virt-df -a ssh://localhost/tmp/f17x64.img
Filesystem                           1K-blocks       Used  Available  Use%
f17x64.img:/dev/sda1                    487652      63738     398314   14%
f17x64.img:/dev/vg_f17x64/lv_root     28316680    4285576   22586036   16%
$ guestmount -a ssh://localhost/tmp/f17x64.img -i /tmp/mnt
$ ls /tmp/mnt
bin   dev  home  lib64       media  opt   root  sbin  sys  usr
boot  etc  lib   lost+found  mnt    proc  run   srv   tmp  var
$ cat /tmp/mnt/etc/redhat-release
Fedora release 17 (Beefy Miracle)
$ guestunmount /tmp/mnt
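
The ssh:// URI can also carry a username and a non-standard port for reaching a genuinely remote machine. A sketch (the hostname, user and port here are hypothetical, not from the examples above):

```shell
$ virt-df -a ssh://user@remote.example.com:2222/var/tmp/f17x64.img
```

The path after the host is the absolute path of the disk image on the remote machine.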

Accessing NBD disks using libguestfs

It’s always been possible, but clumsy, to access Network Block Device (NBD) disks from libguestfs, but starting in libguestfs 1.22 we hope to make this (and Gluster, Ceph and Sheepdog access) much simpler.

The first change is upstream in libguestfs 1.21.21. You can add an NBD disk directly.

To show this using guestfish, I’ll start an NBD server. This could be started on another machine, but to make things simple I’ll start this server on the same machine:

$ qemu-nbd f18x64.img -t

f18x64.img is the disk image that I want to export. The -t option makes the qemu-nbd server persistent (ie. it doesn’t just exit after serving the first client).

Now we can connect to this server using guestfish as follows:

$ guestfish

Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

><fs> add-drive "" format:raw protocol:nbd server:localhost
><fs> run

The empty "" (quotes) are the export name. Since qemu-nbd doesn’t support export names, we can leave this empty. The main change is to specify the protocol (nbd) and the server that libguestfs should connect to (localhost, but a remote host would also work). I haven’t specified a port number here because both the client and server are using the standard NBD port (10809), but you could use server:localhost:NNN to use a different port number if needed.

Ordinary guestfish commands just work:

><fs> list-filesystems
/dev/sda1: ext4
/dev/fedora/root: ext4
/dev/fedora/swap: swap
><fs> inspect-os
/dev/fedora/root
><fs> inspect-get-product-name /dev/fedora/root
Fedora release 18 (Spherical Cow)

The next steps are to:

  1. Add support for more technologies. At least: Gluster, Ceph, Sheepdog and iSCSI, since those are all supported by qemu so we can easily leverage that support in libguestfs.
  2. Change the guestfish -a option to make adding remote drives possible from the command line.

The obvious [but not yet implemented] way to change the -a option is to allow a URI to be specified. For example:

$ guestfish -a nbd://localhost/exportname

where the elements of the URI like protocol, transport, server, port number and export name translate naturally into parameters of the add-drive API.
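
As a sketch of the proposed mapping (hypothetical, since the -a URI syntax is not yet implemented at the time of writing), the URI form would be shorthand for the equivalent add-drive call:

```shell
$ guestfish -a nbd://example.com:10809/exportname

# ...would be roughly equivalent to:
$ guestfish
><fs> add-drive "exportname" protocol:nbd server:example.com:10809
```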


Accessing Ceph (rbd), sheepdog, etc using libguestfs

Thanks to infernix who contributed this tip on how to use libguestfs to access Ceph (and in theory, sheepdog, gluster, iscsi and more) devices.

If you apply this small patch to libguestfs you can use these distributed filesystems straight away by doing:

$ guestfish
><fs> set-attach-method appliance
><fs> add-drive /dev/null
><fs> config -set drive.hd0.file=rbd:pool/volume
><fs> run

… followed by usual guestfish commands.

This is a temporary hack, until we properly model Ceph (etc) through the libguestfs stable API. Nevertheless it works as follows:

  1. The add-drive /dev/null adds a drive, known to libguestfs.
  2. Implicitly this means that libguestfs adds a -drive option when it runs qemu.
  3. The custom qemu -set drive.hd0.file=... parameter modifies the preceding -drive option added by libguestfs so that the file is changed from /dev/null to whatever you want. In this case, to a Ceph rbd:... path.
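
Put together, the qemu command line that libguestfs ends up running looks roughly like this (heavily simplified; the real invocation has many more options, and the drive flags shown are illustrative):

```shell
qemu-kvm \
    ... \
    -drive file=/dev/null,format=raw,id=hd0,if=virtio \
    -set drive.hd0.file=rbd:pool/volume \
    ...
```

The -set option reaches back and overrides the file property of the drive with id hd0, so qemu opens the rbd: path instead of /dev/null.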
