Thanks to infernix, who contributed this tip on how to use libguestfs to access Ceph (and, in theory, sheepdog, gluster, iSCSI and more) devices.
If you apply this small patch to libguestfs you can use these distributed filesystems straight away by doing:
$ guestfish
><fs> set-attach-method appliance
><fs> add-drive /dev/null
><fs> config -set drive.hd0.file=rbd:pool/volume
><fs> run
… followed by usual guestfish commands.
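For example, a typical follow-up session might look like this (the filesystem inspected here is hypothetical; adjust device and mount point to your image):

```
><fs> list-filesystems
><fs> mount /dev/sda1 /
><fs> ll /
```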
This is a temporary hack, until we properly model Ceph (etc) through the libguestfs stable API. Nevertheless it works as follows:
- add-drive /dev/null adds a drive, known to libguestfs. Implicitly this means that libguestfs adds a -drive option when it runs qemu.
- The custom qemu -set drive.hd0.file=... parameter modifies the preceding -drive option added by libguestfs so that the file is changed from /dev/null to whatever you want. In this case, to a Ceph volume.
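Conceptually, the resulting qemu invocation ends up looking something like the sketch below. This is illustrative only: the exact options, drive id and ordering that libguestfs generates may differ.

```
qemu-kvm \
  -drive file=/dev/null,if=virtio,id=hd0 \   # the -drive option libguestfs adds for add-drive /dev/null
  -set drive.hd0.file=rbd:pool/volume        # our override: qemu replaces /dev/null with the rbd volume
```

The key point is that qemu's -set option rewrites a property of a previously defined object, so the override must come after the -drive it modifies.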
3 responses to “Accessing Ceph (rbd), sheepdog, etc using libguestfs”
A very interesting and helpful article. I was able to retrieve a file from an rbd image using the above method.
Would you please let me know how to do this in a single command line? I am getting the following error:
guestfish add /dev/null : config -set drive.hd0.file=rbd:ssd-clonetest-rule5/fullclone.img : run : list-partitions
guestfish: invalid option -- 's'
Try `guestfish --help' for more information.
Is there anything I'm doing wrong here?
I am using CentOS 6.4, kernel 2.6.32-358.6.2.el6.x86_64.
Please post questions on the libguestfs mailing list:
The answer to this is complex …
Thanks… I’ll post it there. 🙂