Tag Archives: virt-v2v

virt-v2v, libguestfs and qemu remote drivers in RHEL 7

Upstream qemu can access a variety of remote disks, like NBD and Ceph. This feature is exposed in libguestfs so you can easily mount remote storage.
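
For example, guestfish can open such a disk directly by URI. A minimal sketch (the server names, port and image names here are made up):

guestfish --ro -a nbd://nbd-server.example.com:10809 -i
guestfish --ro -a rbd://ceph-mon.example.com:6789/pool/disk -i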

However in RHEL 7 many of these drivers are disabled, because they’re not stable enough to support. I was asked exactly how this works, and this post is my answer — as it’s not as simple as it sounds.

There are (at least) five separate layers involved:

qemu code: which block drivers are compiled into qemu, and which ones are compiled out completely.
qemu block driver r/o whitelist: a whitelist of drivers that qemu allows you to use read-only.
qemu block driver r/w whitelist: a whitelist of drivers that qemu allows you to use for reading and writing.
libvirt: what libvirt enables (not covered in this discussion).
libguestfs: in RHEL we patch out some qemu remote storage types using a custom patch.

Starting at the bottom of the stack, in RHEL we use ./configure --disable-* flags to disable a few features: Ceph is disabled on !x86_64 and 9pfs is disabled everywhere. This means the qemu binary won’t even contain code for those features.
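
For illustration, the flags involved look roughly like this (the real spec file passes many more options, and the Ceph flag only applies to non-x86_64 builds):

./configure [...] --disable-virtfs     # compiles out 9pfs
./configure [...] --disable-rbd        # compiles out Ceph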

If you run qemu-img --help in RHEL 7, you’ll see the drivers which are compiled into the binary:

$ rpm -qf /usr/bin/qemu-img
$ qemu-img --help
Supported formats: vvfat vpc vmdk vhdx vdi ssh
sheepdog rbd raw host_cdrom host_floppy host_device
file qed qcow2 qcow parallels nbd iscsi gluster dmg
tftp ftps ftp https http cloop bochs blkverify blkdebug

Although you can use all of those formats with qemu-img, not all of them work in qemu (the hypervisor). qemu implements two whitelists. The RHEL 7 qemu-kvm.spec file looks like this:

./configure [...]
    --block-drv-rw-whitelist=qcow2,raw,file,host_device,blkdebug,nbd,iscsi,gluster,rbd \

The --block-drv-rw-whitelist parameter configures the drivers for which full read and write access is permitted and supported in RHEL 7. It’s quite a short list!

Even shorter is the --block-drv-ro-whitelist parameter — drivers for which only read-only access is allowed. You can’t use qemu to open these files for writing. You can use them as (read-only) backing files, but you can’t commit changes to those backing files.
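
The parameter has the same shape as the read-write one. The line below is only illustrative; the authoritative list is in the qemu-kvm.spec file:

    --block-drv-ro-whitelist=vmdk,vhdx,vpc,ssh,https \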

In practice what happens is you get an error if you try to use non-whitelisted block drivers:

$ /usr/libexec/qemu-kvm -drive file=test.vpc
qemu-kvm: -drive file=test.vpc: could not open disk image test.vpc: Driver 'vpc' can only be used for read-only devices
$ /usr/libexec/qemu-kvm -drive file=test.qcow1
qemu-kvm: -drive file=test.qcow1: could not open disk image test.qcow1: Driver 'qcow' is not whitelisted

Note that’s a qcow v1 (ancient format) file, not modern qcow2.

Side note: Only qemu (the hypervisor) enforces the whitelist. Tools like qemu-img ignore it.
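
A quick way to see this (output omitted; the file name is made up):

$ qemu-img create -f vpc test.vpc 1G
$ qemu-img info test.vpc

Both commands succeed, even though qemu-kvm above refused to open the same image read-write.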

At the top of the stack, libguestfs has a patch which removes support for many remote protocols. Currently (RHEL 7.2/7.3) we disable: ftp, ftps, http, https, tftp, gluster, iscsi, sheepdog, ssh. That leaves only: local file, rbd (Ceph) and NBD enabled.

virt-v2v uses a mixture of libguestfs and qemu-img to convert VMware and Xen guests to run on KVM. To access VMware we need to use https, and to access Xen we use ssh. Both of those drivers are disabled in libguestfs, and only available read-only in the qemu whitelist. However that’s sufficient for virt-v2v, since all it does is add the https or ssh driver as a read-only backing file. (If you’re interested in how virt-v2v works, I gave a talk about it at the KVM Forum which is available online.)
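
Conceptually the trick looks like this: create a local qcow2 overlay whose read-only backing file is the remote disk, and do all writes to the overlay. A simplified sketch with made-up hostnames and paths (the real virt-v2v command line is more involved):

qemu-img create -f qcow2 \
    -b 'ssh://root@xen.example.com/var/lib/libvirt/images/guest.img' \
    overlay.qcow2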

In summary — it’s complicated.



Importing KVM guests to oVirt or RHEV

One of the tools I maintain is virt-v2v. It’s a program to import guests from foreign hypervisors like VMware and Xen to KVM. It only does conversions to KVM, not the other way. And a feature I intentionally removed in RHEL 7 was importing KVM → KVM.

Why would you want to “import” KVM → KVM? Well, no reason actually. In fact it’s one of those really bad ideas for V2V. However it used to have a useful purpose: oVirt/RHEV can’t import a plain disk image, but virt-v2v knows how to import things to oVirt, so people used virt-v2v as a backdoor for this missing feature.

Removing this virt-v2v feature has caused a lot of moaning, but I’m adamant it’s a very bad idea to use virt-v2v as a way to import disk images. Virt-v2v does all sorts of complex filesystem and Windows Registry manipulations, which you don’t want and don’t need if your guest already runs on KVM. Worst case, you could even end up breaking your guest.

However I have now written a replacement script that does the job: http://git.annexia.org/?p=import-to-ovirt.git

If your guest is a disk image that already runs on KVM, then you can use this script to import the guest. You’ll need to clone the git repo, read the README file, and then read the tool’s man page. It’s pretty straightforward.
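
Roughly (the clone URL may differ slightly from the gitweb link above):

git clone http://git.annexia.org/import-to-ovirt.git
cd import-to-ovirt
less README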

There are a few shortcomings with this script to be aware of:

  1. The guest must have virtio drivers installed already, and must be able to boot off virtio-blk (default) or virtio-scsi. For virtio-scsi, you’ll need to flip the checkbox in the ‘Advanced’ section of the guest parameters in the oVirt UI.
  2. It should be possible to import guests that don’t have virtio drivers installed, but can use IDE. This is a missing feature (patches welcome).
  3. No network card is added to the guest, so it probably won’t have network when it boots. It should be possible to add a network card through the UI, but really this is something that needs to be fixed in the script (patches welcome).
  4. It doesn’t handle all the random packaging formats that guests come in, like OVA. You’ll have to extract these first and import just the disk image.
  5. It’s not in any way supported or endorsed by Red Hat.



How to rebuild libguestfs from source on RHEL or CentOS 7

Three people have asked me about this, so here goes. You will need a RHEL or CentOS 7.1 machine (perhaps a VM), and you may need to grab extra packages from this preview repository. The preview repo will go away when we release 7.2, but then again 7.2 should contain all the packages you need.

You’ll need to install rpm-build. You could also install mock (from EPEL), but in fact you don’t need mock to build libguestfs and it may be easier and faster without.
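
On a stock RHEL/CentOS 7 install that means something like the following (run as root; yum-utils supplies the yumdownloader and yum-builddep commands used later):

yum install rpm-build yum-utils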

Please don’t build libguestfs as root. It’s not necessary to build (any) packages as root, and doing so can even be dangerous.

Grab the source RPM. The latest at time of writing is libguestfs-1.28.1-1.55.el7.src.rpm. When 7.2 comes out, you’ll be able to get the source RPM using this command:

yumdownloader --source libguestfs

I find it helpful to build RPMs in my home directory, and also to disable the libguestfs tests. To do that, I have a ~/.rpmmacros file that contains:

%_topdir	%(echo $HOME)/rpmbuild
%_smp_mflags	-j5
%libguestfs_runtests   0

You may wish to adjust %_smp_mflags. A good value is -jN where N is 1 + the number of cores on your machine.

I’ll assume at this point that the reason you want to rebuild libguestfs is to apply a patch (otherwise why aren’t you using the binaries we supply?), so first let’s unpack the source tree. Note I am running this command as non-root:

rpm -i libguestfs-1.28.1-1.55.el7.src.rpm

If you set up ~/.rpmmacros as above then the spec file will be unpacked under ~/rpmbuild/SPECS and the sources under ~/rpmbuild/SOURCES.

Take a look at least at the libguestfs.spec file. You may wish to modify it now to add any patches you need (add the patch files to the SOURCES/ subdirectory). You might also want to modify the Release: tag so that your package doesn’t conflict with the official package.
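
As a sketch of what that usually looks like (the patch name and number are hypothetical; copy the style of the existing Patch lines in libguestfs.spec and check how %prep applies them):

# Hypothetical patch entry, added next to the existing Patch lines:
Patch9999: my-local-fix.patch

# And a modified Release: tag, for example:
Release: 1.55.local.1%{?dist}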

You might also need to install build dependencies. This command should be run as root since it needs to install packages, and also note that you may need packages from the repo linked above.

yum-builddep libguestfs.spec

Now you can rebuild libguestfs (non-root!):

rpmbuild -ba libguestfs.spec

With the tests disabled, on decent hardware, that should take about 10 minutes.

The final binary packages will end up in ~/rpmbuild/RPMS/ and can be installed as normal:

yum localupdate x86_64/*.rpm noarch/*.rpm

You might see errors during the build phase. If they aren’t fatal, you can ignore them, but if the build fails then post the complete log to our mailing list (you don’t need to subscribe) so we can help you out.



My KVM Forum 2015 talk: New qemu technology used in virt-v2v

All KVM Forum talks can be found here.


KVM Forum 2015

Assuming HMG can get my passport back to me in time, I am speaking at the KVM Forum 2015 in Seattle, USA (full schedule of talks here).

I’m going to be talking about virt-v2v and new features of qemu/KVM that made it possible for virt-v2v to be faster and more reliable than ever.


New in virt-v2v


Video: virt-v2v integration with RHEV-M

This video shows using the GUI to import a virtual machine from VMware to RHEV-M. It performs the conversion using virt-v2v, which is responsible for installing virtio drivers, fixing the bootloader, and so forth.

Thanks Arik Hadas. Now I just have to fix the epic RHEL 7.2 bug list — 57 bugs at last count :-(

