Tag Archives: libguestfs

Tip: Updating RHEL 7.1 cloud images using virt-customize and subscription-manager

Red Hat provide RHEL KVM guest and cloud images. At the time of writing, the latest one was built in February 2015, so it undoubtedly contains packages which are out of date or insecure.

You can use virt-customize to update the packages in the cloud image. This requires the libguestfs subscription-manager feature, which will only be available in RHEL 7.3, but see here for RHEL 7.3 preview packages. Alternatively you can use Fedora ≥ 22.

$ virt-customize \
  -a rhel-guest-image-7.1-20150224.0.x86_64.qcow2 \
  --sm-credentials 'USERNAME:password:PASSWORD' \
  --sm-register --sm-attach auto \
  --update
[   0.0] Examining the guest ...
[  17.2] Setting a random seed
[  17.2] Registering with subscription-manager
[  28.8] Attaching to compatible subscriptions
[  61.3] Updating core packages
[ 976.8] Finishing off
  1. You should probably use --sm-credentials USERNAME:file:FILENAME to specify your password using a file, rather than having it exposed on the command line (see the sketch after this list).
  2. The command above will leave the image template registered to RHN. To unregister it, add --sm-unregister at the end.
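
For example, here is a minimal sketch combining both notes; the password file name is illustrative:

$ echo -n 'PASSWORD' > sm-password
$ virt-customize \
  -a rhel-guest-image-7.1-20150224.0.x86_64.qcow2 \
  --sm-credentials 'USERNAME:file:sm-password' \
  --sm-register --sm-attach auto \
  --update \
  --sm-unregister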



Libguestfs running on the LG G4


Previously …

The massive Bluetooth keyboard is a Logitech K480 and rather nice.


October 3, 2015 · 11:53 am

“make” and queuing theory

One directory in the libguestfs sources has 101 source files with a wide range of sizes:

$ ls -gGS *.c
-r--r--r--. 1 498686 Aug  4 20:01 stubs.c
-rw-rw-r--. 2 203439 Sep 18 14:52 guestfs_protocol.c
-rw-rw-r--. 1  51723 Jul 28 14:15 btrfs.c
-rw-rw-r--. 1  36644 Jul 28 14:15 guestfsd.c
-rw-rw-r--. 1  32477 Jul 28 14:15 ext2.c
[...]
-rw-rw-r--. 1   1120 Feb 14  2015 rename.c
-rw-rw-r--. 1   1073 Feb 14  2015 sleep.c
-rw-rw-r--. 1   1065 Feb 14  2015 echo-daemon.c
-rw-rw-r--. 1    961 Feb 14  2015 pingdaemon.c

If we take file size as a proxy for compilation time, stubs.c is probably going to take the longest time to compile.

The current Makefile builds them in alphabetical order. Unfortunately, because stubs.c is near the end of the list, the final link step has to wait for stubs.c to finish compiling. While waiting, only a single core is in use and all the other cores are idle.

Can we organize builds to get an overall faster compile?

Simple queuing theory suggests two (obvious) possibilities: starting the builds shortest first and longest last minimizes the average time to complete each individual job.

But what we care about is overall compile time, so starting the longest jobs first should be better, since that way the final link shouldn’t need to wait for a late-started long compile.

Build order                      Total time
Alphabetical                     10.5 s
Shortest (smallest) job first    10.3 s
Longest (largest) job first       8.5 s
A random order                    8.5 s

So my conclusion is that make could do better by having some way to reorder the list of input files by file size. Even randomly reordering the files could improve some compiles (although that would depend on the particular output of your PRNG on the day you ran the compile).

GNU make has no $(sort_by_file_size) function, but we can get the same effect by using $(shell ls -1S list of source files).
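
For instance, a minimal sketch of a Makefile using this trick (assuming GNU make; the target and variable names are illustrative):

# List the C sources largest-first: "ls -1S" prints one file per line,
# sorted by decreasing size, so the longest compiles start earliest.
SOURCES := $(shell ls -1S *.c)
OBJECTS := $(SOURCES:.c=.o)

daemon: $(OBJECTS)
	$(CC) $(CFLAGS) -o $@ $(OBJECTS)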

Unfortunately using GNU make functions is incompatible with automake. Grrrrrr. This is the partial solution I added to libguestfs.

An earlier version of this post had the wrong times in the table, leading to the wrong conclusion.



virt-v2v, libguestfs and qemu remote drivers in RHEL 7

Upstream qemu can access a variety of remote disks, like NBD and Ceph. This feature is exposed in libguestfs so you can easily mount remote storage.

However in RHEL 7 many of these drivers are disabled, because they’re not stable enough to support. I was asked exactly how this works, and this post is my answer — as it’s not as simple as it sounds.

There are (at least) five separate layers involved:

  1. qemu code: which block drivers are compiled into qemu, and which ones are compiled out completely.
  2. qemu block driver r/o whitelist: a whitelist of drivers that qemu allows you to use read-only.
  3. qemu block driver r/w whitelist: a whitelist of drivers that qemu allows you to use for read and write.
  4. libvirt: what libvirt enables (not covered in this discussion).
  5. libguestfs: in RHEL we patch out some qemu remote storage types using a custom patch.

Starting at the bottom of the stack, in RHEL we use ./configure --disable-* flags to disable a few features: Ceph is disabled on !x86_64 and 9pfs is disabled everywhere. This means the qemu binary won’t even contain code for those features.
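
Illustratively, that corresponds to configure flags along these lines (a sketch of the idea, not the exact RHEL spec file contents):

# 9pfs disabled everywhere:
./configure --disable-virtfs [...]
# Ceph disabled on non-x86_64 architectures:
./configure --disable-rbd [...]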

If you run qemu-img --help in RHEL 7, you’ll see the drivers which are compiled into the binary:

$ rpm -qf /usr/bin/qemu-img
$ qemu-img --help
Supported formats: vvfat vpc vmdk vhdx vdi ssh
sheepdog rbd raw host_cdrom host_floppy host_device
file qed qcow2 qcow parallels nbd iscsi gluster dmg
tftp ftps ftp https http cloop bochs blkverify blkdebug

Although you can use all of those in qemu-img, not all of those drivers work in qemu (the hypervisor). qemu implements two whitelists. The RHEL 7 qemu-kvm.spec file looks like this:

./configure [...]
    --block-drv-rw-whitelist=qcow2,raw,file,host_device,blkdebug,nbd,iscsi,gluster,rbd \

The --block-drv-rw-whitelist parameter configures the drivers for which full read and write access is permitted and supported in RHEL 7. It’s quite a short list!

Even shorter is the --block-drv-ro-whitelist parameter — drivers for which only read-only access is allowed. You can’t use qemu to open these files for write. You can use these as (read-only) backing files, but you can’t commit to those backing files.

In practice what happens is you get an error if you try to use non-whitelisted block drivers:

$ /usr/libexec/qemu-kvm -drive file=test.vpc
qemu-kvm: -drive file=test.vpc: could not open disk image
test.vpc: Driver 'vpc' can only be used for read-only devices
$ /usr/libexec/qemu-kvm -drive file=test.qcow1
qemu-kvm: -drive file=test.qcow1: could not open disk image
test.qcow1: Driver 'qcow' is not whitelisted

Note that’s a qcow v1 (ancient format) file, not modern qcow2.

Side note: Only qemu (the hypervisor) enforces the whitelist. Tools like qemu-img ignore it.

At the top of the stack, libguestfs has a patch which removes support for many remote protocols. Currently (RHEL 7.2/7.3) we disable: ftp, ftps, http, https, tftp, gluster, iscsi, sheepdog, ssh. That leaves only: local file, rbd (Ceph) and NBD enabled.

virt-v2v uses a mixture of libguestfs and qemu-img to convert VMware and Xen guests to run on KVM. To access VMware we need to use https and to access Xen we use ssh. Both of those drivers are disabled in libguestfs, and only available read-only in the qemu whitelist. However that’s sufficient for virt-v2v, since all it does is add the https or ssh driver as a read-only backing file. (If you are interested in finding out more about how virt-v2v works, then I gave a talk about it at the KVM Forum which is available online).
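
To make that concrete, typical virt-v2v invocations look something like this (hostnames and the guest name are illustrative); the first reaches VMware vCenter over https, the second reaches Xen over ssh:

$ virt-v2v -ic vpx://vcenter.example.com/Datacenter/esxi guest -o local -os /var/tmp
$ virt-v2v -ic xen+ssh://root@xen.example.com guest -o local -os /var/tmp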

In summary — it’s complicated.



Importing KVM guests to oVirt or RHEV

One of the tools I maintain is virt-v2v. It's a program that imports guests from foreign hypervisors like VMware and Xen to KVM. It only does conversions to KVM, not the other way around. And one feature I intentionally removed in RHEL 7 was importing KVM → KVM.

Why would you want to “import” KVM → KVM? Well, no reason actually. In fact it's one of those really bad ideas for V2V. However it used to have a useful purpose: oVirt/RHEV can't import a plain disk image, but virt-v2v knows how to import things to oVirt, so people used virt-v2v as a backdoor for this missing feature.

Removing this virt-v2v feature has caused a lot of moaning, but I’m adamant it’s a very bad idea to use virt-v2v as a way to import disk images. Virt-v2v does all sorts of complex filesystem and Windows Registry manipulations, which you don’t want and don’t need if your guest already runs on KVM. Worst case, you could even end up breaking your guest.

However I have now written a replacement script that does the job: http://git.annexia.org/?p=import-to-ovirt.git

If your guest is a disk image that already runs on KVM, then you can use this script to import the guest. You’ll need to clone the git repo, read the README file, and then read the tool’s man page. It’s pretty straightforward.
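
As a rough sketch of what that looks like (the clone URL, disk image name and export storage domain path here are illustrative assumptions; follow the README for the real steps):

$ git clone http://git.annexia.org/import-to-ovirt.git
$ cd import-to-ovirt
$ ./import-to-ovirt.pl guest.img nfs.example.com:/export_domain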

There are a few shortcomings with this script to be aware of:

  1. The guest must have virtio drivers installed already, and must be able to boot off virtio-blk (default) or virtio-scsi. For virtio-scsi, you’ll need to flip the checkbox in the ‘Advanced’ section of the guest parameters in the oVirt UI.
  2. It should be possible to import guests that don't have virtio drivers installed but can boot from IDE. This is a missing feature (patches welcome).
  3. No network card is added to the guest, so it probably won’t have network when it boots. It should be possible to add a network card through the UI, but really this is something that needs to be fixed in the script (patches welcome).
  4. It doesn’t handle all the random packaging formats that guests come in, like OVA. You’ll have to extract these first and import just the disk image.
  5. It’s not in any way supported or endorsed by Red Hat.



How to rebuild libguestfs from source on RHEL or CentOS 7

Three people have asked me about this, so here goes. You will need a RHEL or CentOS 7.1 machine (perhaps a VM), and you may need to grab extra packages from this preview repository. The preview repo will go away when we release 7.2, but then again 7.2 should contain all the packages you need.

You’ll need to install rpm-build. You could also install mock (from EPEL), but in fact you don’t need mock to build libguestfs and it may be easier and faster without.

Please don’t build libguestfs as root. It’s not necessary to build (any) packages as root, and can even be dangerous.

Grab the source RPM. The latest at time of writing is libguestfs-1.28.1-1.55.el7.src.rpm. When 7.2 comes out, you’ll be able to get the source RPM using this command:

yumdownloader --source libguestfs

I find it helpful to build RPMs in my home directory, and also to disable the libguestfs tests. To do that, I have a ~/.rpmmacros file that contains:

%_topdir	%(echo $HOME)/rpmbuild
%_smp_mflags	-j5
%libguestfs_runtests   0

You may wish to adjust %_smp_mflags. A good value to choose is 1 + the number of cores on your machine.
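
For example, nproc prints the number of cores, so the -j5 above corresponds to a 4-core machine:

$ nproc
4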

I’ll assume at this point that the reason you want to rebuild libguestfs is to apply a patch (otherwise why aren’t you using the binaries we supply?), so first let’s unpack the source tree. Note I am running this command as non-root:

rpm -i libguestfs-1.28.1-1.55.el7.src.rpm

If you set up ~/.rpmmacros as above then the sources should be unpacked under ~/rpmbuild/SPECS and ~/rpmbuild/SOURCES.

Take a look at least at the libguestfs.spec file. You may wish to modify it now to add any patches you need (add the patch files to the SOURCES/ subdirectory). You might also want to modify the Release: tag so that your package doesn’t conflict with the official package.
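
As a hypothetical illustration (the patch name and release suffix are invented for this example), the edits to libguestfs.spec might look like:

# In the preamble, next to the existing Patch lines:
Patch1001: my-fix.patch
# Change the Release tag to avoid conflicting with the official package:
Release: 1.55.1.local%{?dist}
# In the %prep section, apply the patch:
%patch1001 -p1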

You might also need to install build dependencies. This command should be run as root since it needs to install packages, and also note that you may need packages from the repo linked above.

yum-builddep libguestfs.spec

Now you can rebuild libguestfs (non-root!):

rpmbuild -ba libguestfs.spec

With the tests disabled, on decent hardware, that should take about 10 minutes.

The final binary packages will end up in ~/rpmbuild/RPMS/ and can be installed as normal:

yum localupdate x86_64/*.rpm noarch/*.rpm

You might see errors during the build phase. If they aren’t fatal, you can ignore them, but if the build fails then post the complete log to our mailing list (you don’t need to subscribe) so we can help you out.



My KVM Forum 2015 talk: New qemu technology used in virt-v2v

All KVM Forum talks can be found here.

