```
$ virt-builder fedora-23 \
    --root-password password:123456 \
    --size 20G
$ qemu-system-x86_64 -drive file=fedora-23,if=virtio -m 2048
```
Tag Archives: libguestfs
Since virt-builder 1.31.10, Cédric Bosdonnat has added a repository of openSUSE images.
Here’s how to install OpenSUSE Leap in about 4 minutes:
```
$ virt-builder -l
...
opensuse-13.1            x86_64     openSUSE 13.1
opensuse-13.2            x86_64     openSUSE 13.2
opensuse-42.1            x86_64     openSUSE Leap 42.1
opensuse-tumbleweed      x86_64     openSUSE Tumbleweed

$ virt-builder opensuse-42.1
[   1.2] Downloading: http://download.opensuse.org/repositories/Virtualization:/virt-builder-images/images/openSUSE-Leap-42.1.x86_64-0.0.1-Build1.6.qcow2.xz
######################################################################## 100.0%
[ 133.1] Planning how to build this image
[ 133.1] Uncompressing
[ 142.5] Converting qcow2 to raw
[ 146.4] Opening the new disk
[ 177.3] Setting a random seed
[ 177.3] Setting passwords
virt-builder: Setting random password of root to ***
[ 178.1] Finishing off
                   Output file: opensuse-42.1.img
                   Output size: 6.0G
                 Output format: raw
            Total usable space: 5.8G
                    Free space: 5.0G (85%)

$ virt-install --import --name opensuse \
    --ram 4096 \
    --disk opensuse-42.1.img,format=raw \
    --os-variant opensuse13.1
```
Red Hat provide RHEL KVM guest and cloud images. At the time of writing, the most recent one was built in February 2015, so it undoubtedly contains packages which are out of date or insecure.
You can use virt-customize to update the packages in the cloud image. This requires the libguestfs subscription-manager feature, which will only be available in RHEL 7.3, but see here for RHEL 7.3 preview packages. Alternatively, you can use Fedora ≥ 22.
```
$ virt-customize \
    -a rhel-guest-image-7.1-20150224.0.x86_64.qcow2 \
    --sm-credentials 'USERNAME:password:PASSWORD' \
    --sm-register --sm-attach auto \
    --update
[   0.0] Examining the guest ...
[  17.2] Setting a random seed
[  17.2] Registering with subscription-manager
[  28.8] Attaching to compatible subscriptions
[  61.3] Updating core packages
[ 976.8] Finishing off
```
- You should probably use --sm-credentials USERNAME:file:FILENAME to specify your password using a file, rather than having it exposed on the command line.
- The command above will leave the image template registered to RHN. To unregister it, add --sm-unregister at the end.
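As a sketch of that first tip: the steps below store the password in a private file and then reference it with the file: selector. The file name and the USERNAME/PASSWORD values are placeholders, and the virt-customize invocation itself is shown commented out since it needs a real RHEL image and real credentials.

```shell
# Store the subscription-manager password in a file only the current
# user can read, instead of passing it on the command line where it
# would be visible in `ps` output and shell history.
# "sm-password" and the password value are placeholders.
umask 077
printf '%s' 'PASSWORD' > sm-password

# Then reference the file with the file: selector (not run here):
# virt-customize -a rhel-guest-image-7.1-20150224.0.x86_64.qcow2 \
#     --sm-credentials 'USERNAME:file:sm-password' \
#     --sm-register --sm-attach auto --update --sm-unregister
```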
```
$ ls -gGS *.c
-r--r--r--. 1 498686 Aug  4 20:01 stubs.c
-rw-rw-r--. 2 203439 Sep 18 14:52 guestfs_protocol.c
-rw-rw-r--. 1  51723 Jul 28 14:15 btrfs.c
-rw-rw-r--. 1  36644 Jul 28 14:15 guestfsd.c
-rw-rw-r--. 1  32477 Jul 28 14:15 ext2.c
...
-rw-rw-r--. 1   1120 Feb 14  2015 rename.c
-rw-rw-r--. 1   1073 Feb 14  2015 sleep.c
-rw-rw-r--. 1   1065 Feb 14  2015 echo-daemon.c
-rw-rw-r--. 1    961 Feb 14  2015 pingdaemon.c
```
If we take file size as a proxy for compilation time, stubs.c is probably going to take the longest to compile. The current Makefile builds the files in alphabetical order. Unfortunately, because stubs.c is near the end of the list, the final link step has to wait for stubs.c to finish. While waiting, only a single core is being used and all the other cores are idle.
Can we organize builds to get an overall faster compile?
Simple queueing theory suggests two (obvious) possibilities: starting the builds from the shortest first to the longest last minimizes the average time we wait for any individual job to complete.
But what we care about is overall compile time, so starting the longest jobs first should be better, since that way the final link shouldn’t need to wait for a late-started long compile.
| Build order | Total time |
|---|---|
| Shortest (smallest) job first | 10.3 s |
| Longest (largest) job first | 8.5 s |
| A random order | 8.5 s |
So my conclusion is that make could do better by having some way to reorder the list of input files by file size. Even randomly reordering the files could improve some compiles (although that would depend on the particular output of your PRNG on the day you ran the compile).
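The scheduling intuition can be illustrated with a toy simulation: made-up job durations and a greedy two-core scheduler, not a model of the real libguestfs build. One long job started last stalls the whole run; started first, it overlaps with everything else.

```shell
# Greedily assign each job duration (in order) to the less-loaded of
# two "cores", then report the makespan (when the last core finishes).
schedule() {
    printf '%s\n' "$@" | awk '
        { if (c1 <= c2) c1 += $1; else c2 += $1 }
        END { print (c1 > c2 ? c1 : c2) }'
}

# Four short jobs and one long one (durations are invented):
echo "shortest first: $(schedule 1 1 1 1 8)"   # prints 10
echo "longest first:  $(schedule 8 1 1 1 1)"   # prints 8
```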
GNU make has no $(sort_by_file_size) function, but we can get the same effect by using $(shell ls -1S list of source files).
Unfortunately using GNU make functions is incompatible with automake. Grrrrrr. This is the partial solution I added to libguestfs.
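As a sketch of the effect, using a throwaway directory with dummy files whose sizes are invented (this is not the real libguestfs source tree):

```shell
# Create dummy source files whose sizes roughly mimic the listing above.
demo=$(mktemp -d)
head -c 400000 /dev/zero > "$demo/stubs.c"
head -c 50000  /dev/zero > "$demo/btrfs.c"
head -c 1000   /dev/zero > "$demo/sleep.c"

# ls -1S lists largest first, the order we want compiles to start in.
# In a hand-written (non-automake) Makefile this could be used as:
#   SOURCES = $(shell ls -1S *.c)
(cd "$demo" && ls -1S *.c)   # prints stubs.c, btrfs.c, sleep.c
```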
An earlier version of this post had the wrong times in the table, leading to the wrong conclusion.
However in RHEL 7 many of these drivers are disabled, because they’re not stable enough to support. I was asked exactly how this works, and this post is my answer — as it’s not as simple as it sounds.
There are (at least) five separate layers involved:
| Layer | What it controls |
|---|---|
| qemu code | Which block drivers are compiled into qemu, and which ones are compiled out completely. |
| qemu block driver r/o whitelist | A whitelist of drivers that qemu allows you to use read-only. |
| qemu block driver r/w whitelist | A whitelist of drivers that qemu allows you to use for read and write. |
| libvirt | What libvirt enables (not covered in this discussion). |
| libguestfs | In RHEL we patch out some qemu remote storage types using a custom patch. |
Starting at the bottom of the stack, in RHEL we use ./configure --disable-* flags to disable a few features: Ceph is disabled on some architectures, and 9pfs is disabled everywhere. This means the qemu binary won’t even contain code for those features.
If you run qemu-img --help in RHEL 7, you’ll see the drivers which are compiled into the binary:

```
$ rpm -qf /usr/bin/qemu-img
qemu-img-1.5.3-92.el7.x86_64
$ qemu-img --help
[...]
Supported formats: vvfat vpc vmdk vhdx vdi ssh sheepdog rbd raw
host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd
iscsi gluster dmg tftp ftps ftp https http cloop bochs blkverify blkdebug
```
Although you can use all of those in qemu-img, not all of those drivers work in qemu (the hypervisor). qemu implements two whitelists. The RHEL 7 qemu-kvm.spec file looks like this:
```
./configure [...] \
    --block-drv-rw-whitelist=qcow2,raw,file,host_device,blkdebug,nbd,iscsi,gluster,rbd \
    --block-drv-ro-whitelist=vmdk,vhdx,vpc,ssh,https
```
The --block-drv-rw-whitelist parameter configures the drivers for which full read and write access is permitted and supported in RHEL 7. It’s quite a short list!

Even shorter is the --block-drv-ro-whitelist parameter, which lists the drivers for which only read-only access is allowed. You can’t use qemu to open these files for writing. You can use them as (read-only) backing files, but you can’t commit to those backing files.
In practice what happens is you get an error if you try to use non-whitelisted block drivers:
```
$ /usr/libexec/qemu-kvm -drive file=test.vpc
qemu-kvm: -drive file=test.vpc: could not open disk image test.vpc:
Driver 'vpc' can only be used for read-only devices
$ /usr/libexec/qemu-kvm -drive file=test.qcow1
qemu-kvm: -drive file=test.qcow1: could not open disk image test.qcow1:
Driver 'qcow' is not whitelisted
```
Note that’s a qcow v1 (ancient format) file, not modern qcow2.
Side note: only qemu (the hypervisor) enforces the whitelist. Tools like qemu-img ignore it.
At the top of the stack, libguestfs has a patch which removes support for many remote protocols. Currently (RHEL 7.2/7.3) we disable: ftp, ftps, http, https, tftp, gluster, iscsi, sheepdog, ssh. That leaves only: local file, rbd (Ceph) and NBD enabled.
virt-v2v uses a mixture of libguestfs and qemu-img to convert VMware and Xen guests to run on KVM. To access VMware we need to use https, and to access Xen we use ssh. Both of those drivers are disabled in libguestfs, and only available read-only in the qemu whitelist. However that’s sufficient for virt-v2v, since all it does is add the https or ssh driver as a read-only backing file. (If you are interested in finding out more about how virt-v2v works, I gave a talk about it at the KVM Forum which is available online.)
In summary — it’s complicated.
One of the tools I maintain is virt-v2v. It’s a program to import guests from foreign hypervisors like VMware and Xen, to KVM. It only does conversions to KVM, not the other way. And a feature I intentionally removed in RHEL 7 was importing KVM → KVM.
Why would you want to “import” KVM → KVM? Well, no reason actually. In fact it’s one of those really bad ideas for V2V. However it used to have a useful purpose: oVirt/RHEV can’t import a plain disk image, but virt-v2v knows how to import things to oVirt, so people used virt-v2v as a backdoor for this missing feature.
Removing this virt-v2v feature has caused a lot of moaning, but I’m adamant it’s a very bad idea to use virt-v2v as a way to import disk images. Virt-v2v does all sorts of complex filesystem and Windows Registry manipulations, which you don’t want and don’t need if your guest already runs on KVM. Worst case, you could even end up breaking your guest.
However I have now written a replacement script that does the job: http://git.annexia.org/?p=import-to-ovirt.git
If your guest is a disk image that already runs on KVM, then you can use this script to import the guest. You’ll need to clone the git repo, read the README file, and then read the tool’s man page. It’s pretty straightforward.
There are a few shortcomings with this script to be aware of:
- The guest must have virtio drivers installed already, and must be able to boot off virtio-blk (default) or virtio-scsi. For virtio-scsi, you’ll need to flip the checkbox in the ‘Advanced’ section of the guest parameters in the oVirt UI.
- It should be possible to import guests that don’t have virtio drivers installed, but can use IDE. This is a missing feature (patches welcome).
- No network card is added to the guest, so it probably won’t have network when it boots. It should be possible to add a network card through the UI, but really this is something that needs to be fixed in the script (patches welcome).
- It doesn’t handle all the random packaging formats that guests come in, like OVA. You’ll have to extract these first and import just the disk image.
- It’s not in any way supported or endorsed by Red Hat.