$ virt-builder fedora-23 \
    --root-password password:123456 \
    --size 20G
$ qemu-system-x86_64 -drive file=fedora-23,if=virtio -m 2048
Upstream qemu supports a large number of block drivers. However in RHEL 7 many of these drivers are disabled, because they’re not stable enough to support. I was asked exactly how this works, and this post is my answer, as it’s not as simple as it sounds.
There are (at least) five separate layers involved:
- qemu code: what block drivers are compiled into qemu, and which ones are compiled out completely.
- qemu block driver r/o whitelist: a whitelist of drivers that qemu allows you to use read-only.
- qemu block driver r/w whitelist: a whitelist of drivers that qemu allows you to use for read and write.
- libvirt: what libvirt enables (not covered in this discussion).
- libguestfs: in RHEL we patch out some qemu remote storage types using a custom patch.
Starting at the bottom of the stack, in RHEL we use ./configure --disable-* flags to disable a few features: Ceph is disabled on some architectures, and 9pfs is disabled everywhere. This means the qemu binary won’t even contain code for those features.
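The exact flags vary between qemu versions, but a sketch of the relevant configure options (not the actual RHEL spec file contents) looks like this:

# Ceph support is --disable-rbd; 9pfs (VirtFS) is --disable-virtfs.
./configure [...] --disable-rbd --disable-virtfs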
If you run qemu-img --help in RHEL 7, you’ll see the drivers which are compiled into the binary:
$ rpm -qf /usr/bin/qemu-img
qemu-img-1.5.3-92.el7.x86_64
$ qemu-img --help
[...]
Supported formats: vvfat vpc vmdk vhdx vdi ssh sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd iscsi gluster dmg tftp ftps ftp https http cloop bochs blkverify blkdebug
Although you can use all of those in qemu-img, not all of those drivers work in qemu (the hypervisor). qemu implements two whitelists. The RHEL 7 qemu-kvm.spec file looks like this:
./configure [...] \
    --block-drv-rw-whitelist=qcow2,raw,file,host_device,blkdebug,nbd,iscsi,gluster,rbd \
    --block-drv-ro-whitelist=vmdk,vhdx,vpc,ssh,https
The --block-drv-rw-whitelist parameter configures the drivers for which full read and write access is permitted and supported in RHEL 7. It’s quite a short list!
Even shorter is the --block-drv-ro-whitelist parameter: drivers for which only read-only access is allowed. You can’t use qemu to open these files for write. You can use these as (read-only) backing files, but you can’t commit to those backing files.
In practice what happens is you get an error if you try to use non-whitelisted block drivers:
$ /usr/libexec/qemu-kvm -drive file=test.vpc
qemu-kvm: -drive file=test.vpc: could not open disk image test.vpc: Driver 'vpc' can only be used for read-only devices
$ /usr/libexec/qemu-kvm -drive file=test.qcow1
qemu-kvm: -drive file=test.qcow1: could not open disk image test.qcow1: Driver 'qcow' is not whitelisted
Note that’s a qcow v1 (ancient format) file, not modern qcow2.
Side note: only qemu (the hypervisor) enforces the whitelist. Tools like qemu-img ignore it.
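For instance, qemu-img will happily create formats that qemu itself refuses to open. Test files like the ones used above could plausibly be made this way (the sizes are arbitrary):

$ qemu-img create -f vpc test.vpc 1G       # VHD: read-only in qemu, writable via qemu-img
$ qemu-img create -f qcow test.qcow1 1G    # qcow v1: not whitelisted in qemu at all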
At the top of the stack, libguestfs has a patch which removes support for many remote protocols. Currently (RHEL 7.2/7.3) we disable: ftp, ftps, http, https, tftp, gluster, iscsi, sheepdog, ssh. That leaves only: local file, rbd (Ceph) and NBD enabled.
virt-v2v uses a mixture of libguestfs and qemu-img to convert VMware and Xen guests to run on KVM. To access VMware we need to use https, and to access Xen we use ssh. Both of those drivers are disabled in libguestfs, and only available read-only in the qemu whitelist. However that’s sufficient for virt-v2v, since all it does is add the https or ssh driver as a read-only backing file. (If you are interested in finding out more about how virt-v2v works, then I gave a talk about it at the KVM Forum which is available online.)
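To make the backing-file trick concrete, here is a minimal sketch using qemu-img: a local qcow2 overlay whose read-only backing file is a remote disk accessed over ssh. The hostname and path are invented for illustration.

$ qemu-img create -f qcow2 \
    -b ssh://root@xenhost.example.com/var/lib/xen/images/guest.img \
    overlay.qcow2
# All writes go to overlay.qcow2; the remote disk is only ever read.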
In summary — it’s complicated.
One of the tools I maintain is virt-v2v. It’s a program to import guests from foreign hypervisors like VMware and Xen to KVM. It only does conversions to KVM, not the other way around. And a feature I intentionally removed in RHEL 7 was importing KVM → KVM.
Why would you want to “import” KVM → KVM? Well, no reason actually. In fact it’s one of those really bad ideas for V2V. However it used to have a useful purpose: oVirt/RHEV can’t import a plain disk image, but virt-v2v knows how to import things to oVirt, so people used virt-v2v as a backdoor for this missing feature.
Removing this virt-v2v feature has caused a lot of moaning, but I’m adamant it’s a very bad idea to use virt-v2v as a way to import disk images. Virt-v2v does all sorts of complex filesystem and Windows Registry manipulations, which you don’t want and don’t need if your guest already runs on KVM. Worst case, you could even end up breaking your guest.
However I have now written a replacement script that does the job: http://git.annexia.org/?p=import-to-ovirt.git
If your guest is a disk image that already runs on KVM, then you can use this script to import the guest. You’ll need to clone the git repo, read the README file, and then read the tool’s man page. It’s pretty straightforward.
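As a sketch, the whole process might look something like this; the clone URL, script name, guest image and export storage domain path are all examples, so check the README and man page for the real usage:

$ git clone http://git.annexia.org/import-to-ovirt.git   # clone URL guessed from the gitweb link above
$ cd import-to-ovirt
$ ./import-to-ovirt.pl guest.img ovirt-host:/export_domain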
There are a few shortcomings with this script to be aware of:
- The guest must have virtio drivers installed already, and must be able to boot off virtio-blk (default) or virtio-scsi. For virtio-scsi, you’ll need to flip the checkbox in the ‘Advanced’ section of the guest parameters in the oVirt UI.
- It should be possible to import guests that don’t have virtio drivers installed, but can use IDE. This is a missing feature (patches welcome).
- No network card is added to the guest, so it probably won’t have network when it boots. It should be possible to add a network card through the UI, but really this is something that needs to be fixed in the script (patches welcome).
- It doesn’t handle all the random packaging formats that guests come in, like OVA. You’ll have to extract these first and import just the disk image.
- It’s not in any way supported or endorsed by Red Hat.
Three people have asked me about this, so here goes. You will need a RHEL or CentOS 7.1 machine (perhaps a VM), and you may need to grab extra packages from this preview repository. The preview repo will go away when we release 7.2, but then again 7.2 should contain all the packages you need.
You’ll need to install rpm-build. You could also install mock (from EPEL), but in fact you don’t need mock to build libguestfs and it may be easier and faster without.
Please don’t build libguestfs as root. It’s not necessary to build (any) packages as root, and can even be dangerous.
Grab the source RPM. The latest at time of writing is libguestfs-1.28.1-1.55.el7.src.rpm. When 7.2 comes out, you’ll be able to get the source RPM using this command:
yumdownloader --source libguestfs
I find it helpful to build RPMs in my home directory, and also to disable the libguestfs tests. To do that, I have a ~/.rpmmacros file that contains:
%_topdir %(echo $HOME)/rpmbuild
%_smp_mflags -j5
%libguestfs_runtests 0
You may wish to adjust %_smp_mflags. A good value to choose is 1 + the number of cores on your machine.
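If you would rather compute that value than hard-code it, one option (assuming GNU coreutils’ nproc is available) is:

$ echo "%_smp_mflags -j$(($(nproc) + 1))" >> ~/.rpmmacros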
I’ll assume at this point that the reason you want to rebuild libguestfs is to apply a patch (otherwise why aren’t you using the binaries we supply?), so first let’s unpack the source tree. Note I am running this command as non-root:
rpm -i libguestfs-1.28.1-1.55.el7.src.rpm
If you set up ~/.rpmmacros as above then the sources should be unpacked under ~/rpmbuild/, with the spec file in SPECS/ and the sources and patches in SOURCES/.
Take a look at least at the libguestfs.spec file. You may wish to modify it now to add any patches you need (add the patch files to the SOURCES/ subdirectory). You might also want to modify the Release: tag so that your package doesn’t conflict with the official package.
You might also need to install build dependencies. This step should be run as root since it needs to install packages, and also note that you may need packages from the repo linked above.
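One way to do this, assuming the yum-utils package (which provides yum-builddep) is installed:

# yum-builddep libguestfs.spec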
Now you can rebuild libguestfs (non-root!):
rpmbuild -ba libguestfs.spec
With the tests disabled, on decent hardware, that should take about 10 minutes.
The final binary packages will end up in ~/rpmbuild/RPMS/ and can be installed as normal:
yum localupdate x86_64/*.rpm noarch/*.rpm
You might see errors during the build phase. If they aren’t fatal, you can ignore them, but if the build fails then post the complete log to our mailing list (you don’t need to subscribe) so we can help you out.
Slowly, of course.
$ cat localenv
export SUPERMIN=/home/rjones/d/supermin-mipsel/src/supermin
export LIBGUESTFS_HV=/home/rjones/d/qemu-mipsel/mipsel-softmmu/qemu-system-mipsel
export SUPERMIN_KERNEL=/home/rjones/d/libguestfs-mipsel/kernel/boot/vmlinux-3.16.0-0.bpo.4-4kc-malta
export SUPERMIN_KERNEL_VERSION=3.16.0-0.bpo.4-4kc-malta
export SUPERMIN_MODULES=/home/rjones/d/libguestfs-mipsel/kernel/lib/modules/3.16.0-0.bpo.4-4kc-malta/
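With those variables in the environment, the libguestfs tools pick up the MIPS qemu and the cross-built appliance kernel. A minimal smoke test might look like this (guestfish’s run command just boots the appliance):

$ source localenv
$ guestfish -a /dev/null run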
Debian 8 was released a couple of days ago, and you can now install it through virt-builder.
Use --notes to read the release notes:
$ virt-builder debian-8 --notes
To build an image:
$ virt-builder debian-8 \
    --firstboot-command "dpkg-reconfigure openssh-server"
To boot it under libvirt:
$ virt-install --import \
    --name debian-8 --ram 2048 \
    --disk path=debian-8.img,format=raw \
    --os-variant=debianwheezy
(At some point --os-variant=debianjessie will work, but virt-install doesn’t support it yet.)
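If you want to check which variants your version of virt-install knows about, one option (assuming the libosinfo tools are installed) is:

$ osinfo-query os | grep -i debian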
Update: This is how I ended up running Debian 8:
$ virt-builder debian-8 \
    --size=30G \
    --root-password PASSWORD \
    --edit '/etc/apt/sources.list: s/wheezy/jessie/g' \
    --run-command '
      apt-get -y install debian-keyring debian-archive-keyring
      apt-key update
    ' \
    --install emacs,nfs-common,sudo \
    --edit '/etc/ssh/sshd_config: s/^#PermitEmptyPasswords no/PermitEmptyPasswords yes/' \
    --firstboot FIRSTBOOT.sh \
    --run-command 'update-rc.d virt-sysprep-firstboot defaults' \
    --run-command 'killall dbus-daemon cgmanager ||:'