Tag Archives: virt-tools

virt-v2v: better living through new technology

If you ever used the old version of virt-v2v, our software that converts guests to run on KVM, then you probably found it slow, and worse still it could fail right at the end of the conversion (after possibly an hour or more). No one liked that, least of all the developers and support people who had to help people use it.

A V2V conversion is intrinsically going to take a long time, because it always involves copying huge disk images around. These can be gigabytes or even terabytes in size.

My main aim with the rewrite was to do all the work up front (and if the conversion is going to fail, then fail early), and leave the huge copy to the last step. The second aim was to work much harder to minimize the amount of data that we need to copy, so the copy is quicker. I achieved both of these aims using a lot of new technology that we developed for qemu in RHEL 7.

Virt-v2v now works by putting an overlay on top of the source disk. This overlay protects the source disk from being modified. All the writes made to the source disk during conversion (e.g. modifying config files and adding device drivers) are saved into the overlay instead. Then we use qemu-img convert to copy the overlay out to the final target (a rough sketch of this pipeline, using plain qemu-img commands, follows the list below). Although this sounds simple and possibly obvious, none of it could have been done when we wrote the old virt-v2v. It is possible now because:

  • qcow2 overlays can now have virtual backing files that come from HTTPS or SSH sources. This allows us to place the overlay on top of (eg) a VMware vCenter Server source without having to copy the whole disk from the source first.
  • qcow2 overlays can perform copy-on-read. This means you only need to read each block of data from the source once, and then it is cached in the overlay, making things much faster.
  • qemu now has excellent discard and trim support. To minimize the amount of data that we copy, we first fstrim the filesystems. This causes the overlay to remember which bits of the filesystem are used and only copy those bits.
  • I added support for fstrim to ntfs-3g so this works for Windows guests too.
  • libguestfs has support for remote storage, cachemode, discard, copy-on-read and more, meaning we can use all these features in virt-v2v.
  • We use OCaml, not C or other type-unsafe languages, so the compiler helps us find bugs in the code we write, and we end up with an optimized, standalone binary that needs no runtime support or interpreter and can be shipped everywhere.
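
Here is a rough sketch of this pipeline using plain qemu-img commands. It is not what virt-v2v actually runs (virt-v2v drives qemu through libguestfs, and the vCenter URL below is a made-up example), but it shows the shape of the technique:

# Create a qcow2 overlay whose backing "file" is fetched over HTTPS.
# The remote source disk is only ever read, never written.
qemu-img create -f qcow2 \
  -b 'json:{"file.driver":"https", "file.url":"https://vcenter.example.com/folder/guest/guest-flat.vmdk"}' \
  -F raw overlay.qcow2

# All conversion-time writes (driver installation, config file edits)
# land in overlay.qcow2.  Running fstrim inside the guest first means
# unused filesystem blocks read back as zeroes.

# Copy the result out to the final target; zero blocks are not stored.
qemu-img convert -O qcow2 overlay.qcow2 final-disk.qcow2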

New in libguestfs 1.27.34 – virt-v2v and virt-p2v

There haven’t been too many updates around here for a while, and that’s for a very good reason: I’ve been “heads down” writing the new versions of virt-v2v and virt-p2v, our tools for converting VMware and Xen virtual machines, or physical machines, to run on KVM.

The new virt-v2v [manual page] can slurp in a guest from a local disk image, local Xen, VMware vCenter, or (soon) an OVA file — convert it to run on KVM — and write it out to RHEV-M, OpenStack Glance, local libvirt or as a plain disk image.

It’s easy to use too. Unlike the old virt-v2v there are no hairy configuration files to edit or complicated preparations. You simply do:

$ virt-v2v -i disk xen_disk.img -o local -os /tmp

That command (which doesn’t need root, naturally) takes the Xen disk image, which could contain any supported Windows or Enterprise Linux guest, converts it to run on KVM (e.g. installing virtio drivers, adjusting dozens of configuration files), and writes it out to /tmp.

To connect to a VMware vCenter server, change the -i options to:

$ virt-v2v -ic vpx://vcenter/Datacenter/esxi "esx guest name" [-o ...]

To output the converted disk image to OpenStack Glance, change the -o options to:

$ virt-v2v [-i ...] -o glance [-on glance_image_name]
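
And to write to a RHEV-M export storage domain, the -o options become something like the following (the NFS path is a made-up example; see the manual page for the exact form):

$ virt-v2v [-i ...] -o rhev -os rhev-storage.example.com:/export-domain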

Coming up: The new technology we’ve used to make virt-v2v much faster.

libguestfs now works on 64 bit ARM

[Photo: a 64 bit ARM server]

Pictured above is my 64 bit ARM server. It’s under NDA so I cannot tell you who supplied it or even show you a proper photo.

However, it runs Fedora 21 & Rawhide:

Linux arm64.home.annexia.org 3.16.0-0.rc6.git1.1.efirtcfix1.fc22.aarch64 #1 SMP Wed Jul 23 12:15:58 BST 2014 aarch64 aarch64 aarch64 GNU/Linux

libvirt and libguestfs run fine, with full KVM acceleration, although right now you have to use qemu from git as the Rawhide version of qemu is not new enough.

Also OCaml 4.02.0 beta works (after we found and fixed a few bugs in the arm64 native code generator last week).

New(ish) in libguestfs 1.27.23 — add firstboot batch files to Windows guests

You’ve been able to do this for a while by hand but now virt-sysprep & virt-customize ≥ 1.27.23 let you easily install firstboot scripts into Windows guests:

$ cat /tmp/test.bat
echo Hello I am a batch file
$ virt-customize -a win7.qcow2 --firstboot /tmp/test.bat

Next time the guest boots, check the log file in C:\Program Files\Red Hat\Firstboot\log.txt

This works well for me in Windows 7 guests. It ought to work in other Windows guests too. So far the only other Windows flavour I tested was W2K3 where the service crashed for some unfathomable reason (I’m not very patient with debugging Windows problems).

So let us know how it goes and we’ll try to fix the bugs as we go along.

Tip: Use gdbserver to debug qemu running under libguestfs

If qemu crashes or fails when run under libguestfs, it can be a bit hard to debug things. However a small qemu wrapper and gdbserver can help.

Create a file called qemu-wrapper, make it executable (chmod +x), and give it the following contents:

#!/bin/bash -

# libguestfs also runs qemu with -help, -version and -device ? to probe
# its features; only attach gdbserver to the real qemu invocation, not
# to those probes.
if ! echo "$@" | grep -sqE -- '-help|-version|-device \?' ; then
  gdbserver="gdbserver :1234"
fi

exec $gdbserver /usr/bin/qemu-system-x86_64 "$@"

Set your environment variables so libguestfs will use the qemu wrapper instead of running qemu directly:

$ export LIBGUESTFS_BACKEND=direct
$ export LIBGUESTFS_HV=/path/to/qemu-wrapper

Now we run guestfish or another virt tool as normal:

$ guestfish -a /dev/null -v -x run

When qemu starts up, gdbserver will run and halt the process, printing:

Listening on port 1234

At this point you can connect gdb:

$ gdb
(gdb) file /usr/bin/qemu-system-x86_64
(gdb) target remote tcp::1234
set breakpoints etc here
(gdb) cont

CentOS 7

CentOS 7 is out.

Although there are no cloud images available right now, I have put up a virt-builder image, so you can install CentOS 7 right now just by doing:

$ virt-builder centos-7.0

Super-nested KVM

Regular readers of this blog will of course be familiar with the joys of virtualization. One of those joys is nested virtualization — running a virtual machine in a virtual machine. Nested KVM is a thing too — that is, emulating the virtualization extensions in the CPU so that the second level guest gets at least some of the acceleration benefits that a normal first level guest would get.

My question is: How deeply can you nest KVM?

This is not so easy to test at the moment, so I’ve created a small project / disk image which, when booted on KVM, will launch a nested guest, which launches a nested guest, and so on until (usually) the host crashes, or you run out of memory, or your patience is exhausted by the poor performance of nested KVM.

The answer, by the way, is just 3 levels [on AMD hardware], which is rather disappointing. Hopefully this will encourage the developers to take a closer look at the bugs in nested virt.

Git repo: http://git.annexia.org/?p=supernested.git;a=summary
Binary images: http://oirase.annexia.org/supernested/

How does this work?

Building a simple appliance is easy. I’m using supermin to do that.

The problem is how does the appliance run another appliance? How do you put the same appliance inside the appliance? Obviously that’s impossible (right?)

The way it works is that, inside the Lx hypervisor, we run the L(x+1) qemu directly on /dev/sda, with a protective overlay stored in memory so that we don’t disrupt the Lx hypervisor (a rough sketch follows). Since /dev/sda literally is the appliance disk image, this all kinda works.
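
As a rough illustration (not the actual supernested code; the memory size and options here are just plausible values), the nested launch inside the appliance looks something like this:

# Boot the next level straight off our own boot disk.  snapshot=on keeps
# all writes in a temporary overlay instead of touching /dev/sda, which
# the running Lx hypervisor still depends on.  -cpu host passes the
# virtualization extensions through so the next level can use KVM too.
qemu-system-x86_64 \
  -machine accel=kvm \
  -cpu host \
  -m 512 \
  -drive file=/dev/sda,format=raw,snapshot=on \
  -nographic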
