Tag Archives: virtio


How do you talk to a virtual machine from the host? How does the virtual machine talk to the host? In one sense the answer is obvious: virtual machines should be thought of just like regular machines, so you use the network. However the connection between host and guest is a bit more special. Suppose you want to pass a host directory up to the guest? You could use NFS, but that’s sucky to set up and you’ll have to fiddle around with firewalls and ports. Or suppose you run a guest agent that reports stats back to the hypervisor. How do they talk? Over the network, sure, but again that requires an extra network interface and the guest has to explicitly set up firewall rules.

A few years ago my colleague Stefan Hajnoczi ported VMware’s vsock to qemu. It’s a pure guest⟷host (and guest⟷guest) sockets API. It doesn’t use regular networks so no firewall issues or guest network configuration to worry about.

You can run NFS over vsock [PDF] if you want.

And now you can of course run NBD over vsock. nbdkit supports it, and libnbd is (currently!) the only client.
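By way of illustration, here is roughly what that looks like (these exact commands are not from the original post: the export size is invented, though 10809 is the usual NBD port and CID 2 always refers to the host in AF_VSOCK):

```
host#  nbdkit --vsock memory 1G
guest# nbdsh -c 'h.connect_vsock(2, 10809)' -c 'print(h.get_size())'
```

The guest side uses nbdsh from libnbd, whose handle method connect_vsock takes the peer CID and port.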


Filed under Uncategorized

Tip: virt-install Windows with virtio device drivers

You have to unset these variables because of a long-standing bug in SPICE:

# unset http_proxy
# unset https_proxy

You can’t use virt-install’s --cdrom option twice, because virt-install ignores the second use of the option and only adds a single CD-ROM to the guest. Instead, use --disk ...,device=cdrom,bus=ide:

# virt-install --name=w81-virtio --ram=4096 \
    --cpu=host --vcpus=2 \
    --os-type=windows --os-variant=win8.1 \
    --disk /dev/VG/w81-virtio,bus=virtio \
    --disk en-gb_windows_8.1_pro_n_vl_with_update_x64_dvd_6050975.iso,device=cdrom,bus=ide \
    --disk /usr/share/virtio-win/virtio-win.iso,device=cdrom,bus=ide

During the install you’ll have to select the “Load driver” option and load the right viostor driver from the second CD-ROM (E:).


How are Linux drives named beyond drive 26 (/dev/sdz, ..)?

[Edit: Thanks to adrianmonk for correcting my math]

It’s surprisingly hard to find a definitive answer to the question of what happens with Linux block device names when you get past drive 26 (i.e. counting from one, the first disk is /dev/sda and the 26th disk is /dev/sdz, so what comes next?). I need to find out because libguestfs is currently limited to 25 disks, and this really needs to be fixed.

Anyhow, looking at the code we can see that it depends on which driver is in use.

For virtio-blk (/dev/vd*) the answer is:

Drive #   Name
      1   vda
     26   vdz
     27   vdaa
     28   vdab
     52   vdaz
     53   vdba
     54   vdbb
    702   vdzz
    703   vdaaa
    704   vdaab
  18278   vdzzz

Beyond 18278 drives the virtio-blk code would fail, but that’s not currently an issue.
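The naming loop itself is tiny. Here is a shell sketch of it (the function name is mine, not anything from the kernel), mirroring the bijective base-26 scheme shown in the table above:

```shell
# Compute the virtio-blk device name for a 1-based drive index:
# a..z, then aa..zz, then aaa..zzz.
virtio_blk_name () {
    local letters=abcdefghijklmnopqrstuvwxyz
    local index=$1 suffix=""
    while [ "$index" -gt 0 ]; do
        index=$((index - 1))                      # make this digit 0-based
        suffix=${letters:$((index % 26)):1}$suffix
        index=$((index / 26))
    done
    echo "vd$suffix"
}
```

For example, `virtio_blk_name 703` prints vdaaa, matching the table.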

For SATA and SCSI drives under a modern Linux kernel, the same as above applies, except that the code to derive names works properly beyond sdzzz, up to (in theory) sd followed by 29 z's! [Edit: or maybe not?]

As you can see, virtio and SCSI/SATA don’t use common code to name disks. In fact there are also many other block devices in the kernel, all using their own naming schemes. Most of these use numbers instead of letters, e.g. /dev/loop0, /dev/ram0, /dev/mmcblk0 and so on.

If disks are partitioned, then the partitions are named by adding the partition number on the end (counting from 1). But if the drive name already ends with a number then a letter p is added between the drive name and the partition number, thus: /dev/mmcblk0p1.
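That rule is easy to capture in a few lines of shell (again, the function name is made up for illustration):

```shell
# Name a partition: append the partition number, inserting "p" when
# the drive name itself ends in a digit so two numbers don't run together.
partition_name () {
    local disk=$1 num=$2
    case "$disk" in
        *[0-9]) echo "${disk}p${num}" ;;
        *)      echo "${disk}${num}" ;;
    esac
}
```

So `partition_name mmcblk0 1` prints mmcblk0p1, while `partition_name sda 1` prints sda1.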



Virtio balloon

After someone asked me a question about “balloons” (in the virtualization sense) today, I noticed that there is not very much documentation around. This post explains what the KVM virtio_balloon driver is all about.

First of all, what is a balloon driver, if you’ve never even heard of the concept? It’s a way to give RAM to or take RAM from a guest. In theory at least, if your guest needs more RAM, you can use the balloon driver to give it more RAM. Or if the host needs to take RAM away from guests, it can do so. All of this is done without needing to pause or reboot the guest.

You might think that this would work as a RAM “hot add” feature, rather like hot adding disks to a guest. Although RAM hot add would (IMHO) be much better, currently this is not how ballooning works.

What we have is a kernel driver inside the guest called virtio_balloon. This driver acts like a kind of weird process, either expanding its own memory usage or shrinking down to nearly nothing.

When the balloon driver expands, normal applications running in the guest suddenly have a lot less memory and the guest does the usual things it does when there’s not much memory, including swapping stuff out and starting up the OOM killer. (The balloon itself is non-swappable and un-killable in case you were wondering).

So what’s the point of a kernel driver which wastes memory? There are two points: Firstly, the driver communicates with the host (over the virtio channel), and the host gives it instructions (“expand to this size”, “shrink down now”). The guest cooperates, but doesn’t directly control the balloon.

Secondly, memory pages in the balloon are unmapped from the guest and handed back to the host, so the host can hand them out to other guests. It’s like the guest’s memory has a chunk missing from it.

Libvirt has two settings you can control, called <memory> and <currentMemory> in the libvirt XML:

<memory> is the maximum memory, allocated to the guest at boot time. KVM and Xen guests currently cannot exceed this. <currentMemory> is the amount of memory you are asking to make available to the guest’s applications. The balloon fills the rest and gives it back to the host for the host to use elsewhere.
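In the domain XML these two settings look something like this (sizes are in KiB by default; the 2 GiB boot allocation with a 1 GiB balloon target is just an invented example):

```xml
<domain type='kvm'>
  <name>guest</name>
  <!-- allocated at boot: the guest cannot grow beyond this -->
  <memory unit='KiB'>2097152</memory>
  <!-- balloon target: what the guest's applications actually get -->
  <currentMemory unit='KiB'>1048576</currentMemory>
  <!-- rest of the domain definition omitted -->
</domain>
```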

You can adjust this manually for your guests, either by editing the XML, or by using the virsh setmem command.
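For example (the guest name and size here are invented; virsh setmem takes KiB by default, and virsh dommemstat shows the current balloon statistics):

```
# virsh setmem myguest 1048576 --live
# virsh dommemstat myguest
```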



Tip: Install a device driver in a Windows VM

Previously we looked at how to install a service in a Windows VM. You can use that technique or the RunOnce tip to install some device drivers too.

But what if Windows needs the device driver in order to boot? This is the problem we faced when converting old Xen and VMware guests to use KVM. You can’t install viostor (the virtio disk driver, which KVM needs) on the source Xen/VMware hypervisors (because those don’t use the virtio standard), or on the destination KVM hypervisor (because Windows needs to be able to see the disk first in order to be able to boot).

Nevertheless we can modify the Windows VM offline using libguestfs to install the virtio device driver and allow it to boot.

(Note: virt-v2v will do this for you. This article is for those interested in how it works).

There are three different aspects to installing a device driver in Windows. Two of these are Windows Registry changes, and one is to install the .SYS file (the device driver itself).

So first we make the two Registry changes. Device drivers are a bit like services under Windows, so the first change looks like installing a service in a Windows guest. The second Registry change adds viostor to the “critical device database”, a map of PCI addresses to device drivers used by Windows at boot time:

# virt-win-reg --merge Windows7x64

; Add the viostor service

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\viostor]
"Type"=dword:00000001
"Start"=dword:00000000
"Group"="SCSI miniport"
"ErrorControl"=dword:00000001
"ImagePath"="system32\\drivers\\viostor.sys"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\viostor\Parameters]
"BusType"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\viostor\Parameters\MaxTransferSize]
"ParamDesc"="Maximum Transfer Size"
"type"="enum"
"default"="0"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\viostor\Parameters\MaxTransferSize\enum]
"0"="64  KB"
"1"="128 KB"
"2"="256 KB"

; Add viostor to the critical device database

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021af4&rev_00]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"

Comparatively speaking, the second step of uploading viostor.sys to the right place in the image is simple:

# guestfish -i Windows7x64
><fs> upload viostor.sys /Windows/System32/drivers/viostor.sys

After that, the Windows guest can be booted on KVM using virtio. In virt-v2v we then reinstall the viostor driver (along with other drivers like the virtio network driver) so that we can be sure they are all installed correctly.



Time filesystem operations using guestfish

A new feature we added to guestfish this week was the ability to time individual operations. We added this after we found a bug in the virtio block drivers where operations like mkfs.ext2 would run much more slowly than using the emulated IDE drivers in KVM.

It’s quite hard to characterize these problems, unless you can write short scripts to predictably reproduce them, and now with guestfish you can write such scripts.

You will need guestfish >= 1.0.55 to try out the examples below.

To demonstrate the mkfs.ext2 problem, save this script to a file and chmod +x it:

#!/usr/bin/guestfish -f
!dd if=/dev/zero of=/tmp/test.img bs=1024k count=100
config -drive file=/tmp/test.img,cache=off,if=virtio
#append elevator=noop
run
debug sh "for f in /sys/block/[hsv]d*/queue/rotational; do echo 0 > $f; done"
sfdiskM /dev/sda ,
time mkfs ext2 /dev/sda1

Run the script on the host (you don’t need to be root), and it will print the elapsed time of just the final mkfs command.

You can now play with variations on if=virtio|ide|scsi, different elevator=noop|cfq|.. algorithms in the guest, and writing 0 or 1 to the rotational knob.

For example:

if=virtio, (default elevator), rotational=0: mkfs takes 2.86 seconds*
if=virtio, (default elevator), rotational=1: mkfs takes 0.20 seconds
if=ide, (default elevator), rotational=0: mkfs takes 0.14 seconds
if=ide, (default elevator), rotational=1: mkfs takes 0.15 seconds
if=ide, elevator=noop, rotational=1: mkfs takes 0.20 seconds

* This dismal number is what caused us to file the original bug.

This is, I think, a relatively easy way to try out the effects of different combinations of drivers and settings on particular filesystem operations. Each test run takes around 30 seconds or so, and in just a few minutes you can easily explore quite a range of settings.
