
Reading and writing VMware .vmdk disks

(This is in answer to an IRC question, but the answer is a bit longer than I can cover in IRC)

Can you read and write at the block level in a .vmdk file? I think the questioner was asking about writing a backup/restore type tool. Using only free software, qemu can do reads. You can attach qemu-nbd to a vmdk file and that will expose the logical blocks as NBD, and you can then read at the block level using libnbd:

#!/usr/bin/python3
# Read the first sector of a .vmdk file, using qemu-nbd to
# translate the format into NBD.
import nbd

h = nbd.NBD()
# Run qemu-nbd as a subprocess, handing it a socket using the
# systemd socket activation protocol.
h.connect_systemd_socket_activation(
    ["qemu-nbd", "-t", "/var/tmp/disk.vmdk"])
print("size = %d" % h.get_size())
buf = h.pread(512, 0)   # read 512 bytes at offset 0

$ ./qemu-test.py 
size = 1073741824

The example is in Python, but libnbd would let you do this from C or other languages just as easily.

While this works fine for reading, I wouldn’t necessarily be sure that writing is safe. The vmdk format is complex, baroque and only lightly documented, and the only implementation I’d trust is the one from VMware.

So as long as you’re prepared to use a bit of closed source software and agree with the (nasty) license, VDDK is the safer choice. You can isolate your own software from VDDK using our nbdkit plugin.

#!/usr/bin/python3
# The same read, plus a write, going through nbdkit and the
# VDDK plugin instead of qemu-nbd.
import nbd

h = nbd.NBD()
# Run nbdkit as a subprocess connected over a private socket.
h.connect_command(
    ["nbdkit", "-s", "--exit-with-parent",
     "vddk", "libdir=/var/tmp/vmware-vix-disklib-distrib",
     "file=/var/tmp/disk.vmdk"])
print("size = %d" % h.get_size())
buf = h.pread(512, 0)   # read 512 bytes at offset 0
h.pwrite(buf, 512)      # write them back at offset 512

I quite like how we’re using small tools and assembling them together into a pipeline in just a few lines of code:

┌─────────┬────────┐          ┌─────────┬────────┐
│ your    │ libnbd │   NBD    │ nbdkit  │ VDDK   │
│ program │     ●──────────────➤        │        │
└─────────┴────────┘          └─────────┴────────┘
                                          disk.vmdk

One advantage of this approach is that it exposes the extents in the disk, which you can iterate over using libnbd APIs. For a backup tool this would let you save the disk efficiently, or do changed-block tracking.
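For example, here is a minimal sketch (not from the original post) which lists the extents using libnbd's block status API, reusing the same nbdkit command line as above:

#!/usr/bin/python3
# List the extents of the .vmdk using libnbd's block status API.
import nbd

h = nbd.NBD()
# Extent information must be requested before connecting.
h.add_meta_context("base:allocation")
h.connect_command(
    ["nbdkit", "-s", "--exit-with-parent",
     "vddk", "libdir=/var/tmp/vmware-vix-disklib-distrib",
     "file=/var/tmp/disk.vmdk"])

def extent(metacontext, offset, entries, err):
    if metacontext != "base:allocation":
        return
    # entries is a flat list: [length1, flags1, length2, flags2, ...]
    for i in range(0, len(entries), 2):
        hole = bool(entries[i+1] & nbd.STATE_HOLE)
        print("offset=%d length=%d hole=%s" % (offset, entries[i], hole))
        offset += entries[i]

# A real tool would loop over the disk in chunks, since the server
# may not describe the whole range in a single reply.
h.block_status(h.get_size(), 0, extent)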


nbdkit tar filter

nbdkit is our high performance, liberally licensed Network Block Device server, and OVA files are a common pseudo-standard for exporting virtual machines including their disk images.

A .ova file is really an uncompressed tar file:

$ tar tf rhel.ova
rhel.ovf
rhel-disk1.vmdk
rhel.mf

Since tar files usually store their content unmangled, this opens an interesting possibility for reading (or even writing) the embedded disk image without needing to unpack the tar. You just have to work out the offset of the disk image within the tar file. virt-v2v has used this trick for years to avoid making a copy of the disk image when importing OVAs.
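Working out the offset is easy with Python's tarfile module, for instance. A minimal sketch (the filenames are from the listing above; tarinfo.offset_data is the offset where a member's data starts):

#!/usr/bin/python3
# Find the offset and size of a disk image inside an
# uncompressed tar file.
import tarfile

with tarfile.open("rhel.ova") as tar:
    entry = tar.getmember("rhel-disk1.vmdk")
    print("offset = %d, size = %d" % (entry.offset_data, entry.size))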

nbdkit has also included a tar plugin which can access a file inside a local tar file, but what if the tar file isn't a local file (eg. it's on a webserver)? Or what if it's compressed?

To fix this I’ve turned the plugin into a filter. Using nbdkit-tar-filter you can unpack even non-local compressed tar files:

$ nbdkit curl http://example.com/qcow2.tar.xz \
         --filter=tar --filter=xz tar-entry=disk.qcow2

(To understand how filters are stacked, see my FOSDEM talk from last year). Because in this example the disk inside the tarball is a qcow2 file, it appears as qcow2 on the wire, so:

$ guestfish --ro --format=qcow2 -a nbd://localhost

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: ‘help’ for help on commands
      ‘man’ to read the manual
      ‘quit’ to quit the shell

><fs> run
><fs> list-filesystems 
/dev/sda1: ext2
><fs> mount /dev/sda1 /
><fs> ll /
total 19
drwxr-xr-x   3 root root  1024 Jul  6 20:03 .
drwxr-xr-x  19 root root  4096 Jul  9 11:01 ..
-rw-rw-r--.  1 1000 1000    11 Jul  6 20:03 hello.txt
drwx------   2 root root 12288 Jul  6 20:03 lost+found


nbdkit for loopback pt 3: loopback mounting VMware disks

nbdkit is a pluggable NBD server and it comes with a very wide range of plugins (of course you can also write your own). One of them is the VMware VDDK plugin, an interface between nbdkit and the very proprietary VMware VDDK library. The library allows you to read local VMware disks or access remote VMware servers. In this example I’m going to use it to loopback mount a VMDK file:

$ export LD_LIBRARY_PATH=~/tmp/vddk/vmware-vix-disklib-distrib/lib64
$ nbdkit -fv vddk \
    libdir=~/tmp/vddk/vmware-vix-disklib-distrib \
    file=/var/tmp/fedora.17.x86-64.20120529.vmdk

When loopback-mounting you must use a 512 byte sector size (see the mailing list for discussion):

# nbd-client -b 512 localhost /dev/nbd0
Warning: the oldstyle protocol is no longer supported.
This method now uses the newstyle protocol with a default export
Negotiation: ..size = 10240MB
Connected /dev/nbd0

Standard health warning: Loopback mounting any unknown disk is dangerous! You should use libguestfs instead as it protects you from harmful disks and also doesn’t require root.

It turns out this VMDK file contains a partitioned disk with one partition:

# fdisk -l /dev/nbd0
Disk /dev/nbd0: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 1024 bytes / 1024 bytes
Disklabel type: dos
Disk identifier: 0x000127ae

Device      Boot Start      End  Sectors Size Id Type
/dev/nbd0p1       2048 20971519 20969472  10G 83 Linux

and the partition can be mounted directly, fully writable:

# mount /dev/nbd0p1 /mnt
# ls -l /mnt
total 76
lrwxrwxrwx.  1 root root     7 May 29  2012 bin -> usr/bin
dr-xr-xr-x.  4 root root  4096 May 29  2012 boot
drwxr-xr-x.  2 root root  4096 Feb  3  2012 dev
drwxr-xr-x. 59 root root  4096 May 29  2012 etc
drwxr-xr-x.  2 root root  4096 Feb  3  2012 home
lrwxrwxrwx.  1 root root     7 May 29  2012 lib -> usr/lib
lrwxrwxrwx.  1 root root     9 May 29  2012 lib64 -> usr/lib64
[etc]
# touch /mnt/hello


virt-v2v, libguestfs and qemu remote drivers in RHEL 7

Upstream qemu can access a variety of remote disks, like NBD and Ceph. This feature is exposed in libguestfs so you can easily mount remote storage.
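For example, using the libguestfs Python bindings you can add a drive over NBD. A minimal sketch, assuming an NBD server is already running on localhost on the default port:

#!/usr/bin/python3
# Open a disk exported by an NBD server through libguestfs.
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
# "" means the default NBD export; server entries use "host:port" form.
g.add_drive("", format="raw", protocol="nbd",
            server=["localhost:10809"], readonly=True)
g.launch()
print(g.list_filesystems())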

However in RHEL 7 many of these drivers are disabled, because they’re not stable enough to support. I was asked exactly how this works, and this post is my answer — as it’s not as simple as it sounds.

There are (at least) five separate layers involved:

  1. qemu code: which block drivers are compiled into qemu, and which ones are compiled out completely.
  2. qemu block driver r/o whitelist: a whitelist of drivers that qemu allows you to use read-only.
  3. qemu block driver r/w whitelist: a whitelist of drivers that qemu allows you to use for read and write.
  4. libvirt: what libvirt enables (not covered in this discussion).
  5. libguestfs: in RHEL we patch out some qemu remote storage types using a custom patch.

Starting at the bottom of the stack, in RHEL we use ./configure --disable-* flags to disable a few features: Ceph is disabled on !x86_64 and 9pfs is disabled everywhere. This means the qemu binary won’t even contain code for those features.

If you run qemu-img --help in RHEL 7, you’ll see the drivers which are compiled into the binary:

$ rpm -qf /usr/bin/qemu-img
qemu-img-1.5.3-92.el7.x86_64
$ qemu-img --help
[...]
Supported formats: vvfat vpc vmdk vhdx vdi ssh
sheepdog rbd raw host_cdrom host_floppy host_device
file qed qcow2 qcow parallels nbd iscsi gluster dmg
tftp ftps ftp https http cloop bochs blkverify blkdebug

Although you can use all of those in qemu-img, not all of those drivers work in qemu (the hypervisor). qemu implements two whitelists. The RHEL 7 qemu-kvm.spec file looks like this:

./configure [...]
    --block-drv-rw-whitelist=qcow2,raw,file,host_device,blkdebug,nbd,iscsi,gluster,rbd \
    --block-drv-ro-whitelist=vmdk,vhdx,vpc,ssh,https

The --block-drv-rw-whitelist parameter configures the drivers for which full read and write access is permitted and supported in RHEL 7. It’s quite a short list!

Even shorter is the --block-drv-ro-whitelist parameter — drivers for which only read-only access is allowed. You can’t use qemu to open these files for write. You can use these as (read-only) backing files, but you can’t commit to those backing files.

In practice what happens is you get an error if you try to use non-whitelisted block drivers:

$ /usr/libexec/qemu-kvm -drive file=test.vpc
qemu-kvm: -drive file=test.vpc: could not open disk image
test.vpc: Driver 'vpc' can only be used for read-only devices
$ /usr/libexec/qemu-kvm -drive file=test.qcow1
qemu-kvm: -drive file=test.qcow1: could not open disk
image test.qcow1: Driver 'qcow' is not whitelisted

Note that’s a qcow v1 (ancient format) file, not modern qcow2.

Side note: Only qemu (the hypervisor) enforces the whitelist. Tools like qemu-img ignore it.

At the top of the stack, libguestfs has a patch which removes support for many remote protocols. Currently (RHEL 7.2/7.3) we disable: ftp, ftps, http, https, tftp, gluster, iscsi, sheepdog, ssh. That leaves only: local file, rbd (Ceph) and NBD enabled.

virt-v2v uses a mixture of libguestfs and qemu-img to convert VMware and Xen guests to run on KVM. To access VMware we need to use https and to access Xen we use ssh. Both of those drivers are disabled in libguestfs, and only available read-only in the qemu whitelist. However that’s sufficient for virt-v2v, since all it does is add the https or ssh driver as a read-only backing file. (If you are interested in finding out more about how virt-v2v works, then I gave a talk about it at the KVM Forum which is available online).

In summary — it’s complicated.


Tip: Read guest disks from VMware vCenter using libguestfs

virt-v2v can import guests directly from vCenter. It uses all sorts of tricks to make this fast and efficient, but the basic technique uses plain https range requests.

Making it all work was not so easy and involved a lot of experimentation and bug fixing, and I don’t think it has been documented up to now. So this post describes how we do it. As usual the code is the ultimate repository of our knowledge so you may want to consult that after reading this introduction.

Note this is read-only access. Write access is possible, but you’ll have to use ssh instead.

VMware ESXi hypervisor has a web server but doesn’t support range requests, so although you can download an entire disk image in one go from the ESXi hypervisor, to random-access the image using libguestfs you will need VMware vCenter. You should check that virsh dumpxml works against your vCenter instance by following these instructions. If that doesn’t work, it’s unlikely the rest of the instructions will work.

You will need to know:

  1. The hostname or IP address of your vCenter server,
  2. the username and password for vCenter,
  3. the name of your datacenter (probably Datacenter),
  4. the name of the datastore containing your guest (could be datastore1),
  5. .. and of course the name of your guest.

Tricky step 1 is to construct the vCenter https URL of your guest.

This looks like:

https://root:password@vcenter/folder/guest/guest-flat.vmdk?dcPath=Datacenter&dsName=datastore1

where:

  root:password    username and password
  vcenter          vCenter hostname or IP address
  guest            guest name (repeated twice)
  Datacenter       datacenter name
  datastore1       datastore name
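If the guest name, datacenter or datastore contains spaces or other special characters, the parts must be percent-escaped. A minimal sketch of building the URL (all the values here are placeholders):

#!/usr/bin/python3
# Build the vCenter URL from its parts, escaping each part.
from urllib.parse import quote

vcenter = "vcenter"
datacenter = "Datacenter"
datastore = "datastore1"
guest = "guest"

url = ("https://root:password@%s/folder/%s/%s-flat.vmdk"
       "?dcPath=%s&dsName=%s"
       % (vcenter, quote(guest), quote(guest),
          quote(datacenter), quote(datastore)))
print(url)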

Once you’ve got a URL that looks right, try to fetch the headers using curl. This step is important, not just because it checks the URL is good, but because it lets us get a cookie, which is required: without it vCenter will break under the load when we start to access it for real.

$ curl --insecure -I https://....
HTTP/1.1 200 OK
Date: Wed, 5 Nov 2014 19:38:32 GMT
Set-Cookie: vmware_soap_session="52a3a513-7fba-ef0e-5b36-c18d88d71b14"; Path=/; HttpOnly; Secure; 
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Type: application/octet-stream
Content-Length: 8589934592

The cookie is the vmware_soap_session=... part including the quotes.
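To illustrate what's going on underneath, here is a sketch (not from the original post; the URL and cookie are placeholders) showing that the cookie plus a Range header is all you need to read arbitrary bytes from the disk:

#!/usr/bin/python3
# Read the first 512 bytes of the remote disk using a plain
# https range request, authenticated by the cookie.
import ssl
from urllib.request import Request, urlopen

url = "https://vcenter/folder/guest/guest-flat.vmdk?dcPath=Datacenter&dsName=datastore1"
cookie = 'vmware_soap_session="..."'   # from the Set-Cookie header

req = Request(url, headers={"Cookie": cookie,
                            "Range": "bytes=0-511"})
ctx = ssl.create_default_context()     # like curl --insecure:
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with urlopen(req, context=ctx) as resp:
    buf = resp.read()                  # expect 512 bytes (HTTP 206)
print("read %d bytes" % len(buf))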

Now let’s make a qcow2 overlay which encodes our https URL and the cookie as the backing file. This requires a reasonably recent qemu, probably 2.1 or above.

$ qemu-img create -f qcow2 /tmp/overlay.qcow2 \
    -b 'json: { "file.driver":"https",
                "file.url":"https://..",
                "file.cookie":"vmware_soap_session=\"...\"",
                "file.sslverify":"off",
                "file.timeout":1000 }'

You don’t need to include the password in the URL here, since the cookie acts as your authentication. You might also want to play with the "file.readahead" parameter. We found it makes a big difference to throughput.

Now you can open the overlay file in guestfish as usual:

$ export LIBGUESTFS_BACKEND=direct
$ guestfish
><fs> add /tmp/overlay.qcow2 copyonread:true
><fs> run
><fs> list-filesystems
/dev/sda1: ext4
><fs> mount /dev/sda1 /

and so on.


New in libguestfs 1.27.34 – virt-v2v and virt-p2v

There haven’t been too many updates around here for a while, and that’s for a very good reason: I’ve been “heads down” writing the new versions of virt-v2v and virt-p2v, our tools for converting VMware and Xen virtual machines, or physical machines, to run on KVM.

The new virt-v2v [manual page] can slurp in a guest from a local disk image, local Xen, VMware vCenter, or (soon) an OVA file — convert it to run on KVM — and write it out to RHEV-M, OpenStack Glance, local libvirt or as a plain disk image.

It’s easy to use too. Unlike the old virt-v2v there are no hairy configuration files to edit or complicated preparations. You simply do:

$ virt-v2v -i disk xen_disk.img -o local -os /tmp

That command (which doesn’t need root, naturally) takes the Xen disk image, which could be any supported Windows or Enterprise Linux distro, converts it to run on KVM (eg. installing virtio drivers, adjusting dozens of configuration files), and writes it out to /tmp.

To connect to a VMware vCenter server, change the -i options to:

$ virt-v2v -ic vpx://vcenter/Datacenter/esxi "esx guest name" [-o ...]

To output the converted disk image to OpenStack Glance, change the -o options to:

$ virt-v2v [-i ...] -o glance [-on glance_image_name]

Coming up: The new technology we’ve used to make virt-v2v much faster.


Notes on getting VMware ESXi to run under KVM

This is mostly adapted from this long thread on the VMware community site.

I got VMware ESXi 5.5.0 running on upstream KVM today.

First I had to disable the “VMware backdoor”. When VMware runs, it detects that qemu underneath is emulating this port and tries to use it to query the machine (instead of using CPUID and so on). Unfortunately qemu’s emulation of the VMware backdoor is very half-assed. There’s no way to disable it except to patch qemu:

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index eaf3e61..ca1c422 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -204,7 +204,7 @@ static void pc_init1(QEMUMachineInitArgs *args,
     pc_vga_init(isa_bus, pci_enabled ? pci_bus : NULL);
 
     /* init basic PC hardware */
-    pc_basic_device_init(isa_bus, gsi, &rtc_state, &floppy, xen_enabled(),
+    pc_basic_device_init(isa_bus, gsi, &rtc_state, &floppy, 1,
         0x4);
 
     pc_nic_init(isa_bus, pci_bus);

It would be nice if this was configurable in qemu. This is now being fixed upstream.

Secondly I had to turn off MSR emulation. This is, unfortunately, a machine-wide setting:

# echo 1 > /sys/module/kvm/parameters/ignore_msrs
# cat /sys/module/kvm/parameters/ignore_msrs
Y

Thirdly I had to give the ESXi virtual machine an IDE disk and an e1000 network card. Note also that ESXi requires ≥ 2 vCPUs and at least 2 GB of RAM.
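A qemu command line along these lines should cover all of that. Treat it as a sketch only: the disk image and installer ISO names are placeholders:

$ qemu-system-x86_64 -machine accel=kvm -cpu host -smp 2 -m 4096 \
    -drive file=esxi.img,format=raw,if=ide \
    -netdev user,id=net0 -device e1000,netdev=net0 \
    -cdrom VMware-VMvisor-Installer-5.5.0.iso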


VMware VDDK plugin for nbdkit

VDDK is a horribly proprietary library that VMware has released to let you open VMDK files and access the disks of ESX servers. It has some NBD capability already, but that’s not stopped me from creating a VDDK plugin for nbdkit so you can access VMware resources over NBD.

Make sure you read the README.VDDK file first …


Tip: Use libguestfs on VMware ESX guests

You can use libguestfs, guestfish and the virt tools on VMware ESX guests quite easily. However it’s not obvious how to do it, so this post explains that.

You will need:

  • libguestfs tools installed on a Linux machine
  • sshfs installed on the same Linux machine
  • ssh access to the VMware ESX storage (find the root password from the administrator)
  • the name of the guest and the name of the storage volume that the guest is stored on

The guest must be shut down (more on this later).

First of all, make sure you are able to ssh as root to the VMware ESX storage. It will look something like this:

$ ssh root@vmware
root@vmware's password: ****
Last login: Wed May  4 20:47:50 2011 from [...]
[root@vmware ~]# ls -l /vmfs/
total 1
drwxr-xr-x 1 root root 512 May 10 09:22 devices
drwxr-xr-x 1 root root 512 May 10 09:22 volumes

Now you should create a temporary mount point, and mount /vmfs from the VMware ESX storage server using sshfs. The command is quite simple and you don’t need to be root on the Linux side:

$ mkdir /tmp/vmfs
$ sshfs root@vmware:/vmfs /tmp/vmfs
root@vmware's password: ****
$

In another window you can navigate to the guest. For example if the guest was called “test” and it lived on volume “Storage1” then:

$ cd /tmp/vmfs/volumes/Storage1/test
$ ls -l
total 1718720
-rw------- 1 root root 8589934592 May 10 09:48 test-flat.vmdk
-rw------- 1 root root       8684 May 10 09:37 test.nvram
-rw------- 1 root root        469 Apr  4 08:16 test.vmdk
-rw------- 1 root root          0 May 11  2010 test.vmsd
-rwxr-xr-x 1 root root       2666 May 10 09:37 test.vmx
-rw------- 1 root root        259 May 11  2010 test.vmxf
-rw-r--r-- 1 root root      53966 May 11  2010 vmware-1.log
-rw-r--r-- 1 root root      78771 May 11  2010 vmware-2.log
-rw-r--r-- 1 root root      56483 Apr  4 08:15 vmware-3.log
-rw-r--r-- 1 root root      56305 May 10 09:37 vmware.log

The critical file is guestname-flat.vmdk, which is the flat disk image. You can just open this for read or write using guestfish, virt-df, virt-filesystems or other libguestfs tools or programs.

For example:

$ guestfish --rw -i -a test-flat.vmdk

Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

Operating system: Red Hat Enterprise Linux Server release 5.5 (Tikanga)
/dev/VolGroup00/LogVol00 mounted on /
/dev/vda1 mounted on /boot

><fs> touch /tmp/hello
><fs> ll /tmp
total 20
drwxrwxrwt.  3 root root 4096 May 10 14:48 .
drwxr-xr-x. 24 root root 4096 May 10 14:36 ..
drwxrwxrwt   2 root root 4096 Apr  4 13:16 .ICE-unix
-rw-r--r--   1 root root    0 May 10 14:48 hello

Notice that guestfish determined the guest operating system and lets you edit the disk.

$ virt-filesystems -a test-flat.vmdk --all --long -h
Name                     Type       VFS  Label Size Parent
/dev/sda1                filesystem ext3 /boot 102M -
/dev/VolGroup00/LogVol00 filesystem ext3 -     7.1G -
/dev/VolGroup00/LogVol01 filesystem swap -     768M -
/dev/VolGroup00/LogVol00 lv         -    -     7.1G /dev/VolGroup00
/dev/VolGroup00/LogVol01 lv         -    -     768M /dev/VolGroup00
/dev/VolGroup00          vg         -    -     7.9G -
/dev/sda2                pv         -    -     7.9G -
/dev/sda1                partition  -    -     102M /dev/sda
/dev/sda2                partition  -    -     7.9G /dev/sda
/dev/sda                 device     -    -     8.0G -
$ virt-df -a test-flat.vmdk -h
Filesystem                                Size       Used  Available  Use%
test-flat.vmdk:/dev/sda1                   99M        12M        81M   13%
test-flat.vmdk:/dev/VolGroup00/LogVol00
                                          6.9G       1.1G       5.5G   16%

With libguestfs we usually allow you to read guests which are running. The results might be inconsistent at times, but it generally works. However VMware itself doesn’t allow running guests to be read. If the guest is running you can see that VMware prevents access:

# file test-flat.vmdk
test-flat.vmdk: writable, regular file, no read permission

Whereas when the same guest is shut down, reads (and writes) are allowed:

# file test-flat.vmdk
test-flat.vmdk: x86 boot sector; partition 1: ID=0x83, active, starthead 1, startsector 63, 208782 sectors; partition 2: ID=0x8e, starthead 0, startsector 208845, 16563015 sectors, code offset 0x48

This is a limitation of VMware and nothing to do with libguestfs.

A note on performance: I run this from my home to a VMware server which is a third of the way around the planet over plain 2Mbps ADSL. It’s noticeably slower than accessing local disk images, but still very usable. sshfs appears to be very efficiently implemented. It is far faster and more convenient than copying the whole disk image around.


Tip: Install a device driver in a Windows VM

Previously we looked at how to install a service in a Windows VM. You can use that technique or the RunOnce tip to install some device drivers too.

But what if Windows needs the device driver in order to boot? This is the problem we faced with converting old Xen and VMware guests to use KVM. You can't install viostor (the virtio disk driver, which KVM needs) either on the source Xen/VMware hypervisors (because those don't use the virtio standard) or on the destination KVM hypervisor (because Windows needs to be able to see the disk first in order to be able to boot).

Nevertheless we can modify the Windows VM offline using libguestfs to install the virtio device driver and allow it to boot.

(Note: virt-v2v will do this for you. This article is for those interested in how it works).

There are three different aspects to installing a device driver in Windows. Two of these are Windows Registry changes, and one is to install the .SYS file (the device driver itself).

So first we make the two Registry changes. Device drivers are a bit like services under Windows, so the first change looks like installing a service in a Windows guest. The second Registry change adds viostor to the “critical device database”, a map of PCI addresses to device drivers used by Windows at boot time:

# virt-win-reg --merge Windows7x64

;
; Add the viostor service
;

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor]
"Group"="SCSI miniport"
"ImagePath"=hex(2):73,00,79,00,73,00,74,00,65,00,6d,00,33,00,32,00,5c,00,64,\
  00,72,00,69,00,76,00,65,00,72,00,73,00,5c,00,76,00,69,00,6f,00,73,00,74,00,6f,\
  00,72,00,2e,00,73,00,79,00,73,00,00,00
"ErrorControl"=dword:00000001
"Start"=dword:00000000
"Type"=dword:00000001
"Tag"=dword:00000040

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor\Parameters]
"BusType"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor\Parameters\MaxTransferSize]
"ParamDesc"="Maximum Transfer Size"
"type"="enum"
"default"="0"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor\Parameters\MaxTransferSize\enum]
"0"="64  KB"
"1"="128 KB"
"2"="256 KB"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor\Parameters\PnpInterface]
"5"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor\Enum]
"0"="PCI\\VEN_1AF4&DEV_1001&SUBSYS_00021AF4&REV_00\\3&13c0b0c5&2&20"
"Count"=dword:00000001
"NextInstance"=dword:00000001

;
; Add viostor to the critical device database
;

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\PCI#VEN_1AF4&DEV_1001&SUBSYS_00000000]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\PCI#VEN_1AF4&DEV_1001&SUBSYS_00020000]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\PCI#VEN_1AF4&DEV_1001&SUBSYS_00021AF4]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"

Comparatively speaking, the second step of uploading viostor.sys to the right place in the image is simple:

# guestfish -i Windows7x64
><fs> upload viostor.sys /Windows/System32/drivers/viostor.sys

After that, the Windows guest can be booted on KVM using virtio. In virt-v2v we then reinstall the viostor driver (along with other drivers like the virtio network driver) so that we can be sure they are all installed correctly.
