- You can now download RHEL 7.2 preview packages for virt-v2v
- Windows 8, 8.1, 2012 and 2012R2 are supported: I finally worked out the complicated series of Windows registry edits needed to make this work.
- Guests with sound cards are now supported.
- Guests with VNC display port information are now supported (thanks Pino Toscano).
- There is now a test harness for virt-v2v.
- Integration with oVirt and RHEV-M.
This video shows using the GUI to import a virtual machine from VMware to RHEV-M. It performs the conversion using virt-v2v, which is responsible for installing virtio drivers, fixing the bootloader, and so forth.
Thanks Arik Hadas. Now I just have to fix the epic RHEL 7.2 bug list — 57 bugs at last count :-(
virt-builder now has Fedora 21 ppc64 and ppc64le images available, and you can run these under emulation on an x86-64 host. Here’s how to do it:
$ virt-builder --arch ppc64 fedora-21 \
    -o fedora-21-ppc64.img
$ virt-builder --arch ppc64le fedora-21 \
    -o fedora-21-ppc64le.img
To boot them:
$ qemu-system-ppc64 -M pseries -cpu POWER8 -m 4096 \
    -drive file=fedora-21-ppc64[le].img \
    -serial stdio
Oddly the boot messages will appear on the GUI, but the login prompt will only appear on the serial console. (Fixed)
Libvirt also has support, so with a sufficiently new version of the toolchain you can also use:
$ virt-install --import --name=guestname \
    --ram=4096 --vcpus=1 \
    --os-type=linux --os-variant=fedora21 \
    --arch=ppc64[le] --machine pseries \
    --disk=fedora-21-ppc64[le].img,format=raw
$ virsh start guestname
It’s quite fun to play with Big Iron, even in an emulator that runs at about 1/1000th the speed of the real thing. I know a lot about this, because we have POWER8 machines at Red Hat, and they really are the fastest computers alive, by a significant multiple. Of course, they also cost a fortune and use huge amounts of power.
Some random observations:
- The virt-builder --size parameter cannot resize the ppc64 guest filesystem correctly, because Anaconda uses an extended partition. The workaround is either to add a second disk or to create another partition in the extra space.
- The disks are ibmvscsi model (not virtio or ide). This is the default, but something to think about if you edit or create the libvirt XML manually.
- Somehow the same CPU/machine model works for both Big Endian and Little Endian guests. It must somehow auto-detect the guest type, but I couldn’t work out how that works. Anyway, it just works by magic. (Update: it’s done by the kernel.)
- libguestfs inspection is broken for ppc64le
- Because TCG (qemu software emulation) is single threaded, only use a single vCPU. If you use more, it’ll actually slow the thing down.
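The second-disk workaround mentioned above can be sketched like this (the disk filename and 10G size are my own examples, not from the original post):

```shell
# Workaround sketch for the --size limitation: rather than resizing the
# ppc64 image, create a second blank disk and attach it to the guest.
truncate -s 10G fedora-21-ppc64-data.img    # sparse 10G data disk
# Then attach it alongside the main image, e.g.:
#   qemu-system-ppc64 -M pseries -cpu POWER8 -m 4096 \
#       -drive file=fedora-21-ppc64.img \
#       -drive file=fedora-21-ppc64-data.img \
#       -serial stdio
```

Inside the guest the new disk appears empty, so you can partition and format it without touching the extended partition Anaconda created.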
Thanks: Maros Zatko for working out the virt-install command line and implementing the virt-builder script to build the images.
You have to unset these variables because of a long-standing bug in SPICE:
# unset http_proxy
# unset https_proxy
You can’t use virt-install’s --cdrom option twice, because virt-install ignores the second use of the option and only adds a single CD-ROM to the guest. Instead, use a --disk option with device=cdrom for each ISO:
# virt-install --name=w81-virtio --ram=4096 \
    --cpu=host --vcpus=2 \
    --os-type=windows --os-variant=win8.1 \
    --disk /dev/VG/w81-virtio,bus=virtio \
    --disk en-gb_windows_8.1_pro_n_vl_with_update_x64_dvd_6050975.iso,device=cdrom,bus=ide \
    --disk /usr/share/virtio-win/virtio-win.iso,device=cdrom,bus=ide
During the install you’ll have to select the “Load driver” option and load the right viostor driver from the second CD-ROM (E:).
A few years ago Dan Berrange added a way to send fake keyboard events to libvirt guests. You can use this to inject just a press on the Left Shift key to wake up a guest from screen blank. Very useful if you need to take a screenshot!
$ virsh send-key guest KEY_LEFTSHIFT
$ sleep 1
$ virsh screenshot guest /tmp/screenshot.ppm
Update: A word of warning though. If you try this for Windows guests you’ll hit this message:
The solution is to hit other keys randomly. Grrr.
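So for Windows guests a wrapper that sends several different keys before the screenshot might look like this. It's only a sketch: the key list and guest name are arbitrary, and the DRY_RUN switch is my own addition so the script can be tried without libvirt present:

```shell
# Sketch: wake a guest by sending several *different* keys (Windows
# appears to ignore a repeated identical keypress), then screenshot it.
# With DRY_RUN=1 the commands are printed instead of executed.
run () {
    if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi
}
wake_and_screenshot () {
    guest=$1
    for key in KEY_LEFTSHIFT KEY_LEFTCTRL KEY_ESC; do
        run virsh send-key "$guest" "$key"
        run sleep 1
    done
    run virsh screenshot "$guest" /tmp/screenshot.ppm
}
DRY_RUN=1 wake_and_screenshot guest   # dry run; unset DRY_RUN to run for real
```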
See end of post for an important update
UEFI firmware has a concept of persistent variables. They are used to control the boot order amongst other things. They are stored in non-volatile RAM on the system board, or for virtual machines in a host file.
These programs don’t actually edit the varstore directly. They access the kernel /sys/firmware/efi interface, but even the kernel doesn’t edit the varstore. It just redirects to the UEFI runtime “Variable Services”, so what is really running is UEFI code (possibly proprietary, but more usually from the open source TianoCore project).
So how can you edit varstores offline? The NVRAM file format is peculiar to say the least, and the only real specification is the code that writes it from Tianocore. So somehow you must reuse that code. To make it more complicated, the varstore NVRAM format is tied to the specific firmware that uses it, so varstores used on aarch64 aren’t compatible with those on x86-64, nor are SecureBoot varstores compatible with normal ones.
virt-efivars is an attempt to do that. It’s rather “meta”. You write a small editor program (an example is included), and virt-efivars compiles it into a tiny appliance. You then boot the appliance using qemu + UEFI firmware + varstore combination, the editor program runs and edits the varstore, using the UEFI code.
It works … at least on aarch64, which is the only convenient machine I have that has virtualized UEFI.
After studying this problem some more, Laszlo Ersek came up with a different and better plan:
- Boot qemu with only the OVMF code & varstore attached. No OS or appliance.
- This should drop you into a UEFI shell which is accessible over qemu’s serial port.
- Send appropriate setvar commands to update the variables. Using expect, this should be automatable.
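Here is a rough idea of what that expect automation might look like. Everything in it is hypothetical — the firmware filenames, the variable name and GUID, and the exact "Shell>" prompt all depend on your setup — so the script is only written out here, not executed:

```shell
# Generate (but don't run) a hypothetical expect script: boot qemu with
# only the OVMF code and varstore attached, wait for the UEFI shell on
# the serial console, and issue a setvar command. All names are examples.
cat > edit-varstore.expect <<'EOF'
spawn qemu-system-x86_64 -nographic \
    -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=my_vars.fd
set timeout 120
expect "Shell>"
send "setvar Example -guid 12345678-1234-1234-1234-123456789abc -bs -rt =\"hello\"\r"
expect "Shell>"
send "reset -s\r"
expect eof
EOF
echo "wrote edit-varstore.expect; run it with: expect edit-varstore.expect"
```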
An awful lot of noise and nonsense is being made about this bug. Here are a couple of facts:
- The bug was never in any released version of RHEL.
- It was caught during Red Hat’s internal QA process. The bug report is filed by a Red Hat tester.
In other words, the system works. Anyone who says “this is a bug in RHEL” or “Red Hat is releasing buggy software that will eat your hard drive” is lying to you.