Put it in your calendars: May 28th is Fedora 19 virtualization test day.
Every day is libguestfs test day. Just follow the instructions here.
Thursday (1st Nov) is Fedora virtualization test day. Help us out by testing libguestfs!
Fedora 18 has definitely been a struggle. It is possibly the most delayed Fedora release ever. In libguestfs (in Fedora only) we switched to using libvirt to launch the appliance, revealing a lot of bugs and problems in libvirt in the process.
At the same time we’ve added dozens of major new features to libguestfs.
So there’s likely to be a lot of bugs, and you can make a difference.
Get together to test all aspects of Fedora virt (including libguestfs) with like-minded people in about a month’s time.
More information here: https://fedoraproject.org/wiki/Test_Day:2012-11-01_Virtualization
We’re rerunning the test day on Tuesday 25th Sept. This is not for lack of interest, but because the previous test day was so successful that it overran a bit.
Using v2p to get around Oracle support contracts
Problem: Oracle won’t support the database in a virtualized environment. If you report a bug, they’ll ask you to reproduce it on a supported (ie. physical) machine.
Wrong solution: We’ll run Oracle in a VM. When we run into trouble, we’ll use a V2P tool to convert the virtual machine to a physical machine!
Why this is wrong: Conversion involves copying the disks, ripping out device drivers, adding new device drivers, fiddling with configuration files, doing resize ops on filesystems, and reinstalling the boot loader. These are (a) slow, (b) very intrusive, and (c) liable to break. This is all a recipe for turning a small disaster (ie. my database is down) into a very big disaster (my database is still down and the hairy support “solution” took 6 hours and didn’t work).
Good solution: Oracle are probably right that you shouldn’t try to run your database virtualized. But assuming you want to ignore that advice, put your database and its files onto a separate SAN LUN. When you need support, detach the LUN from the virtual machine and reattach it to a physical machine. This operation should be instantaneous and doesn’t involve any modification of the data.
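For a libvirt-managed KVM guest, the detach side really is just one command. This is only a sketch: the domain name, disk target and iSCSI details below are made-up placeholders, and it assumes the LUN is exported over iSCSI.

$ virsh detach-disk oracledb vdb
# on the physical support machine, log in to the same LUN and mount it:
$ iscsiadm -m node -T iqn.2012-01.com.example:oracle-lun -p san.example.com --login
$ mount /dev/sdb1 /mnt/oracle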
Using p2v and v2p to test upgrades
Problem: It’s not easy to test an upgrade on a production physical machine.
Wrong solution: Virtual machines let you snapshot, test your upgrade on the snapshot, and if it’s bad you just throw away the snapshot. Therefore to test our upgrade, we’ll convert the physical machine to virtual (P2V), do the test, and if it works we’ll convert it back to a physical machine (V2P)!
Why this is wrong: Conversion involves a slow disk copy and a very intrusive set of modifications to the configuration. P2V followed by V2P is not a symmetric operation that leaves you with an identical machine. More than likely it’ll simply break the machine, and if it doesn’t, then drivers could be less than optimal after the conversion. Plus (unlike with virtualized environments) your physical machine is a one-of-a-kind system, and if you break it with a hairy set of P2V and V2P operations you can’t just roll back to a previous snapshot.
Good solution: Virtualize your workloads! If you don’t want to do that, use a filesystem like btrfs/ZFS that lets you do cheap snapshots, or use the snapshot feature of your SAN. In any case, always arrange your production environment so that you have a staging mirror on which to do tests before you deploy anything to production, and have a tested back-out plan.
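On btrfs, for example, taking a throwaway snapshot before an upgrade is a one-liner. A sketch only: it assumes your root filesystem is a btrfs subvolume and that a /.snapshots directory exists.

$ sudo btrfs subvolume snapshot / /.snapshots/pre-upgrade
# ... do the upgrade and test it ...
# if it all went well, drop the snapshot:
$ sudo btrfs subvolume delete /.snapshots/pre-upgrade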
Using multiple v2v steps
Problem: We don’t have a conversion tool that can do (eg.) Citrix Xen to KVM in one step.
Wrong solution: We found something on the web that can do Citrix to VMware, and Red Hat have a great tool for doing VMware to KVM, so we’ll just run one after the other!
Why this is wrong: Conversion involves a large set of intrusive changes on the guest such as installing device drivers for the particular target hypervisor. Doing this in two steps means you go through two rounds of intrusive changes to your guest, and it’s unlikely that anyone has tested both together. Most likely it’ll break, or leave your guest with conflicting device drivers.
Good solution: Sorry, but at the moment there isn’t a good solution. That doesn’t mean you should use the bad one, though. Your best bet is probably to reinstall the guest from scratch on the target hypervisor.
The thing that stops me from running lots and lots of virtual machines is the amount of RAM I can fit into a server.
My current build server runs 9 VMs taking just over 7 GB in total. The host has 8 GB of RAM.
The 4 cores on the host are virtually idle. Certainly CPU-wise there is nothing stopping us running 16 or possibly even 32 VMs.
RAM is the problem. The server can take 16 GB maximum, at a cost (for cheap desktop RAM) of only about $400, and I get the original 8 GB back to recycle somewhere else. Unfortunately the larger RAM would be slower. For most consumer hardware 8 or 16 GB seems to be the limit anyway.
Ulrich Drepper has a very good analysis of DDR RAM, why larger sizes are necessarily slower, and why you can’t design single socket systems that take arbitrarily huge amounts of RAM.
I’m left doing workarounds: reduce the amount of RAM assigned to each guest, keep guests paused or powered off when not directly in use, move guests between servers, … Not ideal.
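All of those workarounds boil down to a virsh command or two (the guest and host names here are only examples):

$ virsh setmem f17x64 524288                               # balloon the guest down to 512 MB
$ virsh suspend f17x64                                     # pause a guest I’m not using right now
$ virsh resume f17x64
$ virsh migrate --live f17x64 qemu+ssh://otherhost/system  # move it to another server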
Update: A much easier way is to use gdbserver.
Start qemu with the following parameters:
$ qemu-system-x86_64 -s -S -m 512 -hda winxp.img
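(-s tells qemu to listen for a gdb connection on TCP port 1234, and -S stops the CPU at startup, so nothing runs until you continue from within gdb.)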
And connect with gdb like this:
$ gdb
(gdb) target remote localhost:1234
(gdb) set architecture i8086
(gdb) break *0x7c00
(gdb) cont
This sets a breakpoint at 0x7c00, the address where the BIOS loads the boot sector into memory and hands control to it.
You can use ordinary gdb commands to disassemble and debug the guest.
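For example (these are just ordinary gdb commands, nothing qemu-specific):

(gdb) x/16i $pc        # disassemble the next 16 instructions
(gdb) stepi            # execute a single instruction
(gdb) info registers   # dump the CPU registers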
Using supermin appliances we can make Fedora appliances that are very small to download. These ones are under 700K (yes, that’s “K” not “M”).
For Fedora 15, use this link:
For Fedora 16, use this link:
For Rawhide, use this link:
You will need ~600 MB free space in /var/lib since that is where the real appliance gets built. Just install the RPM and run sudo boot-a-fedora-appliance. Then read that script and the README file.
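In other words the whole procedure is roughly this (the RPM filename is only an example; use whichever of the links above matches your Fedora version):

$ sudo yum localinstall fedora-appliance-*.noarch.rpm
$ sudo boot-a-fedora-appliance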
The new Red Hat Enterprise Linux Virtualization Getting Started Guide, which I worked on, is essential reading if you want to find out how to start out using KVM on RHEL 6.