libguestfs now works on 64 bit ARM

[photo of the 64 bit ARM server]

Pictured above is my 64 bit ARM server. It’s under NDA so I cannot tell you who supplied it or even show you a proper photo.

However, it runs Fedora 21 & Rawhide:

Linux arm64.home.annexia.org 3.16.0-0.rc6.git1.1.efirtcfix1.fc22.aarch64 #1 SMP Wed Jul 23 12:15:58 BST 2014 aarch64 aarch64 aarch64 GNU/Linux

libvirt and libguestfs run fine, with full KVM acceleration, although right now you have to use qemu from git, as the Rawhide version of qemu is not new enough.

Also OCaml 4.02.0 beta works (after we found and fixed a few bugs in the arm64 native code generator last week).


New(ish) in libguestfs 1.27.23 — add firstboot batch files to Windows guests

You’ve been able to do this for a while by hand, but now virt-sysprep & virt-customize ≥ 1.27.23 let you easily install firstboot scripts into Windows guests:

$ cat /tmp/test.bat
echo Hello I am a batch file
$ virt-customize -a win7.qcow2 --firstboot /tmp/test.bat
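
The same --firstboot option works in virt-sysprep too (a sketch; note that virt-sysprep will also apply its usual clean-up operations to the guest):

$ virt-sysprep -a win7.qcow2 --firstboot /tmp/test.bat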

Next time the guest boots, check the log file in C:\Program Files\Red Hat\Firstboot\log.txt

This works well for me in Windows 7 guests. It ought to work in other Windows guests too. So far the only other Windows flavour I have tested was W2K3, where the service crashed for some unfathomable reason (I’m not very patient with debugging Windows problems).

So let us know how it goes and we’ll try to fix the bugs as we go along.


Tip: Use gdbserver to debug qemu running under libguestfs

If qemu crashes or fails when run under libguestfs, it can be a bit hard to debug things. However, a small qemu wrapper and gdbserver can help.

Create a file called qemu-wrapper, make it executable (chmod +x), containing:

#!/bin/bash -

# libguestfs runs qemu several times with -help, -version and -device ?
# to probe its features.  Don't attach gdbserver to those probe runs,
# only to the real launch.
if ! echo "$@" | grep -sqE -- '-help|-version|-device \?' ; then
  gdbserver="gdbserver :1234"
fi

# If $gdbserver is empty this just execs qemu directly.
exec $gdbserver /usr/bin/qemu-system-x86_64 "$@"

Set your environment variables so libguestfs will use the qemu wrapper instead of running qemu directly:

$ export LIBGUESTFS_BACKEND=direct
$ export LIBGUESTFS_HV=/path/to/qemu-wrapper

Now we run guestfish or another virt tool as normal:

$ guestfish -a /dev/null -v -x run

When qemu starts up, gdbserver will run and halt the process, printing:

Listening on port 1234

At this point you can connect gdb:

$ gdb
(gdb) file /usr/bin/qemu-system-x86_64
(gdb) target remote tcp::1234
set breakpoints etc here
(gdb) cont
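
For example, a concrete breakpoint might look like this (qcow2_open is just an arbitrary choice of qemu function; substitute whatever you are debugging):

(gdb) break qcow2_open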


Baremetal Raspberry Pi OS based on JONESFORTH

Go here: https://github.com/organix/pijFORTHos


Which services need restarting after an upgrade?

After you’ve run yum update to upgrade libraries, there may be services running which are still using the old copies of libraries. Such services might still be vulnerable to security bugs in the old libraries.

It’s relatively easy to discover which processes are affected using lsof to list processes using deleted files:

# lsof | awk '$5 == "DEL" { print }'
auditd     1001  1001 root DEL REG /usr/lib64/libnss_files-2.18.so;53bd9626
libvirtd   1468  1509 root DEL REG /usr/lib64/libnss_files-2.18.so;53bd9626
[lots more output]

If you actually run this command after updating (say) glibc, you’ll get pages and pages of output which is hard to sift through.

However, with systemd we can map the process IDs to services and user sessions.
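
You can check a single PID by hand, since systemctl status accepts a process ID and prints the unit that contains it (here using the libvirtd PID from the lsof output above):

# systemctl status 1468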

Doing that for every affected process, and grouping the results, is what the following script automates:

http://oirase.annexia.org/rwmj.wp.com/needs-restart.pl

Typical output looks like this:

In order to complete the installation of glibc-2.18-11.fc20.x86_64,
you should restart the following services:

    - accounts-daemon.service - Accounts Service   
    - console-kit-daemon.service - Console Manager
    - udisks2.service - Disk Manager
    - auditd.service - Security Auditing Service
    - dbus.service - D-Bus System Message Bus
    - rtkit-daemon.service - RealtimeKit Scheduling Policy Service
    - upower.service - Daemon for power management
    - colord.service - Manage, Install and Generate Color Profiles
    - firewalld.service - firewalld - dynamic firewall daemon
    - polkit.service - Authorization Manager
    - rsyslog.service - System Logging Service 
    - NetworkManager.service - Network Manager   
    - libvirtd.service - Virtualization daemon
    - gdm.service - GNOME Display Manager

In order to complete the installation of glibc-2.18-11.fc20.x86_64,
you should tell the following users to log out and log in:

    - session-1.scope - Session 1 of user rjones


CentOS 7

CentOS 7 is out.

Although there are no cloud images available yet, I have put up a virt-builder image, so you can install CentOS 7 right now just by doing:

$ virt-builder centos-7.0
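
That downloads and customizes the template, leaving a disk image (by default a raw-format file called centos-7.0.img) in the current directory. One way to try it out (this qemu command line is just an illustration; adjust memory and options to taste) is:

$ qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=centos-7.0.img,format=raw,if=virtio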


Super-nested KVM

Regular readers of this blog will of course be familiar with the joys of virtualization. One of those joys is nested virtualization — running a virtual machine in a virtual machine. Nested KVM is a thing too — that is, emulating the virtualization extensions in the CPU so that the second level guest gets at least some of the acceleration benefits that a normal first level guest would get.

My question is: How deeply can you nest KVM?

This is not so easy to test at the moment, so I’ve created a small project / disk image which, when booted on KVM, will launch a nested guest, which launches a nested guest, and so on until (usually) the host crashes, or you run out of memory, or your patience is exhausted by the poor performance of nested KVM.

The answer, by the way, is just 3 levels [on AMD hardware], which is rather disappointing. Hopefully this will encourage the developers to take a closer look at the bugs in nested virt.

Git repo: http://git.annexia.org/?p=supernested.git;a=summary
Binary images: http://oirase.annexia.org/supernested/

How does this work?

Building a simple appliance is easy. I’m using supermin to do that.
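
For flavour, a supermin build looks roughly like this (the package list is illustrative, not the one supernested actually uses):

supermin --prepare bash coreutils qemu -o supermin.d
supermin --build -f ext2 supermin.d -o appliance.d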

The problem is: how does the appliance run another appliance? How do you put the same appliance inside the appliance? Obviously that’s impossible (right?)

The way it works is that inside the Lx hypervisor it runs the L(x+1) qemu on /dev/sda, with a protective overlay stored in memory so we don’t disrupt the Lx hypervisor. Since /dev/sda literally is the appliance disk image, this all kinda works.
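
Very roughly, the nested launch looks something like this sketch (the overlay path, kernel/initrd paths, memory size and other options are illustrative, not the exact commands from the supernested scripts):

qemu-img create -f qcow2 -b /dev/sda -F raw /dev/shm/overlay.qcow2
qemu-system-x86_64 -enable-kvm -cpu host -m 512 -nographic \
    -kernel /boot/vmlinuz -initrd /boot/initrd.img \
    -append "console=ttyS0" \
    -drive file=/dev/shm/overlay.qcow2,format=qcow2,if=virtio

Since /dev/shm is a tmpfs, the overlay lives in memory: writes from the L(x+1) guest go to the overlay and never reach /dev/sda.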
