Tag Archives: libguestfs

nbdkit now supports cURL — HTTP, FTP, and SSH connections

nbdkit is a liberally licensed NBD (Network Block Device) server designed to let you connect all sorts of crazy disk image sources (like Amazon, Glance, VMware VDDK) to the universal network protocol for sharing disk images: NBD.

New in nbdkit 1.1.8: cURL support. This lets you turn any HTTP, FTP, TFTP or SSH server that hosts a disk image into an NBD server.

For example:

$ nbdkit -r curl url=http://onuma/scratch/boot.iso

and then you can read the disk image using guestfish, qemu or any other NBD client:

$ guestfish --ro -a nbd://localhost -i

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

/dev/sda mounted on /

><fs> _
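
Since qemu tools also speak NBD, you can point them at the same server. For example (assuming the nbdkit command above is still serving boot.iso on the default NBD port on localhost):

$ qemu-img info nbd://localhost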

If you are using a normal SSH server like OpenSSH which supports the SSH File Transfer Protocol (aka SFTP), then you can use SFTP to access images:

$ nbdkit -r curl url=sftp://rjones@localhost/~/fedora-20.img
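
Plain FTP works the same way; here ftp.example.com is just a placeholder for a server that actually hosts a disk image:

$ nbdkit -r curl url=ftp://ftp.example.com/pub/disk.img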

I’m hoping to enable write support in a future version.

It doesn’t work at the moment because I haven’t worked out how to switch between read (GET) and write (POST) requests in a single cURL handle. Perhaps I need to use two handles? The documentation is confusing.

virt-log now supports the Windows Event Log

The new virt-log tool now supports the Windows Event Log. If you have a recent Windows guest, you can display the System event log by doing:

$ virt-log -d Win8 | less

What you will see is a very long XML file.
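
Since the output is plain XML, piping it through a pretty-printer can make it easier to read. For example, assuming you have xmllint (from libxml2) installed:

$ virt-log -d Win8 | xmllint --format - | less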

This requires an Evtx parser. I have now packaged this library for Fedora (it needs a reviewer, as you can see). The code is sensible and maintained.

It also only works for Windows ≥ Vista, because Microsoft completely rewrote the way that log files are stored, from one strange binary format to another strange binary format [so a little different from the systemd journal ...].

As usual, patches to virt-log to support other guest operating systems are welcome.

New in libguestfs: virt-log

In libguestfs ≥ 1.27.17, there’s a new tool called virt-log for displaying the log files from a disk image or virtual machine:

$ virt-log -a disk.img | less

Previously you could write:

$ virt-cat -a disk.img /var/log/messages

That worked for some Linux guests, but several things have happened since then: many distros moved their logs into the binary systemd journal, others store their plain-text logs in different locations, and Windows guests use the Event Log.

Virt-log is designed to do the right thing automatically (although at the moment Windows support is not finished). In particular it will automatically decode and display the systemd journal, and it knows the different locations that some Linux distros store their plain text log files.
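
For example, on a Fedora guest that logs to the systemd journal, the same invocation works unchanged (fedora-20.img is just a hypothetical disk image name here):

$ virt-log -a fedora-20.img | less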

libguestfs RHEL 7.1 preview packages (yes, really)

RHEL 7 isn’t out yet, but if you’re using the RHEL 7 RC, are on one of our beta programs, or are willing to wait for RHEL or CentOS 7.0 to be released, then you can upgrade libguestfs with these RHEL 7.1 libguestfs preview packages.
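
As a rough sketch of the upgrade step, assuming you have downloaded the preview RPMs into the current directory (yum localinstall will pull any missing dependencies from your enabled repositories):

$ sudo yum localinstall libguestfs*.rpm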

Amongst the new features:

Quick tip: Create a CentOS 6 guest with EPEL packages

You can use virt-builder [≥ 1.26] to create guests with packages from other repositories, like this:

$ virt-builder centos-6 \
    --run-command 'rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm' \
    --update \
    --install cloud-utils,cloud-init

(cloud-utils & cloud-init are examples of packages that are only available in EPEL)
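
To sanity-check the result you can poke at the image with the other libguestfs tools; for instance, listing the yum repositories inside the new guest (centos-6.img is virt-builder's default output filename for this template):

$ virt-ls -a centos-6.img /etc/yum.repos.d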

virt-builder RHEL 7 release candidate

You can now install the RHEL 7 release candidate (very unofficially) through virt-builder on Fedora 20.

Just do:

$ virt-builder rhel-7rc
[   0.0] Downloading: ***
[   1.0] Planning how to build this image
[   1.0] Uncompressing
[   6.0] Opening the new disk
[  53.0] Setting a random seed
[  53.0] Setting passwords
Setting random password of root to ***
[  53.0] Finishing off
Output: rhel-7rc.img
Output size: 6.0G
Output format: raw
Total usable space: 4.8G
Free space: 4.0G (82%)

To be honest with you, I couldn’t get networking to work at first, so if you hit the same problem let us know. Update: the network worked once I supplied the right qemu options.
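
For reference, something along these lines boots the image with user-mode networking (this is just one possible qemu invocation, not necessarily the exact options referred to above):

$ qemu-kvm -m 2048 \
    -drive file=rhel-7rc.img,format=raw,if=virtio \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0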

Caseless virtualization cluster, part 4

AMD supports nested virtualization a bit more reliably than Intel, which was one of the reasons to go for AMD processors in my virtualization cluster. (The other reason is that they are much cheaper.)

But how well does it perform? Not too badly as it happens.

I tested this by creating a Fedora 20 guest (the L1 guest). I could create a nested (L2) guest inside that, but a simpler way is to use guestfish to carry out some baseline performance measurements. Since libguestfs is creating a short-lived KVM appliance, it benefits from hardware virt acceleration when available. And since libguestfs ≥ 1.26, there is a new option that lets you force software emulation so you can easily test the effect with & without hardware acceleration.

L1 performance

Let’s start on the host (L0), measuring L1 performance. Note that you have to run the commands shown at least twice, both because supermin builds and caches the appliance the first time, and because it’s a fairer test of hardware acceleration if everything is cached in memory.

This AMD hardware turns out to be pretty good:

$ time guestfish -a /dev/null run
real	0m2.585s

(2.6 seconds is the time taken to launch a virtual machine, all its userspace and a daemon, then shut it down. I’m using libvirt to manage the appliance).

Forcing software emulation (disabling hardware acceleration):

$ time LIBGUESTFS_BACKEND_SETTINGS=force_tcg guestfish -a /dev/null run
real	0m9.995s

L2 performance

Inside the L1 Fedora guest, we run the same tests. Note this is testing L2 performance (the libguestfs appliance running on top of an L1 guest), ie. nested virt:

$ time guestfish -a /dev/null run
real	0m5.750s

Forcing software emulation:

$ time LIBGUESTFS_BACKEND_SETTINGS=force_tcg guestfish -a /dev/null run
real	0m9.949s

Conclusions

These are just some simple tests. I’ll be doing something more comprehensive later. However:

  1. First level hardware virtualization performance on these AMD chips is excellent.
  2. Nested virt is about 45% of non-nested speed (2.6 s vs 5.8 s to launch the appliance).
  3. TCG performance is slower as expected, but shows that hardware virt is being used and is beneficial even in the nested case.

Other data

The host has 8 cores and 16 GB of RAM. /proc/cpuinfo for one of the host cores is:

processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 21
model		: 2
model name	: AMD FX(tm)-8320 Eight-Core Processor
stepping	: 0
microcode	: 0x6000822
cpu MHz		: 1400.000
cache size	: 2048 KB
physical id	: 0
siblings	: 8
core id		: 0
cpu cores	: 4
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold bmi1
bogomips	: 7031.39
TLB size	: 1536 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 48 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro

The L1 guest has 1 vCPU and 4 GB of RAM. /proc/cpuinfo in the guest:

processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 21
model		: 2
model name	: AMD Opteron 63xx class CPU
stepping	: 0
microcode	: 0x1000065
cpu MHz		: 3515.548
cache size	: 512 KB
physical id	: 0
siblings	: 1
core id		: 0
cpu cores	: 1
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx pdpe1gb lm rep_good nopl extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c hypervisor lahf_lm svm abm sse4a misalignsse 3dnowprefetch xop fma4 tbm arat
bogomips	: 7031.09
TLB size	: 1024 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

Update

As part of the discussion in the comments about whether this has 4 or 8 physical cores, here is the lstopo output:

[lstopo topology diagram]
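
lstopo comes from the hwloc package; to produce the same diagram for your own machine, something like this writes it to a PNG file:

$ lstopo topology.png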
