Tag Archives: libguestfs

libguestfs-based vulnerability scanner

Described here: https://www.redhat.com/archives/libguestfs/2016-August/msg00229.html


Libguestfs appliance boot in under 600ms

$ ./run ./utils/boot-benchmark/boot-benchmark
Warming up the libguestfs cache ...
Running the tests ...

test version: libguestfs 1.33.28
 test passes: 10
host version: Linux moo.home.annexia.org 4.4.4-301.fc23.x86_64 #1 SMP Fri Mar 4 17:42:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
    host CPU: Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
     backend: direct               [to change set $LIBGUESTFS_BACKEND]
        qemu: /home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64 [to change set $LIBGUESTFS_HV]
qemu version: QEMU emulator version 2.5.94, Copyright (c) 2003-2008 Fabrice Bellard
         smp: 1                    [to change use --smp option]
     memsize: 500                  [to change use --memsize option]
      append:                      [to change use --append option]

Result: 575.9ms ±5.3ms

There are various tricks here:

  1. I’m using the (still!) not-yet-upstream qemu DMA patches.
  2. I’ve compiled my own very minimal guest Linux kernel.
  3. I’m using my nearly upstream "crypto: Add a flag allowing the self-tests to be disabled at runtime." patch.
  4. I’ve got two sets of non-upstream libguestfs patches [1], [2].
  5. I am not using libvirt, but if you do want to use libvirt, make sure you use the very latest version since it contains an important performance patch.
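
For reference, all the knobs shown in the benchmark header can be set without rebuilding anything. A sketch only: the qemu path is the one from the output above, and I am assuming the runtime flag added by the crypto patch is the cryptomgr.notests boot parameter that eventually landed upstream:

$ export LIBGUESTFS_BACKEND=direct
$ export LIBGUESTFS_HV=~/d/qemu/x86_64-softmmu/qemu-system-x86_64
$ ./run ./utils/boot-benchmark/boot-benchmark --memsize 500 --append cryptomgr.notests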


libguestfs appliance boot in under 1s

$ time LIBGUESTFS_BACKEND=direct LIBGUESTFS_HV=~/d/qemu/x86_64-softmmu/qemu-system-x86_64 guestfish -a /dev/null run

real	0m0.966s
user	0m0.623s
sys	0m0.281s

However, I had to patch qemu to enable DMA loading of the kernel and initrd.


Getting the libguestfs appliance boot time down to 1.2s

libguestfs can securely mount any disk image, but to do this it requires a small appliance to be run. The appliance is a very cut down Linux distro, but it still takes time to boot. For a while that time has floated around 3-5 seconds. This excludes libguestfs from some important use cases, one being the ability to monitor thousands of VMs every few minutes (simple maths: 1000 VMs × 3 s = 3000 s of appliance time, but 5 minutes is only 300 s, so you cannot monitor 1000 VMs every 5 minutes without running at least 10 appliances in parallel).

Last year Intel announced Clear Containers. You may be forgiven for being confused (unclear?) by what Clear Containers actually is, but Intel’s demo is quite neat. (You can run these commands as non-root, and at time of writing they won’t damage your machine.)

$ wget https://download.clearlinux.org/demos/containers/clear-containers-demo.tar.xz
$ mv clear-containers-demo.tar.xz clear-containers-demo.tar.bz2  # despite the name, the download is bzip2-compressed
$ tar -xjf clear-containers-demo.tar.bz2
$ cd containers
$ bash ./boot.sh

It’s a complete Linux guest that boots in a fraction of a second. I take that as a challenge!

The first step is to have a good idea of what all the parts are doing and what is taking the time. Booting an appliance involves several actors (qemu, the BIOS, the guest kernel) and without being able to measure how much time each one spends doing things, it’s rather hard to say what needs work or whether we’re making improvements. This was why I spent last week unsuccessfully looking at QEMU tracing. I have now settled on a simpler approach, which is to time boot messages. The new boot-analysis program produces quite clear output:

Warming up the libguestfs cache ...                                                                              
Running the tests in 5 passes ...                                                                                
    pass 1: 798 events collected in 1347184178 ns                                                                
    pass 2: 798 events collected in 1324153548 ns                                                                
    pass 3: 798 events collected in 1342126721 ns                                                                
    pass 4: 798 events collected in 1279500931 ns                                                                
    pass 5: 798 events collected in 1317457653 ns                                                                
Analyzing the results ...                                                                                        
                                                                                                                 
0.000000s: ▲ run mean:1.321973s ±24.0ms (100.0%)                                                              
0.000065s: │ ▲ supermin:build mean:0.010523s ±0.1ms (0.8%)                                                  
           │ │                                                                                               
0.010588s: │ ▼                                                                                               
0.010612s: │ ▲ qemu:feature-detect mean:0.149075s ±4.2ms (11.3%)                                            
           │ │                                                                                               
0.159687s: │ ▼                                                                                               
           │                                                                                                   
0.161412s: │ ▲ ▲ qemu mean:1.160562s ±22.6ms (87.8%) qemu:overhead mean:0.123142s ±4.5ms (9.3%)          
           │ │ │                                                                                           
0.263153s: │ │ │ ▲ seabios mean:0.241488s ±2.8ms (18.3%)                                                
           │ │ │ │                                                                                       
0.284554s: │ │ ▼ │                                                                                       
0.284554s: │ │   │ ▲ bios:overhead mean:0.220087s ±2.8ms (16.6%)                                        
           │ │   │ │                                                                                     
0.504641s: │ │   ▼ ▼                                                                                     
0.504641s: │ │ ▲ ▲ kernel mean:0.817332s ±21.4ms (61.8%) kernel:overhead mean:0.374896s ±5.2ms (28.4%) 
           │ │ │ │                                                                                       
0.879537s: │ │ │ ▼                                                                                       
0.879537s: │ │ │ ▲ supermin:mini-initrd mean:0.086014s ±7.9ms (6.5%)                                    
           │ │ │ │                                                                                       
0.881863s: │ │ │ │ ▲ supermin: internal insmod crc32-pclmul.ko mean:0.001399s ±0.1ms (0.1%)           
           │ │ │ │ │                                                                                   
0.883262s: │ │ │ │ ▼                                                                                   
0.883262s: │ │ │ │ ▲ supermin: internal insmod crc32c-intel.ko mean:0.000226s ±0.5ms (0.0%)           
0.883488s: │ │ │ │ ▼                                                                                   
0.883488s: │ │ │ │ ▲ supermin: internal insmod crct10dif-pclmul.ko mean:0.000882s ±0.4ms (0.1%)       
0.884370s: │ │ │ │ ▼                                                                                   
0.884370s: │ │ │ │ ▲ supermin: internal insmod crc32.ko mean:0.001121s ±0.0ms (0.1%)                  
           │ │ │ │ │                                                                                   
0.885490s: │ │ │ │ ▼                                                                                   
0.885490s: │ │ │ │ ▲ supermin: internal insmod virtio.ko mean:0.001634s ±0.5ms (0.1%)                 
           │ │ │ │ │                                                                                   
0.887124s: │ │ │ │ ▼                                                                                   
0.887124s: │ │ │ │ ▲ supermin: internal insmod virtio_ring.ko mean:0.000581s ±0.7ms (0.0%)            
0.887706s: │ │ │ │ ▼                                                                                   
0.887706s: │ │ │ │ ▲ supermin: internal insmod virtio_blk.ko mean:0.001115s ±0.0ms (0.1%)             
           │ │ │ │ │                                                                                   
0.888821s: │ │ │ │ ▼                                                                                   
0.888821s: │ │ │ │ ▲ supermin: internal insmod virtio-rng.ko mean:0.000884s ±0.4ms (0.1%)             
0.889705s: │ │ │ │ ▼                                                                                   
0.889705s: │ │ │ │ ▲ supermin: internal insmod virtio_console.ko mean:0.001923s ±0.4ms (0.1%)         
           │ │ │ │ │                                                                                   
0.891627s: │ │ │ │ ▼                                                                                   
0.891627s: │ │ │ │ ▲ supermin: internal insmod virtio_net.ko mean:0.001483s ±0.4ms (0.1%)             
           │ │ │ │ │                                                                                   
0.893111s: │ │ │ │ ▼                                                                                   
0.893111s: │ │ │ │ ▲ supermin: internal insmod virtio_scsi.ko mean:0.000686s ±0.6ms (0.1%)            
0.893797s: │ │ │ │ ▼                                                                                   
0.893797s: │ │ │ │ ▲ supermin: internal insmod virtio_balloon.ko mean:0.000663s ±0.5ms (0.1%)         
0.894460s: │ │ │ │ ▼                                                                                   
0.894460s: │ │ │ │ ▲ supermin: internal insmod virtio_input.ko mean:0.000875s ±0.4ms (0.1%)           
0.895336s: │ │ │ │ ▼                                                                                   
0.895336s: │ │ │ │ ▲ supermin: internal insmod virtio_mmio.ko mean:0.001097s ±0.0ms (0.1%)            
           │ │ │ │ │                                                                                   
0.896433s: │ │ │ │ ▼                                                                                   
0.896433s: │ │ │ │ ▲ supermin: internal insmod virtio_pci.ko mean:0.050700s ±7.8ms (3.8%)             
           │ │ │ │ │                                                                                   
0.947133s: │ │ │ │ ▼                                                                                   
0.947133s: │ │ │ │ ▲ supermin: internal insmod crc-ccitt.ko mean:0.001144s ±0.6ms (0.1%)              
           │ │ │ │ │                                                                                   
0.948277s: │ │ │ │ ▼                                                                                   
0.948277s: │ │ │ │ ▲ supermin: internal insmod crc-itu-t.ko mean:0.000001s ±0.0ms (0.0%)              
0.948278s: │ │ │ │ ▼                                                                                   
0.948278s: │ │ │ │ ▲ supermin: internal insmod crc8.ko mean:0.001368s ±0.3ms (0.1%)                   
           │ │ │ │ │                                                                                   
0.949646s: │ │ │ │ ▼                                                                                   
0.949646s: │ │ │ │ ▲ supermin: internal insmod libcrc32c.ko mean:0.001043s ±0.9ms (0.1%)              
           │ │ │ │ │                                                                                   
0.950689s: │ │ │ │ ▼                                                                                   
           │ │ │ │                                                                                       
0.965551s: │ │ │ ▼                                                                                       
0.965551s: │ │ │ ▲ ▲ /init mean:0.318045s ±18.0ms (24.1%) bash:overhead mean:0.015855s ±3.1ms (1.2%) 
           │ │ │ │ │                                                                                   
0.981407s: │ │ │ │ ▼                                                                                   
           │ │ │ │                                                                                       
1.283597s: │ │ │ ▼                                                                                       
1.283597s: │ │ │ ▲ guestfsd mean:0.019151s ±1.9ms (1.4%)                                                
           │ │ │ │                                                                                       
1.294818s: │ │ │ │ ▲ shutdown mean:0.027156s ±4.1ms (2.1%)                                            
           │ │ │ │ │                                                                                   
1.302747s: │ │ │ ▼ │                                                                                   
           │ │ │   │                                                                                     
1.321973s: │ │ ▼   │                                                                                     
1.321973s: ▼ ▼     ▼                                                                                       
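
You can approximate the same technique by hand by timestamping the appliance’s verbose boot messages as they arrive. A rough sketch, assuming you have the ts utility from moreutils installed (-i prints the time elapsed since the previous line, and %.s gives subsecond resolution):

$ guestfish -a /dev/null -v run 2>&1 | ts -i '%.s'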

Armed with this analysis I made a good start on reducing the boot time. It’s now down to 1.2s (on my laptop) and there is scope for sub-second boots.

Some of the things that may reduce boot times further:

  • Stop SeaBIOS from probing the entire PCI space looking for a boot device it will never use.
  • Implement DAX so that the appliance can execute files directly from backing disk instead of loading them into RAM.
  • A much more detailed look at the qemu and kernel startup process, taking a knife to anything that unnecessarily sleeps or wastes time.

By the way: even if you never use libguestfs but do use virtualized Linux, this benefits you too.


Tracing QEMU guest execution part 4

The final two questions that I posed last time were to do with constructing a timeline of what this guest is spending time on.

We can easily see system calls in the trace log, and we can also see when a kernel function is entered for the first time (indicating that a new part of the kernel is now running). I wrote a Perl script to analyze that, which gave me a 115K line log file from which I did the rest of the analysis by hand to generate a timeline.
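
A minimal sketch of that kind of first-seen analysis (not the actual script) might look like the following. It assumes the one-event-per-line trace format shown in part 2 of this series; any other format would need a different regex.

#!/usr/bin/perl
# Sketch: report the first time each kernel symbol appears in the trace.
# Input lines look like:
#   11409434.8589973 prepare_exit_to_usermode in section .text
use strict;
use warnings;

my %first_seen;
while (<>) {
    # Match "TIMESTAMP SYMBOL [+ OFFSET] in section ..."; bare addresses
    # (userspace) are skipped since they cannot be decoded anyway.
    next unless /^(\d+\.\d+)\s+(\S+)(?:\s+\+\s+\d+)?\s+in section/;
    $first_seen{$2} //= $1;
}
# Symbols in order of first appearance: a rough timeline of the boot.
for my $sym (sort { $first_seen{$a} <=> $first_seen{$b} } keys %first_seen) {
    print "$first_seen{$sym} $sym\n";
}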

I would reproduce the timeline here, but the results aren’t very enlightening. In particular I doubt it’s more interesting than what you can get by reading the kernel printks from a boot log.

What is my conclusion after using these techniques to analyze a booting guest? Here I go:

  • It’s clunky and undocumented. Hopefully this series helps a little.
  • It would be much more powerful with stack traces. It should be possible to get them from QEMU, at least in theory, but it’s a lot of work to do so.
  • It would be much more powerful if we could analyze into kernel modules and userspace.
  • More tooling around this might make it more bearable.


Tracing QEMU guest execution part 3

In the previous post I posed three questions about my detailed function-level trace of the kernel booting under QEMU. The first one was: which kernel functions consume the most time?

We don’t have stack traces, although that would be a good direction for future work. So if a function “A” calls another function “B” like this:

A
  --> calls B
  <-- B returns
A

then we’re going to assign just the time in the two parts of “A” to “A”. In other words, “A” doesn’t inherit the time taken running “B”. (Except if “B” is inlined, in which case the trace cannot distinguish which code is in “A” and which is in “B”).

The other limitation is lack of insight into what kernel modules are doing. This is very difficult to solve: compiling kernel modules into the kernel proper would change what I’m trying to measure considerably.

Given those limitations, I wrote a Perl script to do all that from my previous trace. I’m not going to bother reproducing the script here because it’s quite simple, but also very specific to the trace format I’m using. If you got this far, you’ll have no trouble writing your own analysis tools.
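
The flavour of it is easy to convey, though. Here is a minimal sketch (not the actual script) implementing the self-time rule described above, again assuming the one-event-per-line trace format from part 2:

#!/usr/bin/perl
# Sketch: flat "self time" profile from a function-level trace.
# The gap between consecutive events is charged to the function named by
# the earlier event, so callers do not inherit time spent in callees.
use strict;
use warnings;

my (%self_time, $prev_ts, $prev_fn);
while (<>) {
    # Userspace-only lines (bare addresses) are skipped, so time spent
    # there is charged to the last kernel function seen; rough, but
    # good enough for a sketch.
    next unless /^(\d+\.\d+)\s+(\S+)(?:\s+\+\s+\d+)?\s+in section/;
    my ($ts, $fn) = ($1, $2);
    $self_time{$prev_fn} += $ts - $prev_ts if defined $prev_fn;
    ($prev_ts, $prev_fn) = ($ts, $fn);
}
# Print the 20 functions with the most accumulated self time.
my @top = sort { $self_time{$b} <=> $self_time{$a} } keys %self_time;
for my $fn (@top[0 .. ($#top < 19 ? $#top : 19)]) {
    printf "%15.4f %s\n", $self_time{$fn}, $fn;
}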

The results are at the end of the post. For each function that was called, I had a look into the kernel code to find out what it seems to be doing, and those notes are below.

  • sha256_transform is the SHA256 function. While I’m not sure what it is being used for (some kind of module signature check seems to be the most likely explanation), the more interesting thing is that we’re not using any specialized version of the function (eg. with AVX support). That’s easily explained: we’re using TCG, not KVM, so no host processor features are available. However on a real boot we would be using AVX, so the function should take a lot less time, and I think we can discount it.
  • native_safe_halt is the function which halts the processor when it is idle (eg. waiting for an interrupt). Is it worrying that we spend 1/50th of the time not working? Would it help to have more virtual CPUs or is there an inherent lack of parallelism in the boot process?
  • It’s interesting that we spend such a large amount of time in the sort function. It’s used all over the place, eg. for sorting the BIOS E820 map, sorting memory mappings, checking boot parameters, sorting lists of wake-up events …
  • task_tick_fair is part of the Completely Fair Scheduler.
  • If there’s a boot option to disable ftrace, I cannot find it.

My main conclusion is there is no “smoking gun” here. Everything generally points to things that a kernel (or at least, the core, non-module parts of a kernel) should be doing. So this analysis has not been very helpful.


Tracing QEMU guest execution part 2

After an overnight 12+ hour run of my Perl scripts I now have a 52 million line file that consists of timestamps, kernel symbols, and other untranslated linear addresses. The only possible way to analyze this will be with yet more scripts, but already there are lots of interesting things.

Here is the kernel code entering userspace:

11409434.8589973 prepare_exit_to_usermode in section .text
11409435.5189973 retint_user + 8 in section .text
11409436.4899973 7fb95ab3cd20
11409447.6899973 7fb95ab4c930
11409469.2169973 7fb95ab577f0

Userspace symbols cannot be decoded because we don’t know which process is being run. More importantly, code in kernel modules cannot be decoded, so we only see core kernel functions.

Handling a timer interrupt:

18000723.5261105 apic_timer_interrupt in section .text
18000725.2241105 smp_apic_timer_interrupt in section .text
18000726.7681105 native_apic_mem_write in section .text
18000729.2691105 smp_apic_timer_interrupt + 46 in section .text
18000729.8361105 irq_enter in section .text
18000730.3941105 rcu_irq_enter in section .text
18000731.1401105 rcu_irq_enter + 92 in section .text
18000731.7111105 irq_enter + 14 in section .text
18000732.3171105 smp_apic_timer_interrupt + 51 in section .text
18000740.9941105 exit_idle in section .text
18000741.5481105 smp_apic_timer_interrupt + 56 in section .text
18000742.0881105 local_apic_timer_interrupt in section .text
18000744.0341105 tick_handle_periodic in section .text
18000744.6341105 _raw_spin_lock in section .text
18000745.9291105 tick_periodic + 67 in section .text
18000747.3941105 do_timer in section .text

Userspace loading a kernel module:

7806760.57896065 40087d
7806765.09696065 4442b0
7806765.65396065 entry_SYSCALL_64 in section .text
7806766.14496065 entry_SYSCALL_64 + 32 in section .text
7806766.46296065 entry_SYSCALL_64 + 36 in section .text
7806788.75296065 sys_init_module in section .text
7806796.76296065 sys_init_module + 62 in section .text
7806797.28296065 sys_init_module + 62 in section .text
7806800.64896065 sys_init_module + 65 in section .text
7806801.94496065 capable in section .text
7806802.91196065 security_capable in section .text
7806804.30796065 cap_capable in section .text
7806804.87796065 security_capable + 72 in section .text
7806805.43596065 ns_capable + 41 in section .text
7806805.92096065 capable + 23 in section .text
7806810.46796065 sys_init_module + 75 in section .text
7806815.59796065 sys_init_module + 86 in section .text
7806821.10196065 sys_init_module + 96 in section .text
7806827.28496065 sys_init_module + 109 in section .text
7806831.23396065 sys_init_module + 129 in section .text
7806839.75396065 security_kernel_module_from_file in section .text
[etc]

What am I interested in knowing? My overall goal is to find areas in the kernel and userspace that we can optimize to make boot faster. Specifically it seems interesting at first to look at three questions:

  1. Which kernel functions consume the most time? This is “profiling 101”. It may indicate kernel functions which are particularly slow (eg. they contain sleeps).
  2. The time elapsed before each kernel function is called the first time. Because studying the whole trace is impossible (by hand), finding the first time each kernel function is called may give me an insight into major timeline events — eg. the first time one of the ext4 mount functions is called, we know that userspace has got to the point where it mounts the root filesystem.
  3. When does userspace make system calls? This is related to the previous point. It’s also the easiest type of analysis to do because system call entry is easy to detect from the trace, as the example below shows.
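
For illustration: since 64-bit system calls all enter the kernel through entry_SYSCALL_64, something as simple as the following pulls the syscall entries (with their timestamps) out of the trace. (trace.log is a placeholder filename.)

$ grep 'entry_SYSCALL_64 in section' trace.log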

More to follow …
