Category Archives: Uncategorized

Libreboot on my X60s

An almost completely free-as-in-freedom laptop on top, with a very non-free laptop underneath:

[photo of the two laptops]

Only a few notes about this, apart from the obvious one, which is that you must have an old ThinkPad X60s that you’re prepared to risk bricking (a rough sketch of the flashing step follows the list):

  1. The instructions for installing libreboot are incredibly strange, contradictory, and out of date.
  2. I used: https://en.wikibooks.org/wiki/Libreboot/ThinkPad_X60
  3. I used these old sources which roughly correspond to the above instructions: https://libreboot.org/release/20140711/
  4. You need to have Debian installed on the laptop.
  5. To make it completely free, I will need to dismantle the laptop and replace the wifi card.
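
For orientation, the flashing step in those instructions boils down to reading a backup of the existing flash chip and then writing the libreboot image with flashrom. The sketch below is only an illustration with made-up filenames; the real procedure on a machine still running the stock BIOS has extra steps, so follow the linked instructions rather than this.

$ # Back up the existing proprietary BIOS image first (keep it somewhere safe):
$ sudo flashrom -p internal -r factory-backup.rom
$ # Write the libreboot image (this filename is hypothetical):
$ sudo flashrom -p internal -w libreboot-x60-image.rom
$ # Verify the chip contents against the image:
$ sudo flashrom -p internal -v libreboot-x60-image.rom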


ioport

22:59 < rjones> well, you know, I actually want some enhancements to /dev/port (on x86)
23:00 < rjones> for example, it's not possible to specify if you want byte/word/.. access
23:00 < rjones> which of course means its not possible to access certain devices which are expecting non-byte-sized reads or writes ..
23:01 < rjones> somehow no one has added these important ioctls to /dev/port in the 20+ years its been around
23:01 < rjones> I suspect because no one has realized the importance of writing device drivers in shell script
23:06 < dmick> where's that vomit emoticon when you need it
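
In case you’re wondering what shell-script port access looks like today: /dev/port is seekable, with the file offset acting as the port number, so a single dd gives you a one-byte read. This is only an illustration (it needs root, is x86-specific, and poking arbitrary ports can confuse hardware); port 0x61, the old PC speaker/NMI status port, is a fairly harmless choice. There is no way to ask for a 16- or 32-bit transfer, which is exactly the complaint above.

$ # One-byte read of I/O port 0x61 (decimal 97) via /dev/port:
$ sudo dd if=/dev/port bs=1 skip=97 count=1 2>/dev/null | od -An -tx1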


Tianocore (UEFI) now has a free FAT driver

Tianocore, the basis for many UEFI firmware implementations, has long been nearly free software. Low-level hardware initialization is provided by CPU and motherboard manufacturers as binary blobs, but that part doesn’t matter for virtualization, where we don’t need those blobs.

The main hindrance to shipping Tianocore in Linux distros was the FAT driver. UEFI standardized on FAT as the format for the boot partition, and Microsoft supplied the corresponding FAT driver in Tianocore, but with a terms-of-use restriction that meant it was not free software. Anyway, today that changed: Microsoft has relicensed the code without the use restriction. The code is available here. So yes, thanks Microsoft, and thanks also to Intel, who were involved in this.


libguestfs appliance boot in under 1s

$ time LIBGUESTFS_BACKEND=direct LIBGUESTFS_HV=~/d/qemu/x86_64-softmmu/qemu-system-x86_64 guestfish -a /dev/null run

real	0m0.966s
user	0m0.623s
sys	0m0.281s

However, I had to patch qemu to enable DMA loading of the kernel and initrd.
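
If you want to see where the time goes on your own machine, guestfish can show the full appliance boot output. A rough recipe (-v prints the qemu and kernel messages, -x echoes each libguestfs call as it runs):

$ LIBGUESTFS_BACKEND=direct guestfish -a /dev/null -v -x run 2>&1 | less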


Getting the libguestfs appliance boot time down to 1.2s

libguestfs can securely mount any disk image, but to do this it requires a small appliance to be run. The appliance is a very cut-down Linux distro, but it still takes time to boot. For a while that time has hovered around 3-5 seconds. This excludes libguestfs from some important use cases, one being the ability to monitor thousands of VMs every few minutes (simple maths: 1000×3 > 5×60, so you cannot monitor 1000 VMs every 5 minutes without running a lot of appliances in parallel).
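
To make that use case concrete: each probe of a guest is one short-lived appliance run, for example checking disk usage with virt-df from the libguestfs tools (the image path here is made up):

$ # Disk usage of every filesystem inside a guest disk image, read-only:
$ virt-df -a /var/lib/libvirt/images/guest.img -h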

Last year Intel announced Clear Containers. You may be forgiven for being confused (unclear?) about what Clear Containers actually is, but Intel’s demo is quite neat. (You can run these commands as non-root, and at the time of writing they won’t damage your machine.)

$ wget https://download.clearlinux.org/demos/containers/clear-containers-demo.tar.xz
$ tar xJf clear-containers-demo.tar.xz
$ cd containers
$ bash ./boot.sh

It’s a complete Linux guest that boots in a fraction of a second. I take that as a challenge!

The first step is to have a good idea what all the parts are doing and what is taking the time. Booting an appliance involves several actors — qemu, BIOS, the guest kernel — and without being able to measure how much time each one spends doing things, it’s rather hard to say what needs work or if we’re making improvements. This was why I spent last week unsuccessfully looking at QEMU tracing. I have now settled on a simpler approach which is to time boot messages. The new boot analysis program produces quite clear output:

Warming up the libguestfs cache ...                                                                              
Running the tests in 5 passes ...                                                                                
    pass 1: 798 events collected in 1347184178 ns                                                                
    pass 2: 798 events collected in 1324153548 ns                                                                
    pass 3: 798 events collected in 1342126721 ns                                                                
    pass 4: 798 events collected in 1279500931 ns                                                                
    pass 5: 798 events collected in 1317457653 ns                                                                
Analyzing the results ...                                                                                        
                                                                                                                 
0.000000s: ▲ run mean:1.321973s ±24.0ms (100.0%)                                                              
0.000065s: │ ▲ supermin:build mean:0.010523s ±0.1ms (0.8%)                                                  
           │ │                                                                                               
0.010588s: │ ▼                                                                                               
0.010612s: │ ▲ qemu:feature-detect mean:0.149075s ±4.2ms (11.3%)                                            
           │ │                                                                                               
0.159687s: │ ▼                                                                                               
           │                                                                                                   
0.161412s: │ ▲ ▲ qemu mean:1.160562s ±22.6ms (87.8%) qemu:overhead mean:0.123142s ±4.5ms (9.3%)          
           │ │ │                                                                                           
0.263153s: │ │ │ ▲ seabios mean:0.241488s ±2.8ms (18.3%)                                                
           │ │ │ │                                                                                       
0.284554s: │ │ ▼ │                                                                                       
0.284554s: │ │   │ ▲ bios:overhead mean:0.220087s ±2.8ms (16.6%)                                        
           │ │   │ │                                                                                     
0.504641s: │ │   ▼ ▼                                                                                     
0.504641s: │ │ ▲ ▲ kernel mean:0.817332s ±21.4ms (61.8%) kernel:overhead mean:0.374896s ±5.2ms (28.4%) 
           │ │ │ │                                                                                       
0.879537s: │ │ │ ▼                                                                                       
0.879537s: │ │ │ ▲ supermin:mini-initrd mean:0.086014s ±7.9ms (6.5%)                                    
           │ │ │ │                                                                                       
0.881863s: │ │ │ │ ▲ supermin: internal insmod crc32-pclmul.ko mean:0.001399s ±0.1ms (0.1%)           
           │ │ │ │ │                                                                                   
0.883262s: │ │ │ │ ▼                                                                                   
0.883262s: │ │ │ │ ▲ supermin: internal insmod crc32c-intel.ko mean:0.000226s ±0.5ms (0.0%)           
0.883488s: │ │ │ │ ▼                                                                                   
0.883488s: │ │ │ │ ▲ supermin: internal insmod crct10dif-pclmul.ko mean:0.000882s ±0.4ms (0.1%)       
0.884370s: │ │ │ │ ▼                                                                                   
0.884370s: │ │ │ │ ▲ supermin: internal insmod crc32.ko mean:0.001121s ±0.0ms (0.1%)                  
           │ │ │ │ │                                                                                   
0.885490s: │ │ │ │ ▼                                                                                   
0.885490s: │ │ │ │ ▲ supermin: internal insmod virtio.ko mean:0.001634s ±0.5ms (0.1%)                 
           │ │ │ │ │                                                                                   
0.887124s: │ │ │ │ ▼                                                                                   
0.887124s: │ │ │ │ ▲ supermin: internal insmod virtio_ring.ko mean:0.000581s ±0.7ms (0.0%)            
0.887706s: │ │ │ │ ▼                                                                                   
0.887706s: │ │ │ │ ▲ supermin: internal insmod virtio_blk.ko mean:0.001115s ±0.0ms (0.1%)             
           │ │ │ │ │                                                                                   
0.888821s: │ │ │ │ ▼                                                                                   
0.888821s: │ │ │ │ ▲ supermin: internal insmod virtio-rng.ko mean:0.000884s ±0.4ms (0.1%)             
0.889705s: │ │ │ │ ▼                                                                                   
0.889705s: │ │ │ │ ▲ supermin: internal insmod virtio_console.ko mean:0.001923s ±0.4ms (0.1%)         
           │ │ │ │ │                                                                                   
0.891627s: │ │ │ │ ▼                                                                                   
0.891627s: │ │ │ │ ▲ supermin: internal insmod virtio_net.ko mean:0.001483s ±0.4ms (0.1%)             
           │ │ │ │ │                                                                                   
0.893111s: │ │ │ │ ▼                                                                                   
0.893111s: │ │ │ │ ▲ supermin: internal insmod virtio_scsi.ko mean:0.000686s ±0.6ms (0.1%)            
0.893797s: │ │ │ │ ▼                                                                                   
0.893797s: │ │ │ │ ▲ supermin: internal insmod virtio_balloon.ko mean:0.000663s ±0.5ms (0.1%)         
0.894460s: │ │ │ │ ▼                                                                                   
0.894460s: │ │ │ │ ▲ supermin: internal insmod virtio_input.ko mean:0.000875s ±0.4ms (0.1%)           
0.895336s: │ │ │ │ ▼                                                                                   
0.895336s: │ │ │ │ ▲ supermin: internal insmod virtio_mmio.ko mean:0.001097s ±0.0ms (0.1%)            
           │ │ │ │ │                                                                                   
0.896433s: │ │ │ │ ▼                                                                                   
0.896433s: │ │ │ │ ▲ supermin: internal insmod virtio_pci.ko mean:0.050700s ±7.8ms (3.8%)             
           │ │ │ │ │                                                                                   
0.947133s: │ │ │ │ ▼                                                                                   
0.947133s: │ │ │ │ ▲ supermin: internal insmod crc-ccitt.ko mean:0.001144s ±0.6ms (0.1%)              
           │ │ │ │ │                                                                                   
0.948277s: │ │ │ │ ▼                                                                                   
0.948277s: │ │ │ │ ▲ supermin: internal insmod crc-itu-t.ko mean:0.000001s ±0.0ms (0.0%)              
0.948278s: │ │ │ │ ▼                                                                                   
0.948278s: │ │ │ │ ▲ supermin: internal insmod crc8.ko mean:0.001368s ±0.3ms (0.1%)                   
           │ │ │ │ │                                                                                   
0.949646s: │ │ │ │ ▼                                                                                   
0.949646s: │ │ │ │ ▲ supermin: internal insmod libcrc32c.ko mean:0.001043s ±0.9ms (0.1%)              
           │ │ │ │ │                                                                                   
0.950689s: │ │ │ │ ▼                                                                                   
           │ │ │ │                                                                                       
0.965551s: │ │ │ ▼                                                                                       
0.965551s: │ │ │ ▲ ▲ /init mean:0.318045s ±18.0ms (24.1%) bash:overhead mean:0.015855s ±3.1ms (1.2%) 
           │ │ │ │ │                                                                                   
0.981407s: │ │ │ │ ▼                                                                                   
           │ │ │ │                                                                                       
1.283597s: │ │ │ ▼                                                                                       
1.283597s: │ │ │ ▲ guestfsd mean:0.019151s ±1.9ms (1.4%)                                                
           │ │ │ │                                                                                       
1.294818s: │ │ │ │ ▲ shutdown mean:0.027156s ±4.1ms (2.1%)                                            
           │ │ │ │ │                                                                                   
1.302747s: │ │ │ ▼ │                                                                                   
           │ │ │   │                                                                                     
1.321973s: │ │ ▼   │                                                                                     
1.321973s: ▼ ▼     ▼                                                                                       

Armed with this analysis I made a good start on reducing the boot time. It’s now down to 1.2s (on my laptop) and there is scope for sub-second boots.

Some of the things I’ve changed to get to 1.2s:

Some of the things that may reduce boot times further:

  • Stop SeaBIOS from probing the entire PCI space looking for a boot device it will never use.
  • Implement DAX so that the appliance can execute files directly from the backing disk instead of loading them into RAM (a rough sketch of the qemu side appears after this list).
  • A much more detailed look at the qemu and kernel startup process, taking a knife to anything that unnecessarily sleeps or wastes time.
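
For the DAX item, the rough shape of the experiment is to expose the appliance root as an emulated NVDIMM and mount it with the dax option inside the guest, so files are mapped straight from the host file rather than copied into guest RAM. A hypothetical sketch of the qemu side only, with made-up names and sizes (the filesystem on the image also has to support DAX, eg. ext4 or xfs):

$ qemu-system-x86_64 \
    -machine pc,nvdimm=on \
    -m 2G,slots=2,maxmem=4G \
    -object memory-backend-file,id=mem1,share=on,mem-path=appliance-root.img,size=1G \
    -device nvdimm,id=nvdimm1,memdev=mem1 \
    [... rest of the usual appliance command line ...]
$ # Inside the guest the region shows up as /dev/pmem0 and can be mounted with 'mount -o dax'.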

By the way: even if you never use libguestfs but you do use virtualized Linux, this work benefits you too.


Tracing QEMU guest execution part 4

The final two questions that I posed last time were about constructing a timeline of what this guest is spending its time on.

We can easily see system calls in the trace log, and we can also see when a kernel function is entered for the first time (indicating that a new part of the kernel is now running), so I wrote a Perl script to analyze that. It gave me a 115K-line log file, from which I did the rest of the analysis by hand to generate a timeline.
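
The script is specific to my trace format, but the core idea, recording the first time each function name appears, is tiny. A hedged shell equivalent, assuming (hypothetically) a timestamp in the first field and the function name in the last field of each trace line:

$ # Print timestamp and name the first time each function shows up:
$ awk '!seen[$NF]++ { print $1, $NF }' trace.log > first-entries.txt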

I would reproduce the timeline here, but the results aren’t very enlightening. In particular, I doubt it’s more interesting than what you can get by reading the kernel printks from a boot log.

What is my conclusion after using these techniques to analyze a booting guest? Here I go:

  • It’s clunky and undocumented. Hopefully this series will help a little.
  • It would be much more powerful with stack traces. It should be possible to get them from QEMU, at least in theory, but it’s a lot of work to do so.
  • It would be much more powerful if we could analyze into kernel modules and userspace.
  • More tooling around this might make it more bearable.


Tracing QEMU guest execution part 3

In the previous post I posed three questions about my detailed function-level trace of the kernel booting under QEMU. The first one was: which kernel functions consume the most time?

We don’t have stack traces, although that would be a good direction for future work. So if a function “A” calls another function “B” like this:

A
  --> calls B
  <-- B returns
A

then we’re going to assign just the time spent in the two parts of “A” to “A”. In other words, “A” doesn’t inherit the time taken running “B”. For example, if “A” runs for 2ms, calls “B” which takes 5ms, and then runs for 3ms more before returning, “A” is charged 5ms, not 10ms. (The exception is if “B” is inlined, in which case the trace cannot distinguish which code is in “A” and which is in “B”.)

The other limitation is lack of insight into what kernel modules are doing. This is very difficult to solve: compiling kernel modules into the kernel proper would change what I’m trying to measure considerably.

Given those limitations, I wrote a Perl script to do all that from my previous trace. I’m not going to bother reproducing the script here because it’s quite simple, but also very specific to the trace format I’m using. If you got this far, you’ll have no trouble writing your own analysis tools.

The results are at the end of the post. For each function that was called, I had a look into the kernel code to find out what it seems to be doing, and those notes are below.

  • sha256_transform is the SHA256 function. While I’m not sure what it is being used for (some kind of module signature check seems to be the most likely explanation), the more interesting thing is that we’re not using any specialized version of the function (eg. with AVX support). That’s easily explained: we’re using TCG, not KVM, so no host processor features are available. However on a real boot we would be using AVX, so the function should take a lot less time, and I think we can discount it.
  • native_safe_halt is the function which halts the processor when it is idle (eg. waiting for an interrupt). Is it worrying that we spend 1/50th of the time not working? Would it help to have more virtual CPUs or is there an inherent lack of parallelism in the boot process?
  • It’s interesting that we spend such a large amount of time in the sort function. It’s used all over the place, eg. for sorting the BIOS E820 map, sorting memory mappings, checking boot parameters, sorting lists of wake-up events …
  • task_tick_fair is part of the Completely Fair Scheduler.
  • If there’s a boot option to disable ftrace, I cannot find it.

My main conclusion is there is no “smoking gun” here. Everything generally points to things that a kernel (or at least, the core, non-module parts of a kernel) should be doing. So this analysis has not been very helpful.

