Monthly Archives: March 2016

Tianocore (UEFI) now has a free FAT driver

Tianocore, the basis for many UEFI firmware implementations, has long been nearly free software. Low-level hardware initialization is provided by CPU and motherboard manufacturers as binary blobs, but that part doesn’t matter for virtualization, where we don’t need the blobs.

The main hindrance to shipping Tianocore in Linux distros was the FAT driver. UEFI standardized on FAT as the format for the boot partition, and Microsoft supplied the corresponding FAT driver in Tianocore, but with a terms-of-use restriction that meant it was not free software. Today that changed: Microsoft has relicensed the code without the use restriction. The code is available here. So yes, thanks Microsoft, and thanks also to Intel, who were involved in this.


libguestfs appliance boot in under 1s

$ time LIBGUESTFS_BACKEND=direct LIBGUESTFS_HV=~/d/qemu/x86_64-softmmu/qemu-system-x86_64 guestfish -a /dev/null run

real	0m0.966s
user	0m0.623s
sys	0m0.281s

However I had to patch qemu to enable DMA loading of the kernel and initrd.


Getting the libguestfs appliance boot time down to 1.2s

libguestfs can securely mount any disk image, but to do this it requires a small appliance to be run. The appliance is a very cut-down Linux distro, but it still takes time to boot. For a while that time has floated around 3-5 seconds. This excludes libguestfs from some important use cases, one being the ability to monitor thousands of VMs every few minutes (simple maths: 1000 VMs × 3 s = 3000 s of serial appliance time, against only 5 × 60 = 300 s in the window, so you cannot monitor 1000 VMs every 5 minutes without running at least ten appliances in parallel).

Last year Intel announced Clear Containers. You may be forgiven for being confused (unclear?) by what Clear Containers actually is, but Intel’s demo is quite neat. (You can run these commands as non-root, and at the time of writing they won’t damage your machine. Despite its name, the download is actually bzip2-compressed, hence the rename below.)

$ wget https://download.clearlinux.org/demos/containers/clear-containers-demo.tar.xz
$ mv clear-containers-demo.tar.xz clear-containers-demo.tar.bz2
$ bunzip2 clear-containers-demo.tar.bz2
$ tar xf clear-containers-demo.tar
$ cd containers
$ bash ./boot.sh

It’s a complete Linux guest that boots in a fraction of a second. I take that as a challenge!

The first step is to have a good idea what all the parts are doing and what is taking the time. Booting an appliance involves several actors — qemu, BIOS, the guest kernel — and without being able to measure how much time each one spends doing things, it’s rather hard to say what needs work or if we’re making improvements. This was why I spent last week unsuccessfully looking at QEMU tracing. I have now settled on a simpler approach which is to time boot messages. The new boot analysis program produces quite clear output:

Warming up the libguestfs cache ...                                                                              
Running the tests in 5 passes ...                                                                                
    pass 1: 798 events collected in 1347184178 ns                                                                
    pass 2: 798 events collected in 1324153548 ns                                                                
    pass 3: 798 events collected in 1342126721 ns                                                                
    pass 4: 798 events collected in 1279500931 ns                                                                
    pass 5: 798 events collected in 1317457653 ns                                                                
Analyzing the results ...                                                                                        
                                                                                                                 
0.000000s: ▲ run mean:1.321973s ±24.0ms (100.0%)                                                              
0.000065s: │ ▲ supermin:build mean:0.010523s ±0.1ms (0.8%)                                                  
           │ │                                                                                               
0.010588s: │ ▼                                                                                               
0.010612s: │ ▲ qemu:feature-detect mean:0.149075s ±4.2ms (11.3%)                                            
           │ │                                                                                               
0.159687s: │ ▼                                                                                               
           │                                                                                                   
0.161412s: │ ▲ ▲ qemu mean:1.160562s ±22.6ms (87.8%) qemu:overhead mean:0.123142s ±4.5ms (9.3%)          
           │ │ │                                                                                           
0.263153s: │ │ │ ▲ seabios mean:0.241488s ±2.8ms (18.3%)                                                
           │ │ │ │                                                                                       
0.284554s: │ │ ▼ │                                                                                       
0.284554s: │ │   │ ▲ bios:overhead mean:0.220087s ±2.8ms (16.6%)                                        
           │ │   │ │                                                                                     
0.504641s: │ │   ▼ ▼                                                                                     
0.504641s: │ │ ▲ ▲ kernel mean:0.817332s ±21.4ms (61.8%) kernel:overhead mean:0.374896s ±5.2ms (28.4%) 
           │ │ │ │                                                                                       
0.879537s: │ │ │ ▼                                                                                       
0.879537s: │ │ │ ▲ supermin:mini-initrd mean:0.086014s ±7.9ms (6.5%)                                    
           │ │ │ │                                                                                       
0.881863s: │ │ │ │ ▲ supermin: internal insmod crc32-pclmul.ko mean:0.001399s ±0.1ms (0.1%)           
           │ │ │ │ │                                                                                   
0.883262s: │ │ │ │ ▼                                                                                   
0.883262s: │ │ │ │ ▲ supermin: internal insmod crc32c-intel.ko mean:0.000226s ±0.5ms (0.0%)           
0.883488s: │ │ │ │ ▼                                                                                   
0.883488s: │ │ │ │ ▲ supermin: internal insmod crct10dif-pclmul.ko mean:0.000882s ±0.4ms (0.1%)       
0.884370s: │ │ │ │ ▼                                                                                   
0.884370s: │ │ │ │ ▲ supermin: internal insmod crc32.ko mean:0.001121s ±0.0ms (0.1%)                  
           │ │ │ │ │                                                                                   
0.885490s: │ │ │ │ ▼                                                                                   
0.885490s: │ │ │ │ ▲ supermin: internal insmod virtio.ko mean:0.001634s ±0.5ms (0.1%)                 
           │ │ │ │ │                                                                                   
0.887124s: │ │ │ │ ▼                                                                                   
0.887124s: │ │ │ │ ▲ supermin: internal insmod virtio_ring.ko mean:0.000581s ±0.7ms (0.0%)            
0.887706s: │ │ │ │ ▼                                                                                   
0.887706s: │ │ │ │ ▲ supermin: internal insmod virtio_blk.ko mean:0.001115s ±0.0ms (0.1%)             
           │ │ │ │ │                                                                                   
0.888821s: │ │ │ │ ▼                                                                                   
0.888821s: │ │ │ │ ▲ supermin: internal insmod virtio-rng.ko mean:0.000884s ±0.4ms (0.1%)             
0.889705s: │ │ │ │ ▼                                                                                   
0.889705s: │ │ │ │ ▲ supermin: internal insmod virtio_console.ko mean:0.001923s ±0.4ms (0.1%)         
           │ │ │ │ │                                                                                   
0.891627s: │ │ │ │ ▼                                                                                   
0.891627s: │ │ │ │ ▲ supermin: internal insmod virtio_net.ko mean:0.001483s ±0.4ms (0.1%)             
           │ │ │ │ │                                                                                   
0.893111s: │ │ │ │ ▼                                                                                   
0.893111s: │ │ │ │ ▲ supermin: internal insmod virtio_scsi.ko mean:0.000686s ±0.6ms (0.1%)            
0.893797s: │ │ │ │ ▼                                                                                   
0.893797s: │ │ │ │ ▲ supermin: internal insmod virtio_balloon.ko mean:0.000663s ±0.5ms (0.1%)         
0.894460s: │ │ │ │ ▼                                                                                   
0.894460s: │ │ │ │ ▲ supermin: internal insmod virtio_input.ko mean:0.000875s ±0.4ms (0.1%)           
0.895336s: │ │ │ │ ▼                                                                                   
0.895336s: │ │ │ │ ▲ supermin: internal insmod virtio_mmio.ko mean:0.001097s ±0.0ms (0.1%)            
           │ │ │ │ │                                                                                   
0.896433s: │ │ │ │ ▼                                                                                   
0.896433s: │ │ │ │ ▲ supermin: internal insmod virtio_pci.ko mean:0.050700s ±7.8ms (3.8%)             
           │ │ │ │ │                                                                                   
0.947133s: │ │ │ │ ▼                                                                                   
0.947133s: │ │ │ │ ▲ supermin: internal insmod crc-ccitt.ko mean:0.001144s ±0.6ms (0.1%)              
           │ │ │ │ │                                                                                   
0.948277s: │ │ │ │ ▼                                                                                   
0.948277s: │ │ │ │ ▲ supermin: internal insmod crc-itu-t.ko mean:0.000001s ±0.0ms (0.0%)              
0.948278s: │ │ │ │ ▼                                                                                   
0.948278s: │ │ │ │ ▲ supermin: internal insmod crc8.ko mean:0.001368s ±0.3ms (0.1%)                   
           │ │ │ │ │                                                                                   
0.949646s: │ │ │ │ ▼                                                                                   
0.949646s: │ │ │ │ ▲ supermin: internal insmod libcrc32c.ko mean:0.001043s ±0.9ms (0.1%)              
           │ │ │ │ │                                                                                   
0.950689s: │ │ │ │ ▼                                                                                   
           │ │ │ │                                                                                       
0.965551s: │ │ │ ▼                                                                                       
0.965551s: │ │ │ ▲ ▲ /init mean:0.318045s ±18.0ms (24.1%) bash:overhead mean:0.015855s ±3.1ms (1.2%) 
           │ │ │ │ │                                                                                   
0.981407s: │ │ │ │ ▼                                                                                   
           │ │ │ │                                                                                       
1.283597s: │ │ │ ▼                                                                                       
1.283597s: │ │ │ ▲ guestfsd mean:0.019151s ±1.9ms (1.4%)                                                
           │ │ │ │                                                                                       
1.294818s: │ │ │ │ ▲ shutdown mean:0.027156s ±4.1ms (2.1%)                                            
           │ │ │ │ │                                                                                   
1.302747s: │ │ │ ▼ │                                                                                   
           │ │ │   │                                                                                     
1.321973s: │ │ ▼   │                                                                                     
1.321973s: ▼ ▼     ▼                                                                                       
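
The mechanism behind those numbers is nothing fancy: record the time at which each known message appears on the appliance’s console. Purely as an illustration (this is not the real boot-analysis program, which makes several passes and computes the statistics shown above), a sketch of the idea in Perl:

#!/usr/bin/perl
# Sketch only: timestamp known boot messages arriving on stdin.
use strict;
use warnings;
use Time::HiRes qw(clock_gettime CLOCK_MONOTONIC);

my $start = clock_gettime (CLOCK_MONOTONIC);
while (<>) {
    # These patterns are illustrative, not the real event list.
    if (/SeaBIOS|Linux version|supermin:|guestfsd/) {
        printf "%.6fs: %s", clock_gettime (CLOCK_MONOTONIC) - $start, $_;
    }
}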

Armed with this analysis I made a good start on reducing the boot time. It’s now down to 1.2s (on my laptop) and there is scope for sub-second boots.

Some of the things I’ve changed to get to 1.2s:

Some of the things that may reduce boot times further:

  • Stop SeaBIOS from probing the entire PCI space looking for a boot device it will never use.
  • Implement DAX so that the appliance can execute files directly from backing disk instead of loading them into RAM.
  • A much more detailed look at the qemu and kernel startup process, taking a knife to anything that unnecessarily sleeps or wastes time.

By the way: even if you never use libguestfs but do use virtualized Linux, this work benefits you too.


Tracing QEMU guest execution part 4

The final two questions that I posed last time were to do with constructing a timeline of what this guest is spending time on.

We can easily see system calls in the trace log, and we can also see when a kernel function is entered for the first time (indicating that a new bit of the kernel is now running), and I wrote a Perl script to analyze both. That gave me a 115K-line log file, from which I did the rest of the analysis by hand to generate a timeline.
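
That script isn’t worth reproducing in full either, but the first-call detection at its heart is tiny. A rough sketch (not the actual script), assuming the timestamp-plus-symbol lines produced by the ksyms.pl script from the first post in this series:

#!/usr/bin/perl
# Sketch only: print each decoded kernel symbol the first time it
# appears, giving a rough timeline of when new parts of the kernel
# start running.  Input lines look like:
#   11409434.8589973 prepare_exit_to_usermode in section .text
use strict;
use warnings;

my %seen;
while (<>) {
    next unless m/^([\d.]+)\s+(\S+)(\s+\+\s+\d+)?\s+in section/;
    my ($ts, $sym) = ($1, $2);
    print "$ts $sym\n" unless $seen{$sym}++;
}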

I would reproduce the timeline here, but it isn’t very enlightening. In particular I doubt it’s more interesting than what you can get by reading the kernel printks from a boot log.

What is my conclusion after using these techniques to analyze a booting guest? Here I go:

  • It’s clunky and undocumented. Hopefully this series helps a little.
  • It would be much more powerful with stack traces. It should be possible to get them from QEMU, at least in theory, but it’s a lot of work.
  • It would be much more powerful if the analysis could be extended into kernel modules and userspace.
  • More tooling around this might make it more bearable.


Tracing QEMU guest execution part 3

In the previous post I posed three questions about my detailed function-level trace of the kernel booting under QEMU. The first one was Which kernel functions consume the most time?

We don’t have stack traces, although that would be a good direction for future work. So if a function “A” calls another function “B” like this:

A
  --> calls B
  <-- B returns
A

then we’re going to assign just the time in the two parts of “A” to “A”. In other words, “A” doesn’t inherit the time taken running “B”. (Except if “B” is inlined, in which case the trace cannot distinguish which code is in “A” and which is in “B”.)

The other limitation is lack of insight into what kernel modules are doing. This is very difficult to solve: compiling kernel modules into the kernel proper would change what I’m trying to measure considerably.

Given those limitations, I wrote a Perl script to do all that from my previous trace. I’m not going to bother reproducing the script here because it’s quite simple, but also very specific to the trace format I’m using. If you got this far, you’ll have no trouble writing your own analysis tools.
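
To give a flavour though, here is a rough sketch of the self-time accounting described above. It is not the actual script: it assumes timestamp-plus-symbol input as produced in part 2 of this series, and it simply ignores undecoded addresses:

#!/usr/bin/perl
# Sketch only: accumulate self-time per kernel function.  The time
# between two consecutive decoded events is charged to the function
# named by the first event, so "A" never inherits time spent in "B".
use strict;
use warnings;

my %self;
my ($prev_ts, $prev_fn);
while (<>) {
    next unless m/^([\d.]+)\s+(\S+)\s+.*in section/;
    my ($ts, $fn) = ($1, $2);
    $self{$prev_fn} += $ts - $prev_ts if defined $prev_fn;
    ($prev_ts, $prev_fn) = ($ts, $fn);
}
printf "%15.3f %s\n", $self{$_}, $_
    for sort { $self{$b} <=> $self{$a} } keys %self;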

The results are at the end of the post. For each function that was called, I had a look into the kernel code to find out what it seems to be doing, and those notes are below.

  • sha256_transform is the SHA-256 hash function. While I’m not sure what it is being used for (some kind of module signature check seems the most likely explanation), the more interesting thing is that we’re not using any specialized version of the function (eg. with AVX support). That’s easily explained: we’re using TCG, not KVM, so no host processor features are available. However on a real boot we would be using AVX, so the function should take a lot less time, and I think we can discount it.
  • native_safe_halt is the function which halts the processor when it is idle (eg. waiting for an interrupt). Is it worrying that we spend 1/50th of the time not working? Would it help to have more virtual CPUs or is there an inherent lack of parallelism in the boot process?
  • It’s interesting that we spend such a large amount of time in the sort function. It’s used all over the place, eg. for sorting the BIOS E820 map, sorting memory mappings, checking boot parameters, sorting lists of wake-up events …
  • task_tick_fair is part of the Completely Fair Scheduler.
  • If there’s a boot option to disable ftrace, I cannot find it.

My main conclusion is there is no “smoking gun” here. Everything generally points to things that a kernel (or at least, the core, non-module parts of a kernel) should be doing. So this analysis has not been very helpful.

Continue reading


Tracing QEMU guest execution part 2

After an overnight 12+ hour run of my Perl scripts I now have a 52 million line file that consists of timestamps, kernel symbols, and other untranslated linear addresses. The only possible way to analyze this will be with yet more scripts, but already there are lots of interesting things.

Here is the kernel code entering userspace:

11409434.8589973 prepare_exit_to_usermode in section .text
11409435.5189973 retint_user + 8 in section .text
11409436.4899973 7fb95ab3cd20
11409447.6899973 7fb95ab4c930
11409469.2169973 7fb95ab577f0

Userspace symbols cannot be decoded because we don’t know which process is being run. More importantly, code in kernel modules cannot be decoded, so we only see core kernel functions.

Handling a timer interrupt:

18000723.5261105 apic_timer_interrupt in section .text
18000725.2241105 smp_apic_timer_interrupt in section .text
18000726.7681105 native_apic_mem_write in section .text
18000729.2691105 smp_apic_timer_interrupt + 46 in section .text
18000729.8361105 irq_enter in section .text
18000730.3941105 rcu_irq_enter in section .text
18000731.1401105 rcu_irq_enter + 92 in section .text
18000731.7111105 irq_enter + 14 in section .text
18000732.3171105 smp_apic_timer_interrupt + 51 in section .text
18000740.9941105 exit_idle in section .text
18000741.5481105 smp_apic_timer_interrupt + 56 in section .text
18000742.0881105 local_apic_timer_interrupt in section .text
18000744.0341105 tick_handle_periodic in section .text
18000744.6341105 _raw_spin_lock in section .text
18000745.9291105 tick_periodic + 67 in section .text
18000747.3941105 do_timer in section .text

Userspace loading a kernel module:

7806760.57896065 40087d
7806765.09696065 4442b0
7806765.65396065 entry_SYSCALL_64 in section .text
7806766.14496065 entry_SYSCALL_64 + 32 in section .text
7806766.46296065 entry_SYSCALL_64 + 36 in section .text
7806788.75296065 sys_init_module in section .text
7806796.76296065 sys_init_module + 62 in section .text
7806797.28296065 sys_init_module + 62 in section .text
7806800.64896065 sys_init_module + 65 in section .text
7806801.94496065 capable in section .text
7806802.91196065 security_capable in section .text
7806804.30796065 cap_capable in section .text
7806804.87796065 security_capable + 72 in section .text
7806805.43596065 ns_capable + 41 in section .text
7806805.92096065 capable + 23 in section .text
7806810.46796065 sys_init_module + 75 in section .text
7806815.59796065 sys_init_module + 86 in section .text
7806821.10196065 sys_init_module + 96 in section .text
7806827.28496065 sys_init_module + 109 in section .text
7806831.23396065 sys_init_module + 129 in section .text
7806839.75396065 security_kernel_module_from_file in section .text
[etc]

What am I interested in knowing? My overall goal is to find areas in the kernel and userspace that we can optimize to make boot faster. Specifically it seems interesting at first to look at three questions:

  1. Which kernel functions consume the most time? This is “profiling 101”. It may indicate kernel functions which are particularly slow (eg. they contain sleeps).
  2. The time elapsed before each kernel function is called the first time. Because studying the whole trace is impossible (by hand), finding the first time each kernel function is called may give me an insight into major timeline events — eg. the first time one of the ext4 mount functions is called, we know that userspace has got to the point where it mounts the root filesystem.
  3. When does userspace make system calls? This is related to the previous point. It’s also the easiest type of analysis to do because system call entry is easy to detect from the trace.
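
On the third point: system call entries show up in the decoded log as entry_SYSCALL_64 (see the module-loading excerpt above), so even a shell one-liner can pull them out of a symbols.log produced by ksyms.pl. For example (illustrative, not part of the original analysis):

$ grep -c 'entry_SYSCALL_64 in section' symbols.log
$ awk '/entry_SYSCALL_64 in section/ { print $1 }' symbols.log | head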

More to follow …


Tracing QEMU guest execution

When QEMU executes a guest using software emulation (“TCG”), it translates blocks of guest code to native code and then executes them (the TCG translation process is described in the talk “Towards multi-threaded TCG” by Alex Bennée and Frederic Konrad). If you’re interested in tracing guest code — perhaps in order to look at what code is being run or to benchmark it — it should be possible to instrument the translated blocks. And in fact it is. However I thought I’d document this since it took me ages to work out and it’s not exactly obvious how to do it.

Firstly you have to compile QEMU from source. Before compiling it, read docs/tracing.txt carefully. Also edit trace-events and remove the disable keyword from the following lines in that file:

 # TCG related tracing (mostly disabled by default)
 # cpu-exec.c
-disable exec_tb(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
-disable exec_tb_nocache(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
-disable exec_tb_exit(void *next_tb, unsigned int flags) "tb:%p flags=%x"
+exec_tb(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
+exec_tb_nocache(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
+exec_tb_exit(void *next_tb, unsigned int flags) "tb:%p flags=%x"

Add those trace events to your /tmp/events file. Also it’s useful to put your full qemu command line into a script, with the additional -trace events=/tmp/events parameter, so you have a way to rerun the trace.
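
To make that concrete, here is what the events file and wrapper script might look like. The event names follow from the edit above; the qemu invocation is only an illustration, so adjust the path, kernel and options to suit your setup:

$ cat /tmp/events
exec_tb
exec_tb_nocache
exec_tb_exit

$ cat run-trace.sh
#!/bin/sh
# Illustrative wrapper; TCG (not KVM) is what we are tracing.
./x86_64-softmmu/qemu-system-x86_64 \
    -machine accel=tcg -m 1024 \
    -kernel /boot/vmlinuz-4.4.4-301.fc23.x86_64 \
    -append console=ttyS0 -nographic \
    -trace events=/tmp/events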

What you end up with after you’ve done this and run your guest under the tracing test conditions is an enormous trace file. My trace file, simply from a kernel boot, was 3.9 GB.

You can now analyze the log using the scripts/simpletrace.py script, as described in the QEMU tracing documentation. Again, the output will be enormous. Mine begins like this; I’ve aligned the output to make it a bit easier to read:

$ ./scripts/simpletrace.py trace-events trace-4491 | head
exec_tb       0.000 pid=4491 tb=0x7fa869afe010 pc=0xfffffff0
exec_tb_exit  1.953 pid=4491 next_tb=0x0 flags=0x0
exec_tb      15.525 pid=4491 tb=0x7fa869afe090 pc=0xfe05b
exec_tb_exit  0.800 pid=4491 next_tb=0x7fa869afe090 flags=0x0
exec_tb       7.215 pid=4491 tb=0x7fa869afe110 pc=0xfe066
exec_tb_exit  0.234 pid=4491 next_tb=0x0 flags=0x0
exec_tb       5.940 pid=4491 tb=0x7fa869afe190 pc=0xfe06a
exec_tb_exit  0.222 pid=4491 next_tb=0x0 flags=0x0
exec_tb       2.945 pid=4491 tb=0x7fa869afe210 pc=0xfe070
exec_tb_exit  0.222 pid=4491 next_tb=0x0 flags=0x0

The pid and *tb fields are not very interesting, being the QEMU PID and the internal address of the translated blocks.

However the pc (program counter) field and the timestamp (µs delta from previous trace event) are useful: Notice that the first translated block of guest code is located at guest address 0xffff_fff0, which is the linear address where x86 CPUs boot from, and the second at 0xf_e05b (segmented address F000:E05B) which is the start of the BIOS ROM, so that’s encouraging.

Assuming you’ve now decoded the whole file (the decoded trace takes 5.5GB for me!), how can we turn these raw timestamp deltas and raw linear addresses into useful information? At this point we’re on our own, and I ended up writing Perl scripts to process and analyze the data.

The first Perl script is simple enough, and is just used to associate absolute timestamps with the program counter:

#!/usr/bin/perl -w

use warnings;
use strict;

my $ts = 0;

while (<>) {
    my $ts_delta;
    my $pc;

    if (m{^exec_tb(_nocache)? ([-\d.]+).*pc=0x([a-fA-F0-9]+)}) {
        $ts_delta = $2;
        $pc = "$3";
    }
    elsif (m{^exec_tb_exit ([-\d.]+)}) {
        $ts_delta = $1;
    }
    elsif (m{^Dropped_Event ([-\d.]+)}) {
        $ts_delta = $1;
    }
    else {
        die "could not parse output: $_"
    }
    $ts += $ts_delta;

    if (defined $pc) {
        print "$ts $pc\n";
    }
}
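
Assuming you save this script as times.pl (a name I’m inventing here), you can pipe the decoded trace straight through it:

$ ./scripts/simpletrace.py trace-events trace-4491 | ./times.pl > times.log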

How do we know what program counter corresponds to what code? For this it’s helpful to know some magic numbers associated with booting PCs:

  • 0xfffffff0: initial PC
  • 0xfe05b: BIOS ROM entry point
  • 0x7c00: bootloader entry point (not used if you load the kernel using the -kernel option)
  • 0x1000000: 64 bit kernel entry point (may be different for your kernel; use readelf -e vmlinux)
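
For example, to check the kernel entry point with readelf (the output shown is typical for an x86-64 build, purely as an illustration):

$ readelf -h vmlinux | grep 'Entry point'
  Entry point address:               0x1000000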

With that knowledge I can put together a timeline of my kernel boot by hand:

  • 0s: BIOS
  • 1.16s: enter kernel
  • 38.4s: power off

Not too interesting. The next step is to reverse the PC addresses into kernel symbols. There used to be a useful script called ksymoops to do this, but it seems to have disappeared, so I wrote my own:

#!/usr/bin/perl -w
#
# Find everything that looks like a kernel address in the input
# and turn it into a symbol using gdb.
#
# Usage:
#   ksyms.pl vmlinux < input > output
# where 'vmlinux' is the kernel image which must contain debug
# symbols (on Fedora, find this in kernel-debuginfo).

use warnings;
use strict;

my $vmlinux = shift;
my %cache = ();

while (<>) {
    s{(^|\s)([0-9a-f]{6,16})(\s|$)}{ $1 . lookup ($2) . $3 }gei;
    print
}

sub lookup
{
    local $_;
    my $addr = $_[0];

    return $cache{$addr} if exists $cache{$addr};

    # Run gdb to lookup this symbol.
    my $cmd = "gdb -batch -s '$vmlinux' -ex 'info symbol 0x$addr'";
    open PIPE, "$cmd 2>&1 |" or die "$cmd: $!";
    my $r = <PIPE>;
    close PIPE;
    chomp $r;
    if ($r =~ m/^No symbol/) {
        # No match, just return the original string, but add the original
        # string to the cache so we don't do the lookup again.
        $cache{$addr} = $addr;
        return $addr;
    }

    # Memoize the match and return it.
    $cache{$addr} = $r;
    return $r;
}

You can run it like this:

$ ksyms.pl /usr/lib/debug/lib/modules/4.4.4-301.fc23.x86_64/vmlinux times.log > symbols.log

Come back tomorrow for further analysis …


Linaro Connect: Jon Masters talking about the importance of standards

For everyone who has ever used what I call junk ARM dev boards, and hasn’t been able to work out what combination of out-of-tree u-boot/kernel patches they should be using, or how to enable the serial port, Jon is trying to make the situation better on 64 bit ARM server hardware. Needless to say, I very much approve. Here is his talk:


Gigabyte MP30-AR0: Flashing UEFI

I finally got UEFI flashed onto the Gigabyte board so now it is SBSA/SBBR compliant [edit: see note at end] and will just work with RHEL. Instructions here: https://lists.centos.org/pipermail/arm-dev/2016-March/001743.html

Here are the boot messages from TianoCore:

TianoCore 1.20.03 UEFI 2.4.0 Jan 26 2016 18:09:04
CPU: APM ARM 64-bit Potenza Rev B0 2400MHz PCP 2400MHz
     32 KB ICACHE, 32 KB DCACHE
     SOC 2000MHz IOBAXI 400MHz AXI 250MHz AHB 200MHz GFC 125MHz
Board: X-Gene Mp30ar0 Board
Slimpro FW:
        Ver: 2.4 (build 01.15.01.00 2015/05/22)
        TPC: disable
        AVS: support
        PMD: 970 mV
        SOC: 950 mV
The default boot selection will start in   1 second

Note: A few people have pointed out that the Gigabyte isn’t SBSA compliant because it lacks the right serial port, RTC and WDT. However it still boots RHELSA fine if you add console=ttyS0,115200.
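
If it’s not obvious where to add that, the usual place is the kernel command line in the grub configuration, something like this (the paths are the stock RHEL ones, shown only as an illustration):

# In /etc/default/grub, append to the existing kernel arguments:
GRUB_CMDLINE_LINUX="... console=ttyS0,115200"

# Then regenerate the grub configuration:
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg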


New in libguestfs: Filesystem forensics support

Thanks to patches supplied by Matteo Cafasso, libguestfs, the library for accessing and modifying disk images, is gradually getting support for filesystem forensics.

Initially I have added a Fedora libguestfs-forensics subpackage, which pulls The Sleuth Kit (TSK) into virt-rescue.

Parts of TSK will also be made available as libguestfs APIs so they are callable from other programs.
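
A quick way to try it (the disk image path and the fls invocation are only an illustration; fls is the TSK tool that lists files in a filesystem):

# dnf install libguestfs-forensics
$ virt-rescue -a /tmp/disk.img
...
><rescue> fls /dev/sda1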
