nbdkit + libblkio

Our plugin-based Network Block Device server, nbdkit, now has support for libblkio.

libblkio is a library written by Stefan Hajnoczi, Alberto Faria, Stefano Garzarella and others for accessing somewhat unusual disk protocols including vhost-user, NVMe, vDPA, VFIO and io_uring, which I’ll talk about below. It’s important to know that these are not disk formats (like raw or qcow2), but accelerated protocols for talking to virtual or real hardware.

The library is written in Rust (but offers a C API) and I believe it’s intended to replace various bottom-end parts of the qemu block layer at some point in the future.

The library uses a set of property strings to describe how to connect to a device. The nbdkit plugin maps those almost exactly into command line parameters, so you can usually follow the libblkio docs and translate that into an nbdkit command line, eg:

$ nbdkit blkio io_uring path=fedora.img

This sets the libblkio driver to “io_uring” and the path to the path of a local file. This libblkio driver uses Linux’s relatively new io_uring facility to access a local file or block device, the simplest way to use libblkio.

The other most frequently used protocol or libblkio driver is vhost-user. This is a protocol that allows a server to share a disk image with clients on the same machine. It uses a Unix domain socket for communication, but unlike Network Block Device (NBD) it’s not possible to use it over the network. For greater performance, vhost uses shared memory between the client and server for data transfer.

qemu-storage-daemon is the most common server:

$ qemu-storage-daemon \
    --blockdev driver=file,node-name=file,filename=fedora.qcow2 \
    --blockdev driver=qcow2,node-name=qcow2,file=file \
    --export type=vhost-user-blk,id=export,addr.type=unix,addr.path=sock,node-name=qcow2

To connect from nbdkit, just use the socket:

$ nbdkit blkio virtio-blk-vhost-user path=sock

You might wonder why we want to add libblkio support to nbdkit (apart from it being fun). There’s a practical reason: it brings along all of the scripting support we’ve created around NBD to these somewhat obscure (albeit quite widely used) protocols. I don’t think it was possible before to use Python to script against, eg. vhost-user, but now it is:

$ nbdsh -u nbd://localhost -c 'print("%r" % h.pread(512,0))'
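
Or as a standalone script (a sketch using the libnbd Python bindings, assuming the nbdkit command above is still listening on the default NBD port on localhost):

#!/usr/bin/python3
import nbd

# Connect to the running nbdkit (which in turn talks vhost-user to
# qemu-storage-daemon) and read the first sector.
h = nbd.NBD()
h.connect_uri("nbd://localhost")
print("size = %d" % h.get_size())
print("%r" % h.pread(512, 0))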


An NBD block device written using Linux ublk (user block device)

Commits [1] and [2] and more here.

ublk is a Linux-only io_uring-based user block device. It lets you write block devices in userspace. nbdublk is an NBD client written using ublk.

# modprobe ublk_drv
# nbdublk /dev/ublkb0 nbd://remote
# ublk list

# blockdev --getsize64 /dev/ublkb0
# mke2fs /dev/ublkb0
# (etc)

# ublk del -n 0


nbdkit for macOS

nbdkit, our high performance, portable Network Block Device server, has now been ported to macOS. It’s a command line tool, and macOS is sufficiently FreeBSD-like that the port wasn’t very hard. It’s relatively full-featured, including a large portion of the plugins and filters, a brand new exit-with-parent implementation, and almost all tests passing.

However, one larger problem remains (for performance): the lack of atomic CLOEXEC when opening pipes or sockets. Linux has pipe2(2) and accept4(2). I wasn’t able to find any good equivalent on macOS, and hence most of the time we are limited to serializing some requests that could otherwise run in parallel.
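
Here is a sketch of the difference in Python, which exposes the same calls (note that modern Python makes its own descriptors non-inheritable anyway, so treat this purely as an illustration of the underlying C APIs):

import fcntl, os

# Linux: close-on-exec is set atomically when the pipe is created.
r, w = os.pipe2(os.O_CLOEXEC)

# macOS has no pipe2(), so the flag must be added after the fact.
# Between pipe() and fcntl() another thread can fork and exec,
# leaking the descriptors; this is the race nbdkit has to
# serialize around.
r, w = os.pipe()
for fd in (r, w):
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)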

nbdkit already supported Linux, FreeBSD, OpenBSD, Haiku and Windows!


Composable tools for disk images

Over the past 3 or 4 years, my colleagues and I at Red Hat have been making a set of composable command line tools for handling virtual machine disk images. These let you copy, create, manipulate, display and modify disk images using simple tools that can be connected together in pipelines, while at the same time working very efficiently. It’s all based around the very efficient Network Block Device (NBD) protocol and NBD URI specification.

A basic and very old tool is qemu-img:

$ qemu-img create -f qcow2 disk.qcow2 1G

which creates an empty disk image in qcow2 format. Suppose you want to write into this image. We can compose a few programs:

$ touch disk.raw
$ nbdfuse disk.raw [ qemu-nbd -f qcow2 disk.qcow2 ] &

This serves the qcow2 file up over NBD (qemu-nbd) and then exposes that as a local file using FUSE (nbdfuse). Of interest here, nbdfuse runs and manages qemu-nbd as a subprocess, cleaning it up when the FUSE file is unmounted. We can partition the file using regular tools:

$ gdisk disk.raw
Command (? for help): n
Partition number (1-128, default 1): 
First sector (34-2097118, default = 2048) or {+-}size{KMGTP}: 
Last sector (2048-2097118, default = 2097118) or {+-}size{KMGTP}: 
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'
Command (? for help): p
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2097118   1023.0 MiB  8300  Linux filesystem
Command (? for help): w

Let’s fill that partition with some files using guestfish and unmount it:

$ guestfish -a disk.raw run : \
  mkfs ext2 /dev/sda1 : mount /dev/sda1 / : \
  copy-in ~/libnbd /
$ fusermount3 -u disk.raw
[1]+  Done    nbdfuse disk.raw [ qemu-nbd -f qcow2 disk.qcow2 ]

Now the original qcow2 file is no longer empty but populated with a partition, a filesystem and some files. We can see the space used by examining it with virt-df:

$ virt-df -a disk.qcow2 -h
Filesystem                Size   Used  Available  Use%
disk.qcow2:/dev/sda1     1006M    52M       903M    6%

Now let’s see the first sector. You can’t just “cat” a qcow2 file because it’s a complicated format understood only by qemu. I can assemble qemu-nbd, nbdcopy and hexdump into a pipeline, where qemu-nbd converts the qcow2 format to raw blocks, and nbdcopy copies those out to a pipe:

$ nbdcopy -- [ qemu-nbd -r -f qcow2 disk.qcow2 ] - | \
  hexdump -C -n 512
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000001c0  02 00 ee 8a 08 82 01 00  00 00 ff ff 1f 00 00 00  |................|
000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa  |..............U.|
00000200

How about instead of a local file, we start with a disk image hosted on a web server, and compressed? We can do that too. Let’s start by querying the size by composing nbdkit’s curl plugin, xz filter and nbdinfo. nbdkit’s --run option composes nbdkit with an external program, connecting them together over an NBD URI ($uri).

$ web=http://mirror.bytemark.co.uk/fedora/linux/development/rawhide/Cloud/x86_64/images/Fedora-Cloud-Base-Rawhide-20220127.n.0.x86_64.raw.xz
$ nbdkit curl --filter=xz $web --run 'nbdinfo $uri'
protocol: newstyle-fixed without TLS
export="":
	export-size: 5368709120 (5G)
	content: DOS/MBR boot sector, extended partition table (last)
	uri: nbd://localhost:10809/
...

Notice it prints the uncompressed (raw) size. Fedora already provides a qcow2 equivalent, but we can also make our own by composing nbdkit, curl, xz, nbdcopy and qemu-nbd:

$ qemu-img create -f qcow2 cloud.qcow2 5368709120 -o preallocation=metadata
$ nbdkit curl --filter=xz $web \
    --run 'nbdcopy -p -- $uri [ qemu-nbd -f qcow2 cloud.qcow2 ]'

Why would you do that instead of downloading and uncompressing? In this case it wouldn’t matter much, but in the general case the disk image might be enormous (terabytes) and you don’t have enough local disk space to do it. Assembling tools into pipelines means you don’t need to keep an intermediate local copy at any point.
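
For instance (a sketch; nbd://remote stands in for any NBD server you have write access to), you can stream the decompressed image straight to another machine without it ever touching local disk:

$ nbdkit curl --filter=xz $web --run 'nbdcopy -p -- $uri nbd://remote'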

We can find out what we’ve got in our new image using various tools:

$ qemu-img info cloud.qcow2 
image: cloud.qcow2
file format: qcow2
virtual size: 5 GiB (5368709120 bytes)
disk size: 951 MiB
$ virt-df -a cloud.qcow2  -h
Filesystem              Size       Used  Available  Use%
cloud.qcow2:/dev/sda2   458M        50M       379M   12%
cloud.qcow2:/dev/sda3   100M       9.8M        90M   10%
cloud.qcow2:/dev/sda5   4.4G       311M       3.6G    7%
cloud.qcow2:btrfsvol:/dev/sda5/root
                        4.4G       311M       3.6G    7%
cloud.qcow2:btrfsvol:/dev/sda5/home
                        4.4G       311M       3.6G    7%
cloud.qcow2:btrfsvol:/dev/sda5/root/var/lib/portables
                        4.4G       311M       3.6G    7%
$ virt-cat -a cloud.qcow2 /etc/redhat-release
Fedora release 36 (Rawhide)

If we wanted to play with the guest in a sandbox, we could stand up an in-memory NBD server populated with the cloud image and connect it to qemu using standard NBD URIs:

$ nbdkit memory 10G
$ qemu-img convert cloud.qcow2 nbd://localhost 
$ virt-customize --format=raw -a nbd://localhost \
    --root-password password:123456 
$ qemu-system-x86_64 -machine accel=kvm \
    -cpu host -m 2048 -serial stdio \
    -drive file=nbd://localhost,if=virtio 
...
fedora login: root
Password: 123456

# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0     11:0    1 1024M  0 rom  
zram0  251:0    0  1.9G  0 disk [SWAP]
vda    252:0    0   10G  0 disk 
├─vda1 252:1    0    1M  0 part 
├─vda2 252:2    0  500M  0 part /boot
├─vda3 252:3    0  100M  0 part /boot/efi
├─vda4 252:4    0    4M  0 part 
└─vda5 252:5    0  4.4G  0 part /home
                                /

We can even find out what changed between the in-memory copy and the pristine qcow2 version (quite a lot as it happens):

$ virt-diff --format=raw -a nbd://localhost --format=qcow2 -A cloud.qcow2 
- d 0755       2518 /etc
+ d 0755       2502 /etc
# changed: st_size
- - 0644        208 /etc/.updated
- d 0750        108 /etc/audit
+ d 0750         86 /etc/audit
# changed: st_size
- - 0640         84 /etc/audit/audit.rules
- d 0755         36 /etc/issue.d
+ d 0755          0 /etc/issue.d
# changed: st_size
... for several pages ...

In conclusion, we’ve got a couple of ways to serve disk content over NBD, a set of composable tools for copying, creating, displaying and modifying disk content either from local files or over NBD, and a way to pipe disk data between processes and systems.

We use this in virt-v2v, which can suck VMs out of VMware to KVM systems, efficiently, in parallel, and without using local disk space for even the largest guest.


nbdkit now supports LUKS encryption

nbdkit, our permissively licensed plugin-based Network Block Device server, can now transparently decode encrypted disks, for both reading and writing:

$ qemu-img create -f luks --object secret,data=SECRET,id=sec0 -o key-secret=sec0 encrypted-disk.img 1G

$ nbdkit file encrypted-disk.img --filter=luks passphrase=+/tmp/secret

We use LUKSv1 as the encryption format. That’s an older version (more on that in a moment) of the format used for Full Disk Encryption on Linux. It’s much preferable to use LUKS rather than qemu’s built-in qcow2 encryption, and our implementation is compatible with qemu’s.

You can place the filter on top of other nbdkit plugins, like curl:

$ nbdkit curl https://example.com/encrypted-disk.img --filter=luks passphrase=+/tmp/secret

The threat model here is that you can store the encrypted data on a remote server, and the admin of the server cannot decrypt the disk (assuming you don’t give them the passphrase).

If you try this filter (or qemu’s device) with a modern Linux LUKS disk you’ll find that it doesn’t work. This is because modern Linux uses LUKSv2, although it can still create, read and write LUKSv1 if you set things up that way in advance. Unfortunately LUKSv2 is significantly more complicated than LUKSv1. It requires parsing JSON data(!) stored in the header, and supports a wider range of key derivation functions, typically the very slow and memory-intensive argon2. LUKSv1 by contrast only requires support for PBKDF2 and is generally far more straightforward to implement.
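
For example, if you want a LUKSv1 volume that this filter (and qemu) can read, you can ask cryptsetup for the older format explicitly (a sketch, assuming a cryptsetup recent enough to default to LUKSv2):

$ cryptsetup luksFormat --type luks1 disk.img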

The new filter will be available in nbdkit 1.32, or you can grab the development version now.


nbdkit 1.24 & libnbd 1.6, new copying tool

As well as nbdkit 1.24 being released on Thursday, its sister project libnbd 1.6 was released at the same time. This comes with an enhanced copying tool called nbdcopy designed to replace some uses of qemu-img convert (note: it’s not a general replacement).

nbdcopy lets you copy from and to NBD servers (nbdkit, qemu-nbd, qemu-storage-daemon, nbd-server), local files, local block devices, pipes/sockets, and stdin/stdout. For example to stream the content of an NBD server:

$ nbdcopy nbd://localhost - | hexdump -C

The “-” character streams to stdout. nbd://localhost is an NBD URI referring to an NBD server that is already running. What if you don’t have an already running server? nbdcopy lets you run one from the command line (and cleans up after). For example this is one way to convert a qcow2 file to raw:

$ nbdcopy -- [ qemu-nbd -f qcow2 disk.qcow ] disk.raw

Here the [ ... ] section starts qemu-nbd as a captive NBD server, exposing privately an NBD endpoint, and nbdcopy copies this to local file disk.raw. (“--” is needed to stop nbdcopy trying to interpret qemu-nbd’s own command line arguments.)

However this post is really about the nbdkit release. How did I test and benchmark nbdcopy? Of course I wrote an nbdkit plugin called nbdkit-sparse-random-plugin. This plugin has two clever features for testing copying tools. Firstly it creates random disks which have the same “shape” as virtual machine disk images (but without the overhead of needing to bother with an actual VM). Secondly it can act as both a source and target for testing copies.

Let’s unpack those two things a bit further.

Virtual machine disk images (especially mostly empty ones) are mostly sparse. Here’s part of the sparse map from a Fedora 32 disk image:

$ virt-builder fedora-32
$ filefrag -e fedora-32.img 
 Filesystem type is: 58465342
 File size of fedora-32.img is 6442450944 (1572864 blocks of 4096 bytes)
  ext:     logical_offset:        physical_offset: length:   expected: flags:
    0:        0..       0:    2038672..   2038672:      1:            
    1:        1..      15:    2176040..   2176054:     15:    2038673:
    2:      256..     271:    2188819..   2188834:     16:    2176295:
    3:      512..    3135:    3650850..   3653473:   2624:    2189075:
    4:     3168..    4463:    3781763..   3783058:   1296:    3653506:
[...]

The new sparse-random plugin generates a disk image which has a similar shape — islands of random data in a sea of sparseness. The algorithm for doing this is quite neat. Because the plugin doesn’t need to store the data, unlike a real disk image, it can generate huge disk images (eg. a terabyte) while using almost no memory. We use a low-overhead, high-quality random number generator and are smart about seeds so that every run of sparse-random with the same seed produces identical output.
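
Here is a sketch of the idea in Python (not the real implementation; the block size, data percentage and hash-based generator are stand-ins for illustration):

import hashlib

SEED = 42            # hypothetical seed parameter
BLOCK = 32768        # hypothetical island granularity
PERCENT_DATA = 10    # hypothetical fraction of blocks holding data

def _digest(tag, offset):
    # Deterministic bytes derived from (tag, seed, offset).
    return hashlib.sha256(b"%s:%d:%d" % (tag, SEED, offset)).digest()

def block_is_data(offset):
    # Same seed => same decision on every run, so the disk "shape"
    # is reproducible without storing anything.
    return _digest(b"shape", offset)[0] % 100 < PERCENT_DATA

def read_block(offset):
    if not block_is_data(offset):
        return bytes(BLOCK)              # sparse region: all zeroes
    d = _digest(b"data", offset)
    return (d * (BLOCK // len(d) + 1))[:BLOCK]

def check_write(offset, data):
    # On write, regenerate the expected bytes and compare; the real
    # plugin returns an I/O error to the client on mismatch.
    return data == read_block(offset)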

The other part of this plugin is how we can use it to test copying tools like nbdcopy and qemu-img convert. My idea was that the plugin could be used both as the source and the target of the copy:

$ nbdkit -U - sparse-random 1T --run ' nbdcopy "$uri" "$uri" '

Here we create a terabyte-sized sparse-random disk, and get nbdcopy to copy from the plugin to the plugin. On reads sparse-random supplies the sparseness and random data. On writes it checks if what is being written matches the content of the plugin, throwing -EIO errors if not. Assuming the copying tool is correctly handling errors, we can both validate the copying tool and benchmark it. And it works with qemu-img convert too:

$ nbdkit -U - sparse-random 1T --run ' qemu-img convert "$uri" "$uri" '

And now we can see which one is faster.

Try it, you may be surprised.


nbdkit 1.24, new data plugin features

nbdkit 1.24 was released on Thursday. It’s our flexible, fast Network Block Device server with loads of features. nbdkit-data-plugin, a plugin that lets you create test patterns from the command line, gained some interesting new functionality:

$ nbdkit data ' ( 0x55 0xAA )*2048 '

This command worked before as a way to create a repeating test pattern in a disk image. A new feature is that you can write a shell script snippet to generate the pattern instead:

$ nbdkit data ' <( while :; do printf "%04x" $((i++)); done ) [:2048] '

This command will create a repeating counter pattern of ASCII characters (“0000”, “0001”, “0002”, “0003”, …), truncated to 2048 bytes. We could turn that into a block device and display the contents:

# nbd-client localhost /dev/nbd0
# blockdev --getsize64 /dev/nbd0
2048
# dd if=/dev/nbd0 | hexdump -C | head
4+0 records in
4+0 records out
2048 bytes (2.0 kB, 2.0 KiB) copied, 0.000167082 s, 12.3 MB/s
00000000  30 30 30 30 30 30 30 31  30 30 30 32 30 30 30 33  |0000000100020003|
00000010  30 30 30 34 30 30 30 35  30 30 30 36 30 30 30 37  |0004000500060007|
00000020  30 30 30 38 30 30 30 39  30 30 30 61 30 30 30 62  |00080009000a000b|
00000030  30 30 30 63 30 30 30 64  30 30 30 65 30 30 30 66  |000c000d000e000f|
00000040  30 30 31 30 30 30 31 31  30 30 31 32 30 30 31 33  |0010001100120013|
00000050  30 30 31 34 30 30 31 35  30 30 31 36 30 30 31 37  |0014001500160017|
00000060  30 30 31 38 30 30 31 39  30 30 31 61 30 30 31 62  |00180019001a001b|
00000070  30 30 31 63 30 30 31 64  30 30 31 65 30 30 31 66  |001c001d001e001f|
00000080  30 30 32 30 30 30 32 31  30 30 32 32 30 30 32 33  |0020002100220023|
00000090  30 30 32 34 30 30 32 35  30 30 32 36 30 30 32 37  |0024002500260027|
# nbd-client -d /dev/nbd0
# killall nbdkit
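
By the way, you don’t need root or nbd-client just to peek at the contents; nbdkit’s --run option composes here too (a sketch):

$ nbdkit data ' <( while :; do printf "%04x" $((i++)); done ) [:2048] ' \
    --run 'nbdcopy "$uri" - | hexdump -C | head -2'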

The data plugin also lets you read from files, which is useful for making disks with random initial data. For example here’s how to create a disk with 16 identical sectors of random data (notice how /dev/urandom is read in, truncated to 512 bytes, and then 16 copies are made):

$ nbdkit data ' </dev/urandom[:512]*16 '

The plugin can also create sparse disks. You can do this just by moving the current offset using “@”:

$ nbdkit data ' @32768 1 ' --run 'nbdinfo --map "$uri"'
     0       32768    3  hole,zero
 32768           1    0  allocated

We use this plugin quite extensively when testing libnbd.


Reading and writing VMware .vmdk disks

(This is in answer to an IRC question, but the answer is a bit longer than I can cover in IRC)

Can you read and write at the block level in a .vmdk file? I think the questioner was asking about writing a backup/restore type tool. Using only free software, qemu can do reads. You can attach qemu-nbd to a vmdk file and that will expose the logical blocks as NBD, and you can then read at the block level using libnbd:

#!/usr/bin/python3
import nbd

h = nbd.NBD()
h.connect_systemd_socket_activation(
    ["qemu-nbd", "-t", "/var/tmp/disk.vmdk"])
print("size = %d" % h.get_size())
buf = h.pread(512, 0)

$ ./qemu-test.py 
size = 1073741824

The example is in Python, but libnbd would let you do this from C or other languages just as easily.

While this works fine for reading, I wouldn’t necessarily be sure that writing is safe. The vmdk format is complex, baroque and only lightly documented, and the only implementation I’d trust is the one from VMware.

So as long as you’re prepared to use a bit of closed source software and agree with the (nasty) license, VDDK is the safer choice. You can isolate your own software from VDDK using our nbdkit plugin.

#!/usr/bin/python3
import nbd

h = nbd.NBD()
h.connect_command(
    ["nbdkit", "-s", "--exit-with-parent",
     "vddk", "libdir=/var/tmp/vmware-vix-disklib-distrib",
     "file=/var/tmp/disk.vmdk"])
print("size = %d" % h.get_size())
# Read the first sector, then write it back at offset 512.
buf = h.pread(512, 0)
h.pwrite(buf, 512)

I quite like how we’re using small tools and assembling them together into a pipeline in just a few lines of code:

┌─────────┬────────┐          ┌─────────┬────────┐
│ your    │ libnbd │   NBD    │ nbdkit  │ VDDK   │
│ program │     ●──────────────➤        │        │
└─────────┴────────┘          └─────────┴────────┘
                                          disk.vmdk

One advantage of this approach is that it exposes the extents in the disk, which you can iterate over using libnbd APIs. For a backup tool this would let you save the disk efficiently, or do change-block tracking.
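
For instance, here’s a sketch using the libnbd Python bindings (the "base:allocation" metadata context must be requested before connecting, and for very large disks you would query in chunks):

#!/usr/bin/python3
import nbd

h = nbd.NBD()
h.add_meta_context("base:allocation")
h.connect_command(
    ["nbdkit", "-s", "--exit-with-parent",
     "vddk", "libdir=/var/tmp/vmware-vix-disklib-distrib",
     "file=/var/tmp/disk.vmdk"])

def extent(metacontext, offset, entries, err):
    # entries is a flat list of (length, flags) pairs; flag bit 0
    # means hole, bit 1 means reads-as-zero.
    for i in range(0, len(entries), 2):
        length, flags = entries[i], entries[i+1]
        print("offset=%d length=%d flags=%d" % (offset, length, flags))
        offset += length

h.block_status(h.get_size(), 0, extent)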


Loop mount an S3 or Ceph object

This is a fun, small nbdkit Python plugin using the Boto3 AWS SDK:

#!/usr/sbin/nbdkit python

import nbdkit
import boto3
from contextlib import closing

API_VERSION = 2

def thread_model():
    return nbdkit.THREAD_MODEL_PARALLEL

def config(key, value):
    global access_key, secret_key, endpoint_url, bucket_name, key_name

    if key == "access-key" or key == "access_key":
        access_key = value
    elif key == "secret-key" or key == "secret_key":
        secret_key = value
    elif key == "endpoint-url" or key == "endpoint_url":
        endpoint_url = value
    elif key == "bucket":
        bucket_name = value
    elif key == "key":
        key_name = value
    else:
        raise Exception("unknown parameter %s" % key)

def open(readonly):
    global access_key, secret_key, endpoint_url

    s3 = boto3.client("s3",
                      aws_access_key_id = access_key,
                      aws_secret_access_key = secret_key,
                      endpoint_url = endpoint_url)
    if s3 is None:
        raise Exception("could not connect to S3")
    return s3

def get_size(s3):
    global bucket_name, key_name

    resp = s3.get_object(Bucket = bucket_name, Key = key_name)
    size = resp['ResponseMetadata']['HTTPHeaders']['content-length']
    return int(size)

def pread(s3, buf, offset, flags):
    global bucket_name, key_name

    size = len(buf)
    # Use an HTTP range request so we fetch only the bytes NBD asked for.
    rnge = 'bytes=%d-%d' % (offset, offset+size-1)
    resp = s3.get_object(Bucket = bucket_name, Key = key_name, Range = rnge)
    body = resp['Body']
    with closing(body):
        buf[:] = body.read(size)

This lets you loop mount a single object (file). Note the plugin only implements pread, so the device has to be mounted read-only:

$ ./nbdkit-S3-plugin -f -v -U /tmp/sock \
  access_key="XYZ" secret_key="XYZ" \
  bucket="my_files" key="fedora-28.iso"
$ sudo nbd-client -b 2048 -unix /tmp/sock /dev/nbd0
Negotiation: ..size = 583MB
$ ls /dev/nbd0
 nbd0    nbd0p1  nbd0p2  
$ sudo mount -o ro /dev/nbd0p1 /tmp/mnt
$ ls -l /tmp/mnt
 total 11
 dr-xr-xr-x. 3 root root 2048 Apr 25  2018 EFI
 -rw-r--r--. 1 root root 2532 Apr 23  2018 Fedora-Legal-README.txt
 dr-xr-xr-x. 3 root root 2048 Apr 25  2018 images
 drwxrwxr-x. 2 root root 2048 Apr 25  2018 isolinux
 -rw-r--r--. 1 root root 1063 Apr 21  2018 LICENSE
 -r--r--r--. 1 root root  454 Apr 25  2018 TRANS.TBL

I should note this is a bit different from s3fs, which is a FUSE driver that mounts all the files in a bucket.


Ridiculously big “files”

In the last post I showed how you can combine nbdfuse with nbdkit’s RAM disk to mount a RAM disk as a local file. In a talk I gave at FOSDEM last year I described creating these absurdly large RAM-backed filesystems, and you can do the same thing now to create ridiculously big “files”. Here’s a 7 exabyte file:

$ touch /var/tmp/disk.img
$ nbdfuse /var/tmp/disk.img --command nbdkit -s memory 7E &
$ ll /var/tmp/disk.img 
 -rw-rw-rw-. 1 rjones rjones 8070450532247928832 Nov  4 13:37 /var/tmp/disk.img
$ ls -lh /var/tmp/disk.img 
 -rw-rw-rw-. 1 rjones rjones 7.0E Nov  4 13:37 /var/tmp/disk.img

What can you actually do with this file, and more importantly does anything break? As in the talk, creating a Btrfs filesystem boringly just works. mkfs.ext4 spins using 100% of CPU; I let it go for 15 minutes but it seemed no closer to either succeeding or crashing. Update: as Ted pointed out in the comments, I likely meant mkfs.xfs here, not mkfs.ext4 (which gives an appropriate error); mkfs.xfs consumes more and more space, appearing to spin.

Emacs said:

File disk.img is large (7 EiB), really open? (y)es or (n)o or (l)iterally

and I was too chicken to find out what it would do if I really opened it.

I do wonder if there’s a DoS attack here if I leave this seemingly massive regular file lying around in a public directory.
