fio has support for testing NBD servers directly

fio (“flexible I/O tester”) is a key tool for testing performance of filesystems and block devices. A couple of weeks ago I added support for testing NBD servers directly.

You will need libnbd 0.9.8 and to compile fio from git with:

./configure --enable-libnbd
make

Testing either nbdkit or qemu-nbd is trivial. Just follow the instructions here.
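
For a flavour of what a run looks like, here is a sketch (the uri option follows fio's example NBD job file; treat the exact flags as assumptions and adjust paths to your setup):

$ nbdkit -U /tmp/nbd.sock memory size=256M
$ fio --name=nbdtest --ioengine=nbd \
      --uri='nbd+unix:///?socket=/tmp/nbd.sock' \
      --rw=randrw --time_based --runtime=60 --iodepth=64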

libnbd 0.9.8 and stable APIs

I announced libnbd yesterday. libnbd 0.9.8 is a pre-release for the upcoming 1.0, where we will finalize the API and offer API and ABI stability.

Stable APIs aren’t in fashion these days, but they’re important because people who choose to use your platform for their software shouldn’t be screwed over and have to change their software every time you change your mind. In C it’s reasonably easy to offer a stable API while allowing long term evolution and even incompatible changes. This is what we do for nbdkit and will be doing for libnbd.

The first concept to get to know is ELF symbol versioning. Chapter 3 of Uli’s paper on the subject covers this in great detail. In libnbd all our initial symbols will be labelled with LIBNBD_1.0.
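
For example, the linker version script could look like this fragment (illustrative, not the actual libnbd map file):

LIBNBD_1.0 {
  global:
    nbd_create;
    nbd_close;
    nbd_connect_uri;
    /* ... every other public entry point ... */
  local: *;
};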

The second concept is not as well known, but is used in glibc, nbdkit and elsewhere: Allowing callers to opt in to newer versions of the API. Let’s say we want to add a new parameter to nbd_connect_uri. The current definition is:

int nbd_connect_uri (struct nbd_handle *h, const char *uri);

but we decide to add flags:

int nbd_connect_uri (struct nbd_handle *h, const char *uri,
                     uint32_t flags);

You can cover up the ABI change with ELF symbol versions. However the API has still changed, and you’re forcing your callers to change their source code eventually. To prevent breaking the API, you can allow callers to opt in to the change by defining a new symbol:

#define LIBNBD_API_VERSION 2
#include <libnbd.h>

The definition is saying “I want to upgrade to the new version and I’ve made the required changes”, but crucially if the definition is missing or has an earlier version number then you continue to offer the earlier version, perhaps with a simple wrapper that calls the new version with a default flags value.
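
A hypothetical sketch of how the header could implement this (not the real libnbd.h; the versioned asm label is one ELF-specific way to bind each prototype to its symbol version):

#ifndef LIBNBD_API_VERSION
#define LIBNBD_API_VERSION 1
#endif

#if LIBNBD_API_VERSION >= 2
/* Opted-in callers see the new prototype, bound to the new
   ELF symbol version. */
extern int nbd_connect_uri (struct nbd_handle *h, const char *uri,
                            uint32_t flags)
  __asm__ ("nbd_connect_uri@@LIBNBD_2.0");
#else
/* Everyone else keeps the original prototype and the original
   symbol version, so neither sources nor binaries break. */
extern int nbd_connect_uri (struct nbd_handle *h, const char *uri)
  __asm__ ("nbd_connect_uri@LIBNBD_1.0");
#endif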

That’s what we intend to do with libnbd to offer a stable API and ABI (once 1.0 is released) without breaking callers either at the source or binary level.

Red Hat joins the RISC-V Foundation

Finally …
https://riscv.org/membership/6696/red-hat/

libnbd – A new NBD client library

NBD is a high performance protocol for exporting disks between processes and machines. We use it as a kind of “universal connector” for connecting hypervisors with data sources, and previously Eric Blake and I wrote a general purpose NBD server called nbdkit. (If you’re interested in the topic of nbdkit as a universal connector, watch my FOSDEM talk.)

Up till now our NBD client has been qemu or one of the qemu tools like qemu-img. That was fine if you wanted to expose a disk source as a running virtual machine (i.e. running it with qemu), or if you wanted to perform one of the limited copying operations that qemu-img convert can do, but there were many cases where it would have been nice to have a general client library.

For example I started to add NBD support to Jens Axboe’s fio. Lacking a client library I synthesized NBD request packets as C structs and sent them on the wire using low-level socket calls. The performance was, to put it bluntly, crap.
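
For the curious, “synthesizing request packets” meant filling in and sending a struct like the sketch below (field layout follows the NBD protocol’s request header; everything goes on the wire in network byte order):

#include <stdint.h>

#define NBD_REQUEST_MAGIC 0x25609513

struct nbd_request {
  uint32_t magic;    /* NBD_REQUEST_MAGIC */
  uint16_t flags;    /* command flags */
  uint16_t type;     /* NBD_CMD_READ, NBD_CMD_WRITE, ... */
  uint64_t handle;   /* ties the reply back to this request */
  uint64_t offset;   /* offset in bytes into the export */
  uint32_t length;   /* number of bytes to read or write */
} __attribute__((packed));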

Although NBD is a very simple protocol and you can write it by hand, it would be nicer to have a library wrap the low-level stuff, and that’s why we have written libnbd (downloads).

Getting reasonable performance from NBD requires a few tricks:

  • You must issue as many commands as possible “in flight” (the server will reply to them out of order, but requests and replies are tied together by a unique ID).
  • You may need to open multiple connections to the server, but doing that requires attention to the special MULTI_CONN flag which the server will use to indicate that this is safe.
  • Most crucially you must disable Nagle’s algorithm (a minimal sketch follows this list).
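
Here is that sketch: one setsockopt call. libnbd does this internally for you; the standalone helper below is just for illustration.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm so small request packets are sent
   immediately instead of being batched, which would add latency
   to a request/reply protocol like NBD. */
static int
disable_nagle (int sock)
{
  const int one = 1;
  return setsockopt (sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}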

This isn’t an exhaustive list. In fact while writing libnbd over about 3 weeks we improved performance by a factor of more than 15, just by paying attention to system calls, maximizing parallelism and minimizing latency. One advantage of libnbd is that it encodes all this knowledge in an easy to use library so NBD clients won’t have to reinvent it in future.

The library has a simple high-level synchronous API which works how you would expect (but doesn’t get the best performance). A typical program might look like:

#include <stdio.h>
#include <stdint.h>
#include <libnbd.h>

struct nbd_handle *nbd;
int64_t exportsize;
char buf[512];

/* Connect over TCP to an NBD server on localhost. */
nbd = nbd_create ();
if (!nbd) goto error;
if (nbd_connect_tcp (nbd, "localhost", "nbd") == -1)
  goto error;
exportsize = nbd_get_size (nbd);
/* Synchronously read the first 512 bytes of the export. */
if (nbd_pread (nbd, buf, sizeof buf, 0, 0) == -1) {
 error:
  fprintf (stderr, "%s\n", nbd_get_error ());
}

To get the best performance you have to use the more low-level asynchronous API which allows you to queue up commands and bring your own main loop.
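
For a flavour of the asynchronous API, here is a rough sketch (the aio calls follow the documented 1.0 API, but treat the exact signatures as an assumption and check the manual pages):

int64_t cookie;
char buf2[512];

/* Queue a read; this returns immediately with a cookie that
   identifies the command in flight.  A real program would queue
   many commands here. */
cookie = nbd_aio_pread (nbd, buf2, sizeof buf2, 0,
                        NBD_NULL_COMPLETION, 0);
if (cookie == -1) goto error;

/* Run the handle's event loop until that command completes.
   Alternatively plug nbd_aio_get_fd / nbd_aio_get_direction
   into your own main loop. */
while (!nbd_aio_command_completed (nbd, cookie)) {
  if (nbd_poll (nbd, -1) == -1) goto error;
}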

There are also bindings in OCaml and Python (and Rust, soon), and a nice little shell written in Python so you can access NBD servers interactively:

$ nbdsh
nbd> h.connect_command (["nbdkit", "-s", "memory", "1M"])
nbd> print ("%r" % h.get_size ())
1048576
nbd> h.pwrite (b"12345", 0)
nbd> h.pread (5, 0)
b'12345'

libnbd and the shell, nbdsh, are available now in Fedora 29 and above.

NBD’s state machine

[Diagram: the states of the NBD client state machine]

Eric and I are writing a Linux NBD client library. There were lots of requirements, but the central one for this post is that it has to be a library callable from programs written in C and other programming languages (Python, OCaml and Rust being important). We don’t control those programs, so they may be single- or multithreaded, or may use non-blocking main loops like gio and glib.

An NBD command involves sending a request over a socket to a remote server and receiving a reply. You can also have multiple requests “in flight” and the reply can be received in multiple parts. On top of this the “fixed newstyle” NBD protocol has a complex multi-step initial handshake. Complicating it further we might be using a TLS transport which has its own handshake.

It’s complicated and we mustn’t block the caller.

There are a few ways to deal with this in my experience — one is to ignore the problem and insist that the main program uses a thread for each NBD connection, but that pushes complexity onto someone else. Another way is to use some variation of coroutines or call/cc — if we get to a place where we would block then we save the stack, return to the caller, and have some way to restore the stack later. However this doesn’t necessarily work well with non-C programming languages. It likely won’t work with either OCaml or Ruby’s garbage collectors since they both involve stack walking to find GC roots. I’d generally want to avoid “tricksy” stuff in a library.

The final way that I know about is to implement a state machine. However, large state machines are hellishly difficult to write. Our state machine has 75 states (so far — it’s nowhere near finished). So we need a lot of help.

I came up with a slightly nicer way to write state machines.

The first idea is that states in a large state machine could be grouped. You can consider each group like a mini state machine — it has its own namespace, lives in a single file (example), and may only be entered via a single START state (so you don’t need to consider what happens if another part of the state machine jumps into the middle of the group).

Secondly groups can be hierarchical. This lets us organise the groups logically, so for example there is a group for “fixed newstyle handshaking” and within that there are separate groups for negotiating each newstyle option. States can refer to each other using either relative or absolute paths in this hierarchy.

Thirdly all states and transitions are defined and checked in a single place, allowing us to enforce rules about what transitions are permitted.

Fourthly the final C code that implements the state machine is generated (mostly). This lets us generate helper functions to (e.g.) turn state transitions into debug messages, or return whether the connection is in a mode where it’s expecting to read or write from the socket (making it easier to integrate with main loops).

The final code looks like this and currently generates 173K lines of C (although as it’s mostly large switch statements it compiles down to a reasonably small size).
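
For a feel of the output, here is a purely illustrative fragment in the same style (invented state names and helpers, not the real generated code):

enum state {
  /* The MAGIC group: entered only via its START state. */
  STATE_MAGIC_START,
  STATE_MAGIC_RECV,
  STATE_MAGIC_CHECK,
  STATE_DEAD,
};

struct conn { enum state state; };

/* Generated helper: does this state block waiting to read? */
static int
state_wants_read (enum state s)
{
  return s == STATE_MAGIC_RECV;
}

/* Generated dispatch: one large switch over every state. */
static void
step (struct conn *c)
{
  switch (c->state) {
  case STATE_MAGIC_START: c->state = STATE_MAGIC_RECV;  break;
  case STATE_MAGIC_RECV:  c->state = STATE_MAGIC_CHECK; break;
  case STATE_MAGIC_CHECK: /* verify magic, pick next group */ break;
  case STATE_DEAD:        break;
  }
}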

Has anyone else implemented a large state machine in a similar way?

virt-install + nbdkit live install

This seems to be completely undocumented, which is why I’m writing it up … It is possible to boot a Linux guest (Fedora in this case) from a live CD on a website without downloading it. I’m using our favourite flexible NBD server, nbdkit, together with virt-install.

First of all we’ll run nbdkit and attach it to the Fedora 29 live workstation ISO. To make this work more efficiently I’m going to place a couple of filters on top — one is the readahead (prefetch) filter recently added to nbdkit 1.12, and the other is the cache filter. In combination these filters should reduce the load on the website and improve local performance.

$ rm /tmp/socket
$ nbdkit -f -U /tmp/socket --filter=readahead --filter=cache \
    curl https://download.fedoraproject.org/pub/fedora/linux/releases/29/Workstation/x86_64/iso/Fedora-Workstation-Live-x86_64-29-1.2.iso

I actually replaced that URL with a UK-based mirror to make the process a little faster.

Now comes the undocumented virt-install command:

$ virt-install --name test --ram 2048 \
    --disk /var/tmp/disk.img,size=10 \
    --disk device=cdrom,source_protocol=nbd,source_host_transport=unix,source_host_socket=/tmp/socket \
    --os-variant fedora29

After a bit of grinding that should boot into Fedora 29, and you never (explicitly, at least) had to download the ISO.

[Screenshot, 2019-04-13: the Fedora 29 live session running in the guest]

To be fair, qemu also has a curl driver which virt-install could use, but nbdkit’s filter and plugin system gives you far more flexibility — check out my video about it.

nbdkit 1.12

The new stable release of nbdkit, our flexible Network Block Device server, is out. You can read the announcement and release notes here.

The big new features are SSH support, the linuxdisk plugin, writing plugins in Rust, and extents. Extents let NBD clients work out which parts of a disk are sparse or zeroes and skip reading them. The feature was hellishly difficult to write because of the number of obscure corner cases.

Also in this release are a couple of interesting filters. The rate filter lets you add a bandwidth limit to connections. We will use this in virt-v2v to allow v2v instances to be rate limited (even dynamically). The readahead filter makes sequential copying and scanning of plugins more efficient by prefetching data ahead of time. It is self-configuring, and in most cases simply adding the filter to your filter stack is sufficient to get a nice performance boost, assuming your client’s access patterns are mostly sequential.
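
For example, serving a disk image capped at roughly 1 megabit per second might look like this (a sketch; see nbdkit-rate-filter(1) for the exact parameters):

$ nbdkit --filter=rate file file=disk.img rate=1M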
