nbdkit with BitTorrent

nbdkit is our high performance Network Block Device server for serving disk images from unusual sources. One (usual) source for Linux installers is to download an ISO from a website like Get Fedora or debian.org. However that costs the host money and is also a central point of failure, so another way to download Linux installers is over BitTorrent. Many Linux distros offer torrents of their installers, including Fedora and Debian. By using these you are helping to redistribute Linux and to defray the cost of hosting the ISOs.

Now I’ve written a BitTorrent plugin for nbdkit so you can download, redistribute and install Linux all at the same time!

$ url=https://torrent.fedoraproject.org/torrents/Fedora-Server-dvd-x86_64-32.torrent
$ wget $url
$ nbdkit -U - torrent Fedora-Server-*.torrent \
         --run 'qemu-system-x86_64 -m 2048 -cdrom $nbd -boot d'

So what’s the serious use for this? It has the interesting property that the more people who are installing your Linux distro, the less bandwidth it uses and the faster it runs! This could be interesting technology for any kind of distributed environment where you have lots of machines accessing the same fixed/read-only filesystem or disk image.
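For example, one machine can serve the torrent over TCP while any number of clients attach to it read-only (a sketch — it assumes the plugin is installed and the default NBD port is reachable):

$ nbdkit torrent Fedora-Server-dvd-x86_64-32.torrent

and on each client (as root):

# nbd-client server /dev/nbd0
# mount -o ro /dev/nbd0 /mnt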

If you want to get started with nbdkit, it’s already packaged in all popular Linux distributions, and it compiles from source on Linux, FreeBSD and OpenBSD.

New nbdkit “remote tmpfs” (tmpdisk plugin)

I was making some thin clients for the Fedora RISC-V project a few weeks ago. These are based on the HiFive Unleashed U540 board and so they have no local SATA, only slow, unreliable SD cards. Any filesystems that might be heavily used must be network filesystems.

As these are Linux clients we essentially have three possible choices for network filesystems: NFS, NBD or a cluster FS. As I didn’t want to set up multiple server nodes or need the redundancy, cluster filesystems were immediately discounted. NFS is — “fine”. And indeed I selected it for /home, where performance is not a problem. But for the clients that are build servers I selected NBD as high-performance block-based storage. I’m using nbdkit, the high performance flexible NBD server.

However nbdkit didn’t quite do what I wanted, so I had to write a new plugin. I needed a “remote tmpfs”.

To get a new “remote tmpfs”, a fresh filesystem each time, the client does:

modprobe nbd
nbd-client server /dev/nbd0
mount /dev/nbd0 /var/scratch

Backing this is a flexible new plugin called tmpdisk. By default it creates a new disk for each connection, formats it with mkfs and serves it to the client. But it’s also scriptable, so you can substitute any command you like instead of mkfs to create your own custom disks (I recommend looking at mke2fs -d in case you need pre-populated scratch disks).
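For reference, the server side can be as simple as this (a sketch based on my reading of the nbdkit-tmpdisk-plugin manual — the size, type and command parameters and the $disk substitution come from there, so check them against your version):

$ nbdkit tmpdisk 4G type=ext4

or, with a custom formatting command:

$ nbdkit tmpdisk 4G command='mke2fs -q -t ext4 -d /srv/skeleton "$disk"'

(/srv/skeleton is a hypothetical directory of files used to pre-populate each disk via mke2fs -d.)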

It’s important to note that these are scratch disks: when the client unmounts the filesystem, it is completely deleted from the server. But that’s fine for temporary directories and (for my purposes) package builds.
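When a client is finished it simply unmounts and disconnects, at which point the server throws the disk away:

# umount /var/scratch
# nbd-client -d /dev/nbd0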

To find out more about nbdkit, watch my video from FOSDEM 2019.

New nbdkit data strings

You can use nbdkit, our infinitely flexible Network Block Device server, to serve small disks and test images with the nbdkit data plugin. For example you can cut and paste this command into your shell to demonstrate a bootable disk image which prints “hello, world”:

nbdkit data data='
    0xb4 0 0xb0 3 0xcd 0x10 0xb4 0x13
    0xb3 0x0a 0xb0 1 0xb9 0x0e 0 0xb6
    0 0xb2 0 0xbd 0x19 0x7c 0xcd 0x10
    0xf4 0x68 0x65 0x6c 0x6c 0x6f 0x2c 0x20
    0x77 0x6f 0x72 0x6c 0x64 0x0d 0x0a
    @0x1fe 0x55 0xaa
' --run 'qemu-system-i386 -fda $nbd'

(As an aside, what is the smallest nbdkit data string that can boot to a “hello, world” message?)

The data parameter is a mini-language, and I recently extended it in an interesting way. It wasn’t possible to make repeated patterns easily before. If you wanted a disk containing 0x55 0xAA repeated (the binary bit patterns 01010101 10101010) then the only way to get that was to literally write:

nbdkit data data='0x55 0xAA 0x55 0xAA [repeated many times ...]'

but now you can group things together and write:

nbdkit data data='( 0x55 0xAA )*256'

The nesting works by recursively creating a new parser, which means you can use any data expression. For example to get 4 sectors containing half blank and half test data you can now do:

nbdkit data data='( @256 ( 0x55 0xAA )*128 )*4'

This gives you lots of ways to make disks containing test patterns, which you can then use to test Linux programs through /dev/nbd0 loop devices.
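If you want to eyeball the result, any NBD client will do. For example (assuming you have qemu-io from the qemu tools installed), this dumps the first 16 bytes of the repeating pattern:

$ nbdkit data data='( 0x55 0xAA )*256' \
         --run 'qemu-io -r -f raw -c "read -v 0 16" $nbd'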

Cascade – a turn-based text arcade game

I wrote this game about 20 years ago. Glad to see it still compiled out of the box on the latest Linux distro! Download it from here. If anyone can remember the name or any details of the original 1980s MS-DOS game that I copied the idea from, please let me know in the comments.

AMD Ryzen 9 3900X – nice!

This thing really screams. It’s nice being able to do make -j24 (threads) builds so quickly.

NBD’s state machine

Eric and I are writing a Linux NBD client library. There were lots of requirements, but the central one for this post is that it has to be a library callable from programs written in C and other programming languages (Python, OCaml and Rust being important). We don’t control those programs, so they may be single- or multithreaded, or may use non-blocking main loops like gio and glib.

An NBD command involves sending a request over a socket to a remote server and receiving a reply. You can also have multiple requests “in flight”, and a reply can be received in multiple parts. On top of this the “fixed newstyle” NBD protocol has a complex multi-step initial handshake. Complicating it further, we might be using a TLS transport which has its own handshake.

It’s complicated and we mustn’t block the caller.

There are a few ways to deal with this in my experience — one is to ignore the problem and insist that the main program uses a thread for each NBD connection, but that pushes complexity onto someone else. Another way is to use some variation of coroutines or call/cc — if we get to a place where we would block then we save the stack, return to the caller, and have some way to restore the stack later. However this doesn’t necessarily work well with non-C programming languages. It likely won’t work with either OCaml or Ruby’s garbage collectors since they both involve stack walking to find GC roots. I’d generally want to avoid “tricksy” stuff in a library.

The final way that I know about is to implement a state machine. However, large state machines are hellishly difficult to write. Our state machine has 75 states (so far — it’s nowhere near finished), so we need a lot of help.

I came up with a slightly nicer way to write state machines.

The first idea is that states in a large state machine could be grouped. You can consider each group like a mini state machine — it has its own namespace, lives in a single file (example), and may only be entered via a single START state (so you don’t need to consider what happens if another part of the state machine jumps into the middle of the group).

Secondly groups can be hierarchical. This lets us organise the groups logically, so for example there is a group for “fixed newstyle handshaking” and within that there are separate groups for negotiating each newstyle option. States can refer to each other using either relative or absolute paths in this hierarchy.

Thirdly all states and transitions are defined and checked in a single place, allowing us to enforce rules about what transitions are permitted.

Fourthly the final C code that implements the state machine is (mostly) generated. This lets us generate helper functions to, for example, turn state transitions into debug messages, or to report whether the connection is in a mode where it’s expecting to read from or write to the socket (making it easier to integrate with main loops).

The final code looks like this and currently generates 173K lines of C (although, as it’s mostly large switch statements, it compiles down to a reasonably small size).

Has anyone else implemented a large state machine in a similar way?

nbdkit / FOSDEM test presentation about better loop mounts for Linux

I’ve submitted a talk about nbdkit, our flexible pluggable NBD server, to FOSDEM next February. This is going to be about using NBD as a better way to do loop mounts in Linux.

In preparation I gave a very early version of the talk to a small Red Hat audience.

Video link: http://oirase.annexia.org/rwmj.wp.com/rjones-nbdkit-tech-talk-2018-11-19.mp4

Sorry about the slow start. You may want to skip to 2 mins to get past the intro.

Summary of what’s in the talk:

  1. Demo of regular, plain loop mounting.
  2. Demo of loop mounting an XZ-compressed disk image using NBD + nbdkit.
  3. Slides about how loop device compares to NBD.
  4. Slides about nbdkit plugins and filters.
  5. Using VMware VDDK to access a VMDK file.
  6. Creating a giant disk costing EUR 300 million(!)
  7. Visualizing a single filesystem.
  8. Visualizing RAID 5.
  9. Writing a plugin in shell script (live demo — see the sketch after this list).
  10. Summary.
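To give a flavour of item 9, here is roughly what a minimal read-only plugin written in shell looks like (a sketch from memory of the nbdkit-sh-plugin interface, so consult the manual — the key rule is that methods the script does not implement must exit with status 2):

#!/bin/bash
# tiny.sh: serve a 1 megabyte disk of zeroes.
case "$1" in
    get_size)                     # report the virtual size of the disk
        echo 1M ;;
    pread)                        # $3 = count in bytes, $4 = offset
        dd if=/dev/zero count=$3 iflag=count_bytes ;;
    *)                            # any method we do not implement
        exit 2 ;;
esac

You would then run it with something like nbdkit sh ./tiny.sh.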

Run Fedora RISC-V with X11 GUI in your browser

https://bellard.org/jslinux/vm.html?cpu=riscv64&url=https://bellard.org/jslinux/fedora29-riscv-xwin.cfg&graphic=1&mem=256

Without the GUI it’s a bit faster: https://bellard.org/jslinux/vm.html?cpu=riscv64&url=https://bellard.org/jslinux/fedora29-riscv-2.cfg&mem=256

nbdkit for loopback pt 7: a slow disk

What happens to filesystems and programs when the disk is slow? You can test this using nbdkit and the delay filter. This command creates a 4G virtual disk in memory and injects a 2 second delay into every read operation:

$ nbdkit --filter=delay memory size=4G rdelay=2

You can loopback mount this as a device on the host:

# modprobe nbd
# nbd-client -b 512 localhost /dev/nbd0
Warning: the oldstyle protocol is no longer supported.
This method now uses the newstyle protocol with a default export
Negotiation: ..size = 4096MB
Connected /dev/nbd0

Partitioning and formatting is really slow!

# sgdisk -n 1 /dev/nbd0
Creating new GPT entries in memory.
... sits here for about 10 seconds ...
The operation has completed successfully.
# mkfs.ext4 /dev/nbd0p1
mke2fs 1.44.3 (10-July-2018)
waiting ...

Actually I killed it and decided to restart the test with a smaller delay. Since the memory plugin was rewritten to use a sparse array, we serialize all requests as an easy way to lock the sparse array data structure. This doesn’t matter normally because requests to the memory plugin are extremely fast, but once you inject delays it means that every request into nbdkit is serialized. Thus, for example, two reads issued in parallel by the kernel are delayed by 2+2 = 4 seconds instead of 2 seconds in total.

However, shutting down the NBD connection reveals likely kernel bugs in the NBD driver:

[74176.112087] block nbd0: NBD_DISCONNECT
[74176.112148] block nbd0: Disconnected due to user request.
[74176.112151] block nbd0: shutting down sockets
[74176.112183] print_req_error: I/O error, dev nbd0, sector 6144
[74176.112252] print_req_error: I/O error, dev nbd0, sector 6144
[74176.112257] Buffer I/O error on dev nbd0p1, logical block 4096, async page read
[74176.112260] Buffer I/O error on dev nbd0p1, logical block 4097, async page read
[74176.112263] Buffer I/O error on dev nbd0p1, logical block 4098, async page read
[74176.112265] Buffer I/O error on dev nbd0p1, logical block 4099, async page read
[74176.112267] Buffer I/O error on dev nbd0p1, logical block 4100, async page read
[74176.112269] Buffer I/O error on dev nbd0p1, logical block 4101, async page read
[74176.112271] Buffer I/O error on dev nbd0p1, logical block 4102, async page read
[74176.112274] Buffer I/O error on dev nbd0p1, logical block 4103, async page read

Note that nbdkit did not return any I/O errors; the connection was simply closed while delayed requests were still in flight. Well, at least our testing is finding bugs!

I tried again with a 500ms delay and using the file plugin which is fully parallel:

$ rm -f temp
$ truncate -s 4G temp
$ nbdkit --filter=delay file file=temp rdelay=500ms

Because of the shorter delay, and because parallel kernel requests are now delayed “in parallel”, I was able to partition and create a filesystem much more easily (same steps as above), and then mount it on a temp directory:

# mount /dev/nbd0p1 /tmp/mnt

The effect is rather strange, like using an NFS mount from a remote server. Initial file reads are slow, and then they are fast (as they are cached in memory). If you drop Linux caches:

# echo 3 > /proc/sys/vm/drop_caches

then everything becomes slow again.
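Reads aren’t the only thing you can slow down: per the delay filter’s documentation it also has wdelay for writes (and delay to set both), so a variant like this makes the whole device sluggish:

$ nbdkit --filter=delay file file=temp rdelay=500ms wdelay=500ms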

Confident that parallel requests were being delayed in parallel, I also increased the delay back up to 2 seconds (still using the file plugin). This is like swimming in treacle, or what I imagine it would be like to mount an NFS filesystem from the other side of the world over a 56K modem.

I wasn’t able to find any further bugs, but this should be useful for someone who wants to test this kind of thing.

nbdkit for loopback pt 6: giant file-backed disks for testing

In part 1 and part 5 of this series I created some giant disks with a virtual size of 2⁶³-1 bytes (8 exabytes). However these were stored in memory using nbdkit-memory-plugin, so you could never allocate more space in these disks than available RAM plus swap.

This is a problem when testing some filesystems because the filesystem overhead (the space used to store superblocks, inode tables, block free maps and so on) can be 1% or more.

The solution is to back the virtual disks with a sparse file instead. XFS lets you create sparse files up to 2⁶³-1 bytes and you can serve them using nbdkit-file-plugin:

$ rm -f temp
$ truncate -s $(( 2**63 - 1 )) temp
$ stat -c %s temp
9223372036854775807
$ nbdkit file file=temp

nbdkit-file-plugin recently got a lot of updates to ensure it always maintains sparseness where possible and supports efficient zeroing, so make sure you’re using nbdkit ≥ 1.6.
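Despite the huge apparent size, the file occupies no space at all until data is written to it, which you can confirm with du:

$ du -h temp
0	temp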

Now you can serve this in the ordinary way and you should be able to allocate as much space as is available on the host filesystem:

# nbd-client -b 512 localhost /dev/nbd0
Negotiation: ..size = 8796093022207MB
Connected /dev/nbd0
# blockdev --getsize64 /dev/nbd0
9223372036854774784
# sgdisk -n 1 /dev/nbd0
# gdisk -l /dev/nbd0
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048  18014398509481948   8.0 EiB     8300

This command will still probably fail unless you have a lot of patience and a huge amount of space on your host:

# mkfs.xfs -K /dev/nbd0p1
