nbdkit now supports LUKS encryption

nbdkit, our permissively licensed plugin-based Network Block Device server, can now transparently decode encrypted disks, for both reading and writing:

qemu-img create -f luks --object secret,data=SECRET,id=sec0 -o key-secret=sec0 encrypted-disk.img 1G

nbdkit file encrypted-disk.img --filter=luks passphrase=+/tmp/secret
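Once the server is running, any NBD client sees the plaintext, decrypted view of the disk. For example (a sketch assuming the nbdinfo and nbdcopy tools from libnbd are installed; the output filename is illustrative):

nbdinfo nbd://localhost

nbdcopy nbd://localhost plaintext-copy.img

The first command prints metadata for the decrypted export; the second copies the decrypted contents out to a local file.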

We use LUKSv1 as the encryption format. That’s an older version (more on that in a moment) of the format used for Full Disk Encryption on Linux. LUKS is much preferable to qemu’s built-in qcow2 encryption, and our implementation is compatible with qemu’s.
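As a quick compatibility check (a sketch; the exact output depends on your qemu version), qemu’s own tooling should identify the image:

qemu-img info encrypted-disk.img

which reports the file format as luks.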

You can place the filter on top of other nbdkit plugins, like Curl:

nbdkit curl https://example.com/encrypted-disk.img --filter=luks passphrase=+/tmp/secret
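The same trick works with other remote-access plugins. Another sketch along the same lines, this time assuming the nbdkit ssh plugin is installed and using an illustrative remote path:

nbdkit ssh host=example.com /var/tmp/encrypted-disk.img --filter=luks passphrase=+/tmp/secret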

The threat model here is that you can store the encrypted data on a remote server, and the admin of the server cannot decrypt the disk (assuming you don’t give them the passphrase).

If you try this filter (or qemu’s implementation) with a modern Linux LUKS disk you’ll find that it doesn’t work. This is because modern Linux uses LUKSv2, although the Linux tools are able to create, read and write LUKSv1 if you set the disk up that way in advance. Unfortunately LUKSv2 is significantly more complicated than LUKSv1: it requires parsing JSON data(!) stored in the header, and it supports a wider range of key derivation functions, typically the very slow and memory-intensive argon2. LUKSv1, by contrast, only requires support for PBKDF2 and is generally far more straightforward to implement.
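If you control how the disk is created, modern Linux can still produce LUKSv1 volumes. A sketch using cryptsetup (the LUKSv1 format implies PBKDF2, so no extra key derivation options are needed):

truncate -s 1G encrypted-disk.img

cryptsetup luksFormat --type luks1 encrypted-disk.img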

The new filter will be available in nbdkit 1.32, or you can grab the development version now.
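If you don’t want to wait for the release, here is a rough sketch of building from source (assuming the usual autotools toolchain; nbdkit can be run from the build directory without installing):

git clone https://gitlab.com/nbdkit/nbdkit

cd nbdkit && autoreconf -i && ./configure && make

./nbdkit file encrypted-disk.img --filter=luks passphrase=+/tmp/secret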


2 responses to “nbdkit now supports LUKS encryption”

  1. Sunil

    A general question – has nbdkit been examined for container support? For instance, is there a hard dependency on using /dev/nbdX for client-side support? Does libnbd help eliminate this dependency? It’s not very clear from the documentation.
    I’m thinking about two containers on possibly different hosts with NBD mounts. While I have the nbd drivers and packages on the hosts and in the containers, I’m unable to see the /dev/nbdX devices in the podman containers.

    • rich

      nbdkit is a server. The kernel (nbd-client + /dev/nbdX devices) is one possible client. There are various other clients including libnbd and qemu which don’t use the kernel at all.

      Now if you want to cross-mount filesystems which are mounted on NBD block devices you have a few options for clients. Obviously the kernel is one option. You could create /dev/nbdX and mount the filesystem from it outside the container and export that filesystem into the container. You may be able to find a way to export the /dev/nbdX device into the container (I don’t know how – but usually it’s possible with a privileged container), and mount inside.
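      A rough sketch of that last route (illustrative only; the image name and flags are assumptions, not a tested recipe):

      $ sudo modprobe nbd
      $ sudo nbd-client localhost /dev/nbd0
      $ sudo podman run --rm -it --privileged --device /dev/nbd0 fedora

      and then mount /dev/nbd0 from inside the container.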

      If you want a completely kernel-free userspace client then libguestfs can access filesystems over NBD directly, e.g.:

      $ guestfish --rw --format=raw -a 'nbd://localhost'
      

      libnbd can be used to access the blocks in the device, but doesn’t solve how to mount it. nbdfuse can turn it into a virtual file (without needing root), and you may be able to loop mount that file.
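      For example (a sketch; nbdfuse ships with libnbd, and the paths here are illustrative):

      $ mkdir mnt
      $ nbdfuse mnt nbd://localhost &
      $ sudo mount -o loop mnt/nbd /mnt

      Here mnt/nbd is the virtual file nbdfuse creates for the export.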

      qemu also has an NBD client but by this point you’ll need to use a VM instead of a container.
