Without the GUI it is a bit faster: https://bellard.org/jslinux/vm.html?cpu=riscv64&url=https://bellard.org/jslinux/fedora29-riscv-2.cfg&mem=256
virt-builder is a tool for rapidly creating customized Linux images. Recently I’ve added support for Windows although for rather obvious licensing reasons we cannot distribute the Windows templates which would be needed to provide Windows support for everyone. However you can build your own Windows templates as described here and then:
$ virt-builder -l | grep windows
windows-10.0-server      x86_64     Windows Server 2016 (x86_64)
windows-6.2-server       x86_64     Windows Server 2012 (x86_64)
windows-6.3-server       x86_64     Windows Server 2012 R2 (x86_64)
$ virt-builder windows-6.3-server
[   0.6] Downloading: http://xx/builder/windows-6.3-server.xz
[   5.1] Planning how to build this image
[   5.1] Uncompressing
[  60.1] Opening the new disk
[  77.6] Setting a random seed
virt-builder: warning: random seed could not be set for this type of guest
virt-builder: warning: passwords could not be set for this type of guest
[  77.6] Finishing off
                   Output file: windows-6.3-server.img
                   Output size: 10.0G
                 Output format: raw
            Total usable space: 9.7G
                    Free space: 3.5G (36%)
To build a Windows template repository you will need the latest libguestfs sources checked out from https://github.com/libguestfs/libguestfs and you will also need a suitable Windows Volume License, KMS or MSDN developer subscription. Also the final Windows templates are at least ten times larger than Linux templates, so virt-builder operations take correspondingly longer and use lots more disk space.
First, download install ISOs for the Windows guests you want to use.

After cloning the latest libguestfs sources, go into the builder/templates subdirectory. Edit the top of the make-template.ml script to set the path which contains the Windows ISOs. You may also need to edit the names of the ISOs later in the script.
Build a template, e.g.:
$ ../../run ./make-template.ml windows 2k12 x86_64
You’ll need to read the script to understand what the arguments do. The script will ask you for the product key, where you should enter the volume license key or your MSDN key.
Each time you run the script successfully you’ll end up with two files, called something like windows-6.3-server.xz (the template itself) and windows-6.3-server.index-fragment (the index entry describing it). The version numbers are Windows internal version numbers.
After you’ve created templates for all the Windows guest types you need, copy them to any (private) web server, and concatenate all the index fragments into the final index file:
$ cat *.index-fragment > index
Finally create a virt-builder repo file pointing to this index file:
# cat /etc/virt-builder/repos.d/windows.conf [windows] uri=http://xx/builder/index
You can now create Windows guests in virt-builder. However, note that they are not sysprepped; we can’t do this because it requires some Windows tooling. So while these guests are fine for small tests and the like, they’re not suitable for creating long-lived Windows VMs. For that you will need to add a sysprep.exe step somewhere in the template creation process.
nbdkit is a pluggable NBD server and you can write plugins in C or several other scripting languages. But not shell script – until now. Shell script turns out to be a reasonably nice language for this:
case "$1" in
    get_size)
        stat -L -c '%s' $f || exit 1
        ;;
    pread)
        dd iflag=skip_bytes,count_bytes skip=$4 count=$3 if=$f || exit 1
        ;;
    pwrite)
        dd oflag=seek_bytes conv=notrunc seek=$4 of=$f || exit 1
        ;;
esac
Because util-linux provides the fallocate program, we can even implement efficient trim and zeroing operations.
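For instance, sticking with the calling convention from the pread/pwrite example above ($3 = count, $4 = offset), trim and zero handlers might be sketched like this. The serve wrapper, the can_trim/can_zero cases and the default for $f are illustrative assumptions, not the plugin’s documented interface:

```shell
# Sketch only: assumes the script is invoked as "script method handle
# count offset", matching the pread/pwrite example above.
# $f is the file being served (illustrative default).
f=${f:-disk.img}

serve () {
    case "$1" in
        trim)
            # Punch a hole: deallocate the range; reads return zeroes.
            fallocate -p -o "$4" -l "$3" "$f" || return 1 ;;
        zero)
            # Zero the range in place without deallocating it.
            fallocate -z -o "$4" -l "$3" "$f" || return 1 ;;
        can_trim|can_zero)
            return 0 ;;
        *)
            return 2 ;;
    esac
}
serve "$@"
```

fallocate -p (punch hole) and -z (zero range) both operate without changing the file size, which is what an NBD trim/zero needs.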
What happens to filesystems and programs when the disk is slow? You can test this using nbdkit and the delay filter. This command creates a 4G virtual disk in memory and injects a 2 second delay into every read operation:
$ nbdkit --filter=delay memory size=4G rdelay=2
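If you just want to see the effect without setting up a loop device, nbdkit’s captive mode can run a client against the delayed disk directly. A sketch, assuming qemu-img is installed; the 64M size is arbitrary:

```shell
# Serve a delayed RAM disk on a private Unix socket, run a client
# against it, and exit when the client exits.  Each read issued by
# qemu-img pays the 2 second delay.
nbdkit -U - --filter=delay memory size=64M rdelay=2 \
       --run 'time qemu-img info "$nbd"'
```
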
You can loopback mount this as a device on the host:
# modprobe nbd
# nbd-client -b 512 localhost /dev/nbd0
Warning: the oldstyle protocol is no longer supported.
This method now uses the newstyle protocol with a default export
Negotiation: ..size = 4096MB
Connected /dev/nbd0
Partitioning and formatting is really slow!
# sgdisk -n 1 /dev/nbd0
Creating new GPT entries in memory.
... sits here for about 10 seconds ...
The operation has completed successfully.
# mkfs.ext4 /dev/nbd0p1
mke2fs 1.44.3 (10-July-2018)
waiting ...
Actually I killed it and decided to restart the test with a smaller delay. Since the memory plugin was rewritten to use a sparse array, we’re serializing all requests as an easy way to lock the sparse array data structure. This doesn’t matter normally because requests to the memory plugin are extremely fast, but once you inject delays this means that every request into nbdkit is serialized. Thus for example two reads issued in parallel at the same time by the kernel are delayed by 2+2 = 4 seconds instead of 2 seconds in total.
However shutting down the NBD connection reveals likely kernel bugs in the NBD driver:
[74176.112087] block nbd0: NBD_DISCONNECT
[74176.112148] block nbd0: Disconnected due to user request.
[74176.112151] block nbd0: shutting down sockets
[74176.112183] print_req_error: I/O error, dev nbd0, sector 6144
[74176.112252] print_req_error: I/O error, dev nbd0, sector 6144
[74176.112257] Buffer I/O error on dev nbd0p1, logical block 4096, async page read
[74176.112260] Buffer I/O error on dev nbd0p1, logical block 4097, async page read
[74176.112263] Buffer I/O error on dev nbd0p1, logical block 4098, async page read
[74176.112265] Buffer I/O error on dev nbd0p1, logical block 4099, async page read
[74176.112267] Buffer I/O error on dev nbd0p1, logical block 4100, async page read
[74176.112269] Buffer I/O error on dev nbd0p1, logical block 4101, async page read
[74176.112271] Buffer I/O error on dev nbd0p1, logical block 4102, async page read
[74176.112274] Buffer I/O error on dev nbd0p1, logical block 4103, async page read
Note that nbdkit did not return any I/O errors; the connection was simply closed while delayed requests were still in flight. Well, at least our testing is finding bugs!
I tried again with a 500ms delay and using the file plugin which is fully parallel:
$ rm -f temp
$ truncate -s 4G temp
$ nbdkit --filter=delay file file=temp rdelay=500ms
Because of the shorter delay, and because parallel kernel requests are now delayed “in parallel”, I was able to partition and create a filesystem more easily [same steps as above], and then mount it on a temp directory:
# mount /dev/nbd0p1 /tmp/mnt
The effect is rather strange, like using an NFS mount from a remote server. Initial file reads are slow, and then they are fast (as they are cached in memory). If you drop Linux caches:
# echo 3 > /proc/sys/vm/drop_caches
then everything becomes slow again.
Confident that parallel requests were being delayed in parallel, I also increased the delay back up to 2 seconds (still using the file plugin). This is like swimming in treacle or what I imagine it would be like to mount an NFS filesystem from the other side of the world over a 56K modem.
I wasn’t able to find any further bugs, but this should be useful for someone who wants to test this kind of thing.
In part 1 and part 5 of this series I created some giant disks with a virtual size of 2^63-1 bytes (8 exabytes). However these were stored in memory using nbdkit-memory-plugin, so you could never allocate more space in these disks than available RAM plus swap.
This is a problem when testing some filesystems because the filesystem overhead (the space used to store superblocks, inode tables, block free maps and so on) can be 1% or more.
The solution to this is to back the virtual disks using a sparse file instead. XFS lets you create sparse files up to 2^63-1 bytes and you can serve them using nbdkit-file-plugin instead:
$ rm -f temp
$ truncate -s $(( 2**63 - 1 )) temp
$ stat -c %s temp
9223372036854775807
$ nbdkit file file=temp
nbdkit-file-plugin recently got a lot of updates to ensure it always maintains sparseness where possible and supports efficient zeroing, so make sure you’re using at least nbdkit ≥ 1.6.
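To confirm that a backing file really is (and stays) sparse, compare its apparent size with the blocks actually allocated. A sketch; a 1 TiB file is used here because the full 2^63-1 bytes needs XFS, whereas this size also works on ext4 and tmpfs:

```shell
# Create a 1 TiB sparse file and show it allocates no disk blocks.
rm -f temp
truncate -s $(( 2**40 )) temp
stat -c 'apparent size: %s bytes, allocated blocks: %b' temp
du -h temp    # actual disk usage, should be 0
```
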
Now you can serve this in the ordinary way and you should be able to allocate as much space as is available on the host filesystem:
# nbd-client -b 512 localhost /dev/nbd0
Negotiation: ..size = 8796093022207MB
Connected /dev/nbd0
# blockdev --getsize64 /dev/nbd0
9223372036854774784
# sgdisk -n 1 /dev/nbd0
# gdisk -l /dev/nbd0
Number  Start (sector)     End (sector)  Size     Code  Name
   1              2048  18014398509481948  8.0 EiB  8300
This command will still probably fail unless you have a lot of patience and a huge amount of space on your host (the -K option stops mkfs.xfs from attempting to discard every block first, which would itself take forever):
# mkfs.xfs -K /dev/nbd0p1
Thanks to Chris Murphy for noting that btrfs can create and mount 8 EB (approx 2^63 byte) filesystems effortlessly:
$ nbdkit -fv memory size=$(( 2**63-1 ))
# modprobe nbd
# nbd-client -b 512 localhost /dev/nbd0
# blockdev --getss /dev/nbd0
512
# gdisk /dev/nbd0
Number  Start (sector)     End (sector)  Size     Code  Name
   1              2048  18014398509481948  8.0 EiB  8300  Linux filesystem
# mkfs.btrfs -K /dev/nbd0p1
btrfs-progs v4.16
See http://btrfs.wiki.kernel.org for more information.

Detected a SSD, turning off metadata duplication.  Mkfs with -m dup if you want to force metadata duplication.
Label:              (null)
UUID:               770e5592-9055-4551-8416-b6376802a2ad
Node size:          16384
Sector size:        4096
Filesystem size:    8.00EiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         single            8.00MiB
  System:           single            4.00MiB
SSD detected:       yes
Incompat features:  extref, skinny-metadata
Number of devices:  1
Devices:
  ID        SIZE  PATH
   1     8.00EiB  /dev/nbd0p1
# mount /dev/nbd0p1 /tmp/mnt
# df -h /tmp/mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/nbd0p1     8.0E   17M  8.0E   1% /tmp/mnt
I created a few files in there and it all seemed to work although I didn’t do any extensive testing. Good job btrfs!
nbdkit is a pluggable NBD server with lots of plugins and filters. Two of the plugins handle compressed files (for gzip and xz respectively). We can uncompress and serve a file on the fly. For gzip it’s kind of inefficient. For xz it’s very efficient provided you prepared your xz files ahead of time with a smaller than default block size.
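For example, you might recompress an image with an explicit, smaller block size before serving it. The 16 MiB figure here is an illustrative choice, not a recommendation from the plugin documentation:

```shell
# Recompress with small independent blocks: the xz plugin then only has
# to decompress one block to satisfy a random-access read, instead of
# the whole file.
xz --best --block-size=16MiB fedora-28.img
```
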
Let’s use nbdkit to loopback mount an xz file:
$ nbdkit -fv xz file=/var/tmp/fedora-28.img.xz
# nbd-client -b 512 localhost /dev/nbd0
Warning: the oldstyle protocol is no longer supported.
This method now uses the newstyle protocol with a default export
Negotiation: ..size = 6144MB
Connected /dev/nbd0
# ls /dev/nbd0p*
/dev/nbd0p1  /dev/nbd0p2  /dev/nbd0p3  /dev/nbd0p4
# fdisk -l /dev/nbd0
Device        Start      End  Sectors  Size Type
/dev/nbd0p1    2048     4095     2048    1M BIOS boot
/dev/nbd0p2    4096  2101247  2097152    1G Linux filesystem
/dev/nbd0p3 2101248  3360767  1259520  615M Linux swap
/dev/nbd0p4 3360768 12580863  9220096  4.4G Linux filesystem
# mount -o ro /dev/nbd0p4 /mnt
Of course it’s read-only. To write to a compressed file would involve changing the size of inner parts of the file. Use qcow2 compression if you want a writable compressed file (although writes to that format are not compressed).
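A sketch of that alternative with qemu-img (assumes the QEMU tools are installed; -c writes compressed clusters, and the result is writable, although new writes land uncompressed):

```shell
# Convert a raw image into a compressed qcow2 image.
qemu-img convert -c -O qcow2 fedora-28.img fedora-28.qcow2
```
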
Also loopback mounting in general is unsafe. Use libguestfs to safely mount untrusted disk images.
These should really be filters, not plugins, so that you could chain an uncompression filter in front of any existing plugin, and one day I’ll get around to writing that.