Tag Archives: scsi

How many disks can you add to a (virtual) Linux machine? (contd)

In my last post I tried to see what happens when you add thousands of virtio-scsi disks to a Linux virtual machine. Above 10,000 disks the qemu command line grew too long for the host to handle. Several people pointed out that I could use the qemu -readconfig parameter to read the disks from a file. So I modified libguestfs to allow that. What will be the next limit?

18,278

Linux uses a strange scheme for naming disks which I’ve covered before on this blog. In brief, disks are named /dev/sda through /dev/sdz (26 disks), then /dev/sdaa through /dev/sdzz (another 676), then /dev/sdaaa onwards, so the 18,278th disk is /dev/sdzzz. What’s special about zzz? Nothing really, but historically Linux device drivers would fail beyond this point, although that is not a problem for modern Linux.
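For the curious, the mapping is “bijective base 26”, the same scheme spreadsheets use for column names. A minimal sketch of it in Python:

# Illustrative sketch of the sd naming scheme (bijective base 26).
def sd_name(index):
    """Return the /dev/sdX name for a 1-based disk index."""
    letters = ""
    while index > 0:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord("a") + rem) + letters
    return "/dev/sd" + letters

print(sd_name(1))      # /dev/sda
print(sd_name(27))     # /dev/sdaa
print(sd_name(18278))  # /dev/sdzzz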

20,000

In any case, I created a Linux guest with 20,000 drives with no problem, except for the enormous boot time: it was still booting after 12 hours, at which point I killed it. Most of the time was being spent in:

-   72.62%    71.30%  qemu-system-x86  qemu-system-x86_64  [.] drive_get
   - 72.62% drive_get
      - 1.26% __irqentry_text_start
         - 1.23% smp_apic_timer_interrupt
            - 1.00% local_apic_timer_interrupt
               - 1.00% hrtimer_interrupt
                  - 0.82% __hrtimer_run_queues
                       0.53% tick_sched_timer

Drives are stored inside qemu on a linked list, and the drive_get function iterates over this linked list. Since it appears to do this scan for each drive being set up, the total work grows roughly quadratically with the number of drives, so of course everything is extremely slow once the list gets long.
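A toy sketch (not qemu code) of why that shape hurts: if registering each new drive requires a linear scan of the drives already registered, total start-up work is O(n²).

# Toy illustration: one linear scan per registration makes total set-up time quadratic.
import time

def register_all(n):
    drives = []                     # stand-in for qemu's linked list of drives
    for i in range(n):
        name = "drive%d" % i
        for d in drives:            # drive_get-style walk of the whole list
            if d == name:
                break
        drives.append(name)

for n in (5000, 10000, 20000):
    start = time.time()
    register_all(n)
    print(n, "drives:", round(time.time() - start, 2), "seconds")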

QEMU bug filed: https://bugs.launchpad.net/qemu/+bug/1686980

Edit: Dan Berrange posted a hack which gets me past this problem, so now I can add 20,000 disks.

The guest boots fine, albeit taking about 30 minutes (and udev hasn’t completed device node creation in that time; it’s still going on in the background).

><rescue> ls -l /dev/sd[Tab]
Display all 20001 possibilities? (y or n)
><rescue> mount
/dev/sdacog on / type ext2 (rw,noatime,block_validity,barrier,user_xattr,acl)

As you can see, the modern Linux kernel and userspace handle “four letter” drive names like a champ.

Over 30,000

I managed to create a guest with 30,000 drives. I had to give the guest 50 GB (yes, not a mistake) of RAM to get this far. With less RAM, disk probing fails with:

scsi_alloc_sdev: Allocation failure during SCSI scanning, some SCSI devices might not be configured

I’d seen SCSI probing run out of memory before, and I had made a back-of-the-envelope estimate that each disk consumed about 200 KB of RAM. That cannot be the whole story, though: 30,000 disks × 200 KB is only about 6 GB, nowhere near the 50 GB required, so there must be a non-linear relationship between the number of disks and the RAM used by the kernel.

Because my development machine simply doesn’t have enough RAM to go further, I wasn’t able to add more than 30,000 drives, so that’s where we have to end this little experiment, at least for the time being.

><rescue> ls -l /dev/sd???? | tail
brw------- 1 root root  66, 30064 Apr 28 19:35 /dev/sdarin
brw------- 1 root root  66, 30080 Apr 28 19:35 /dev/sdario
brw------- 1 root root  66, 30096 Apr 28 19:35 /dev/sdarip
brw------- 1 root root  66, 30112 Apr 28 19:35 /dev/sdariq
brw------- 1 root root  66, 30128 Apr 28 19:35 /dev/sdarir
brw------- 1 root root  66, 30144 Apr 28 19:35 /dev/sdaris
brw------- 1 root root  66, 30160 Apr 28 19:35 /dev/sdarit
brw------- 1 root root  66, 30176 Apr 28 19:24 /dev/sdariu
brw------- 1 root root  66, 30192 Apr 28 19:22 /dev/sdariv
brw------- 1 root root  67, 29952 Apr 28 19:35 /dev/sdariw


How many disks can you add to a (virtual) Linux machine?

><rescue> ls -l /dev/sd[tab]
Display all 4001 possibilities? (y or n)

Just how many virtual hard drives is it practical to add to a Linux VM using qemu/KVM? I tried to find out. I started by modifying virt-rescue to raise the limit on the number of scratch disks that can be added¹:

virt-rescue --scratch=4000

I hit some interesting limits in our toolchain along the way.

256

256 is the maximum number of virtio-scsi disks in unpatched virt-rescue / libguestfs. A single virtio-scsi controller supports 256 targets, with up to 16,384 SCSI logical units (LUNs) per target. We were assigning one disk per target and giving them all unit number 0, so of course we couldn’t add more than 256 drives, but virtio-scsi supports very many more. In theory each virtio-scsi controller could support 256 × 16,384 = 4,194,304 drives. You can even add more than one controller to a guest.
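As a back-of-the-envelope sketch (illustrative only, not libguestfs code), here is the size of that address space and one possible way to map a flat disk index onto (controller, target, LUN):

# Illustrative sketch of the virtio-scsi address space.
TARGETS_PER_CONTROLLER = 256
LUNS_PER_TARGET = 16384
DRIVES_PER_CONTROLLER = TARGETS_PER_CONTROLLER * LUNS_PER_TARGET

def address(index):
    """Map a 0-based disk index to a hypothetical (controller, target, lun) triple."""
    controller, rest = divmod(index, DRIVES_PER_CONTROLLER)
    target, lun = divmod(rest, LUNS_PER_TARGET)
    return controller, target, lun

print(DRIVES_PER_CONTROLLER)   # 4194304 drives per controller
print(address(0))              # (0, 0, 0)
print(address(16384))          # (0, 1, 0) - spills into the next target
print(address(4194304))        # (1, 0, 0) - needs a second controller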

About 490-500

At around 490-500 disks, any monitoring tool which uses libvirt to collect disk statistics from your VMs will crash (https://bugzilla.redhat.com/show_bug.cgi?id=1440683).

About 1000

qemu uses one file descriptor per disk (or two per disk if you are using ioeventfd), so it quickly hits the default open file limit of 1024 (ulimit -n). You can raise this to something much larger by creating this file:

$ cat /etc/security/limits.d/99-local.conf
# So we can run qemu with many disks.
rjones - nofile 65536

It’s called /etc/security for a reason, so you should be careful adjusting settings here except on test machines.
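To see how far short the default falls, here’s a small illustrative check (the two-descriptors-per-disk figure is the worst case mentioned above):

# Check the per-process file descriptor limit that qemu will inherit.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
disks = 1000
fds_needed = disks * 2          # worst case: ioeventfd doubles the count
print("soft limit:", soft, "hard limit:", hard)
print("descriptors needed for", disks, "disks:", fds_needed)
if fds_needed > soft:
    print("raise nofile (e.g. in /etc/security/limits.d/) before launching qemu")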

About 4000

The Linux guest kernel uses quite a lot of memory simply enumerating each SCSI drive. My default guest had 512 MB of RAM (no swap), and ran out of memory and panicked when I tried to add 4000 disks. The solution was to increase guest RAM to 8 GB for the remainder of the test.

Booting with 4000 disks took 10 minutes² and free showed that about a gigabyte of memory had disappeared:

><rescue> free -m
              total        used        free      shared  buff/cache   available
Mem:           7964         104        6945          15         914        7038
Swap:             0           0           0

What was also surprising was that increasing the number of virtual CPUs from 1 to 16 made no difference to the boot time (in fact it was a bit slower). So even though SCSI LUN probing is not deterministic, it appears that it is not running in parallel either.

About 8000

If you’re using libvirt to manage the guest, it will fail at around 8000 disks because the XML document describing the guest is too large to transfer over libvirt’s internal client-to-daemon connection (https://bugzilla.redhat.com/show_bug.cgi?id=1443066). For the remainder of the test I instructed virt-rescue to run qemu directly.

My guest with 8000 disks took 77 minutes to boot. About 1.9 GB of RAM was missing, and my ballpark estimate is that each extra drive takes about 200 KB of kernel memory.

Between 10,000 and 11,000

We pass the list of drives to qemu on the command line, with each disk taking perhaps 180 bytes to express. Somewhere between 10,000 and 11,000 disks, this long command line fails with:

qemu-system-x86_64: Argument list too long
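A rough sanity check (illustrative, using the ~180 bytes per disk figure above): Linux’s execve limit on arguments plus environment (ARG_MAX) is usually about 2 MB, so the command line can only name so many disks.

# Estimate how many -drive options fit before execve() fails with
# E2BIG ("Argument list too long").
import os

bytes_per_disk = 180                   # approximate figure from above
arg_max = os.sysconf("SC_ARG_MAX")     # typically 2097152 (2 MB) on Linux
print("ARG_MAX:", arg_max)
print("rough upper bound:", arg_max // bytes_per_disk, "disks")

On a typical host that works out to an upper bound of a little over 11,000 disks, which is consistent with the failure appearing between 10,000 and 11,000 once the fixed options and the environment are counted too.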

To be continued …

So that’s the end of my testing, for now. I managed to create a guest with 10,000 drives, but I was hoping to explore what happens when you add more than 18,278 drives, since some parts of the kernel or userspace stack may not be quite ready for that.

Continue to part 2 …

Notes

¹That command will not work with the virt-rescue program found in most Linux distros. I have had to patch it extensively and those patches aren’t yet upstream.

²Note that the uptime command within the guest is not an accurate way to measure the boot time when dealing with large numbers of disks, because it doesn’t include the time taken by the BIOS, which has to scan the disks too. To measure boot times, use the wallclock time from launching qemu.

Thanks: Paolo Bonzini

Edit: 2015 KVM Forum talk about KVM’s limits.


New in libguestfs: Allow cache mode to be selected

libguestfs used to default to the qemu cache=none caching mode. By allowing other qemu caching modes to be used, you can get a considerable speedup: up to 25% in the best case.

qemu has a pretty confusing range of caching modes. It’s possible to select writeback (the default), writethrough, none, directsync or unsafe. Know what all those mean? Congratulations, you are Kevin Wolf.

It helps to understand that there are many places where your data could be cached: in the guest’s RAM, in the host’s RAM, or in the small cache which is part of the circuitry on the disk drive. And there’s a conflict of interests too. For your precious novel, you’d hope that when you hit “Save” (and even before) it has actually been written in magnetic patterns on the spinning disk. But the vast majority of data that is written (e.g. during a large compile) is not that important, can easily be reconstructed, and is best left in RAM for speedy access.

Qemu emulates disk drives that in some way resemble the real thing, and real disk drives offer a range of ways for the operating system to tell the drive how important data is and when it needs to be persisted. For example, a guest using a (real or emulated) SCSI drive sets the Force Unit Access (FUA) bit to force the data to be written to physical media before being acknowledged. Unfortunately not all guests know about the FUA flag or its equivalents, and even those that do sometimes don’t issue these flush commands in the right place. For example, in very old RHEL, filesystems would issue the flush commands correctly, but if the filesystem used LVM underneath, LVM would not pass these commands through to the hardware.

So now to the qemu cache modes:

  • writeback lets writes be cached in the host cache. This is only safe provided the guest issues flush commands correctly (which translate into fdatasync on the host). If your guest is old and/or doesn’t do this, then you need:
  • writethrough is the same as writeback, but all writes are flushed to disk as well because qemu can never know when a write is important. This is of course dead slow.
  • none uses O_DIRECT on the host side, bypassing the host cache entirely. This is not especially useful, and in particular it means you’re not using the potentially large host RAM to cache things.
  • directsync is like none + writethrough (and like writethrough it’s only needed if your guest doesn’t send flush commands properly).
  • Finally unsafe is the bad boy of caching modes. Flush commands are ignored. That’s fast, but your data is toast if the machine crashes, even if you thought you did a sync.

Libguestfs ≥ 1.23.20 does not offer all of these choices. For a start, libguestfs always uses a brand new kernel, so we can assume that flush commands work, and that means we can ignore writethrough and directsync.

none (i.e. O_DIRECT), which is what libguestfs has always used up to now, has proven to be an endless source of pain, particularly for people who are already suffering under tmpfs. It also has the big disadvantage of bypassing the host RAM for caching.

That leaves writeback and unsafe, which are the two cache modes you can now choose between when using the libguestfs add_drive API. writeback is the new, safe default. Flush commands are obeyed, so as long as you’re using a journalled filesystem or issue guestfs_sync calls, your data will be safe. And there’s a small performance benefit because we are using the host RAM to cache writes.

cachemode=unsafe is the dangerous new choice you have. For scratch disks, testing, temporary disks, basically anything where you either don’t care about the data, or can easily reconstruct the data, this will give you a nice performance boost (I measured about 25%).
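As a sketch of how this looks from the Python bindings (assuming a libguestfs new enough to have the optional cachemode argument to add_drive, and a scratch image that already exists):

# Add a throwaway disk with cachemode=unsafe: fast, but data is lost if the host crashes.
import guestfs

g = guestfs.GuestFS()
g.add_drive("/var/tmp/scratch.img", format="raw", cachemode="unsafe")
g.launch()
# ... do disposable work on the scratch disk here ...
g.shutdown()
g.close()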


virtio-scsi in libguestfs

Today I posted patches for virtio-scsi support in libguestfs.

Virtio-scsi is a replacement for the virtio block device (virtio-blk) and it benefits from lessons learned using virtio-blk.

The main problem with virtio-blk is that a whole PCI device is required for each virtual hard disk. A guest is limited in practice to 32 PCI devices, several of which are already used for other purposes like network cards. There are ways to get around this, but they’re all pretty ugly, such as requiring multiple PCI buses or using the PCI multifunction feature. SCSI, on the other hand, has long provided support for multiple host adapters and LUNs.

With libguestfs support for virtio-scsi, we get a couple of highly desirable features for free: being able to add a much larger number of disks (instead of being limited to ~29), and being able to easily use “TRIM” support for discarding blocks.
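To give a flavour of what that enables, here is a sketch using the libguestfs Python bindings (assuming a libguestfs built with virtio-scsi support and the add_drive_scratch call):

# Add far more throwaway disks than virtio-blk's ~29 limit would allow.
import guestfs

g = guestfs.GuestFS()
for i in range(100):
    g.add_drive_scratch(10 * 1024 * 1024)    # 10 MB temporary disk each
g.launch()
print(len(g.list_devices()), "disks visible in the appliance")
g.shutdown()
g.close()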

In future it’ll make it easier to add support for hotplugging.
