It’s all about memory

The thing that stops me from running lots and lots of virtual machines is the amount of RAM I can fit into a server.

My current build server runs 9 VMs taking just over 7 GB in total. The host has 8 GB of RAM.

The 4 cores on the host are virtually idle. Certainly CPU-wise there is nothing stopping us running 16 or possibly even 32 VMs.

RAM is the problem. The server can take 16 GB maximum, at a cost (for cheap desktop RAM) of only about $400, and I get the original 8 GB back to recycle somewhere else. Unfortunately the larger RAM would be slower. For most consumer hardware 8 or 16 GB seems to be the limit anyway.

Ulrich Drepper has a very good analysis of DDR RAM, why larger sizes are necessarily slower, and why you can’t design single socket systems that take arbitrarily huge amounts of RAM.

I’m left doing workarounds: reduce the amount of RAM assigned to each guest, keep guests paused or powered off when not directly in use, move guests between servers, … Not ideal.
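To make the squeeze concrete, here is a small sketch of the budgeting involved. The host-reserve figure is an assumption for illustration, not a number from the post:

```python
# Sketch: how much RAM can each always-on guest get if the host
# keeps a fixed reserve for itself?  Figures are illustrative.

HOST_RAM_MB = 8192       # 8 GB host, as in the post
HOST_RESERVE_MB = 1024   # assumed reserve for the host OS and qemu overhead

def per_guest_mb(n_guests, host_ram_mb=HOST_RAM_MB, reserve_mb=HOST_RESERVE_MB):
    """Evenly divide the RAM left after the host reserve among guests."""
    usable = host_ram_mb - reserve_mb
    if usable <= 0 or n_guests <= 0:
        raise ValueError("not enough RAM (or no guests)")
    return usable // n_guests

print(per_guest_mb(9))   # 9 guests on 8 GB -> 796 MB each
print(per_guest_mb(16))  # the 16-guest case the CPUs could handle -> 448 MB each
```

Even at 16 guests the arithmetic works out to 448 MB apiece, which explains why shrinking guests only goes so far.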



Filed under Uncategorized

7 responses to “It’s all about memory”

  1. Yaniv

    I always think I/O is the real bottleneck.
    Memory issues can be compensated by using KSM, ballooning and/or zcache.
    I/O is a bummer.

    • rich

      Yes, of course it depends on what you’re trying to do.

      For this particular case, I have multiple RHEL, Debian and Ubuntu VMs that are always on, mostly idle, and ready for me to ssh into them to either try to see if a build works or find out some detail about how things work on that distro. I’d like to have more of them too in future; these things are very useful.

      They are usually idle, or in use one at a time, so I/O isn’t the issue (for my case). Because they can’t easily be powered off (that would negate the ability to just ssh into them instantly), they consume memory. I could pause them, but pausing VMs is not well integrated into our workflow (how to pause them automatically, yet have them unpause when I ssh in?). And of course we don’t yet support suspending VMs …

  2. A lot of triple-channel DDR3 motherboards have 6 slots and can therefore take 24GB if you fill it with 4GB sticks. The motherboards seem to have gone out of fashion now but this is what I have for my Intel i7 920. I’d be shocked if they weren’t available for more modern CPUs.

    That loadout would suffer from the same speed-density issues you’re talking about, but what is the actual cost of that speed hit? IME you’re not dealing with anything noticeable, and the benefit of the extra RAM outweighs the human cost of struggling along without enough RAM.

    You can also offset the difference by getting better DDR3 (eg you were on 2GB sticks at 1066, so buy 4GB sticks at 1600 – it’ll be faster).

  3. Gary Scarborough

    I have pretty much come to the conclusion that a 16 GB host is the way to go right now for home users. For about US$600 I can build a box with 16 GB of RAM and a quad-core i5. I reuse an old drive for booting and use a file server for the main VM storage. With some thought the host can be made almost silent. The last time I built an actual server for home, the cost of the “server”-grade RAM made it very expensive.

  4. Mattias Eliasson

    I don’t really agree with this. It is true that with current designs more RAM means less speed. However, that is only true in a design where you expect equal access to all RAM.
    What we could do is add more RAM that has a slower access speed than the first 16 gigabytes.
    We already have several layers of cache inside the processor, so this is not a brand new idea. Having another level of cache outside the processor is not really a new idea either; that used to be called “L2 cache” when we only had one level of cache inside the CPU.

    Another design that is quite popular is to utilize virtual memory on high-speed secondary storage. There are PCI Express RAM-disks that, for a fairly thick bundle of dollars, give you very fast paging, and if you can’t afford that you could use a cheap flash-based SSD for quite a boost.

    Naturally a very fat memory bus would be faster, but how much faster? Usually you don’t really randomly access 256 GB of RAM. There are a few algorithms that would generate a lot of cache misses if there were only, say, 4 GB of fast external cache in front of 256 GB of RAM, but they are very few.

    In most cases I think using a few gigabytes of RAM and a cheap SSD for paging gives more performance than adding more RAM to the motherboard.

    Then of course we have the problem that virtual machines currently map VM memory to physical and not virtual memory. If they mapped to virtual memory, and therefore used the paging mechanism of the host OS, we could utilize more memory without adding more to the CPU memory bus.

    • rich

      In KVM, VM memory is just regular malloc’d memory. It can be paged out just fine. Edit: although the problem is still that the guest itself is not expecting what it thinks is “RAM” to be paged out, so in practice you get all sorts of bad effects.

      • Mattias Eliasson

        I mostly use VirtualBox that hogs physical memory.
        AFAIK Linux works well with NUMA, so it should be possible to make it work well with non-uniform local memory latency as well. Perhaps standard paging, where the guest has no information on latency, is the problem; I am guessing NUMA allows more control. In that case NUMA emulation may be a way to give the guest OS more control?
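On the KSM suggestion earlier in the thread: on a Linux host with KSM enabled you can get a rough sense of the saving from the standard sysfs counters. This is a sketch, not exact accounting (a tighter figure would subtract `pages_shared`), and the fallback argument exists only so it runs on machines without KSM:

```python
# Sketch: rough upper bound on RAM recovered by KSM deduplication.
PAGE_SIZE = 4096  # bytes; the usual x86 page size

def ksm_saved_bytes(pages_sharing=None):
    """pages_sharing counts guest pages deduplicated into shared copies.

    If None, read the live counter from the standard KSM sysfs node.
    """
    if pages_sharing is None:
        with open("/sys/kernel/mm/ksm/pages_sharing") as f:
            pages_sharing = int(f.read())
    return pages_sharing * PAGE_SIZE

# e.g. 250000 deduplicated pages across idle, similar guests:
print(ksm_saved_bytes(250_000) // (1024 * 1024))  # about 976 MiB
```

For a fleet of near-identical idle RHEL/Debian/Ubuntu guests like the ones described in the post, the sharing counters are exactly the kind of thing worth watching before buying more RAM.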
