I’ve been looking for replacements for my HP Microservers which, according to this blog, are now nearly 7 years old! Although they were still going (sort of) strong, one of them has now failed completely, and another has developed a faulty cache which manifests as random 32-byte-wide corruption in the pages it serves (yes, it’s also my main web server …)
My virtualization cluster is also coming up to 4 years old, and while it works fine it turns out that running servers without cases isn’t such a good idea because they generate large amounts of RF interference.
So you can tell that my current computing setup is held together with string and sticky tape. Can I make a nicer system based on a pile of NUCs? I bought 1 NUC for testing:
The total cost (including tax and delivery) was £583.96 from scan.co.uk. I also specced up a similar system with an M.2 SSD which would have been about £670. (An ideal system would have both an M.2 SSD and a hard disk, but that gets even more expensive.) The NUC model is NUC7i5BNH, and the Wikipedia page is absolutely essential for understanding the different models.
Enough talk, how well does it work? To start off with, really badly: the NUC was regularly hanging hard. This was because of a faulty RAM module, a problem I’ve had with the Gigabyte Brix before. Because of that, I’m only running with one 8 GB module:
It has two real cores with hyperthreading. The cores are Kaby Lake: Intel(R) Core(TM) i5-7260U CPU @ 2.20GHz.
The compile performance is reasonable, not great, as you’d expect from an Intel i5 processor.
Eric Blake has been doing some great stuff for nbdkit, the flexible plugin-based NBD server.
- Full parallel request handling.
You’ve always been able to tell nbdkit that your plugin can handle multiple requests in parallel from a single client, but until now that didn’t actually do anything (only parallel requests from multiple clients worked).
- An NBD forwarding plugin, so if you have another NBD server which doesn’t support a feature like encryption or new-style protocol, then you can front that server with nbdkit which does.
As well as that he’s fixed lots of small bugs with NBD compliance so hopefully we’re now much closer to the protocol spec (we always check that we interoperate with qemu’s nbd client, but it’s nice to know that we’re also complying with the spec). He also fixed a potential DoS where nbdkit would try to handle very large writes which would delay a thread in the server indefinitely.
Also this week, I wrote an nbdkit plugin for handling the weird Xen XVA file format. The whole thread is worth reading because 3 people came up with 3 unique solutions to this problem.
It has 16 real Xeon cores and insane* amounts of RAM and cache:
It also has the Aspeed AST2400 BMC so it’s possible to manage it remotely using freeipmi and (for the video console) Java.
* Insane literally: this machine has as much L3 cache (40 MB) as my first hard disk had total storage.
Last week I started a new project: nbdkit. This is a toolkit for creating NBD servers. The key features are:
- Multithreaded NBD server written in C with good performance.
- Well-documented, simple plugin API with a stable ABI guarantee. Lets you export “unconventional” block devices easily.
- Liberal license (BSD) allows nbdkit to be linked to proprietary libraries or included in proprietary code.
There are of course many NBD servers already, such as the original nbd project, qemu-nbd and jnbds.
There are also a handful of servers specialized for particular disk sources. A good example of that is this OpenStack Swift server. But you shouldn’t have to write a whole new server just to export a new disk type.
nbdkit hopefully offers a unique contribution to this field because it’s a general server with a plugin architecture, offering a stable ABI and a liberal license so you can link it to proprietary code (say hello, VDDK).
The motivation for this is to make many more data sources available to libguestfs. In particular I want to write plugins for libvirt, VDDK and some OpenStack sources.
To get access to the RHEV-M 3.0 beta, you must have an active Red Hat Enterprise Virtualization subscription. Go to this RHN page to see links to the beta channels. See this page for discussion around the beta. There is also a Webinar taking place today (18th August). Finally here is the official announcement.
I’m getting ready to install RHEV-M 3.0 beta, and that starts with buying some cheap hardware.
RHEV-M requires two physical servers, one running our minimal hypervisor RHEV-H and one running the management console. Starting with RHEV-M 3.0 the management console runs on Linux [PDF] (you can still run it on Windows if you want). The management console can be run in a VM, but unfortunately it can’t be run in a VM on top of RHEV-H, because of a chicken-and-egg problem: the management console needs to talk to RHEV-H to instruct it to start VMs.
I’m doing this on the cheap, so the hardware I’ve ordered is not the recommended way. Performance is expected to be fairly abysmal.
I ordered two HP Proliant Microservers, and upgrades to the RAM and disks.
- 2 x HP microservers @ £250 each inc. tax/delivery
- 2 x 1 TB Samsung HD103SJ @ £44.80 each inc. tax/delivery
- 2 x 8 GB RAM @ £67.99 each + £27.20 tax, delivery included
HP have extended the cashback offer on these servers through August 2011, so I should be able to claim £200 back.
The HP Proliant Microserver that I ordered arrived on Tuesday. It’s a nice system and a real bargain even for the full price of £180.
Opening the case you can see four drive caddies, each one capable of taking a standard 2 TB SATA drive. Also on the motherboard you may notice a USB socket where you can locate a boot USB key:
Some brief specs …
CPU: “AMD Athlon(tm) II Neo N36L Dual-Core Processor”
CPU flags: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a 3dnowprefetch osvw ibs skinit wdt nodeid_msr npt lbrv svm_lock nrip_save
L1 cache: 64 KB per core
L2 cache: 1024 KB per core
Memory: 1 GB ECC @ 1333 MHz
Network: Broadcom Corporation NetXtreme BCM5723 Gigabit Ethernet PCIe (1 port)
I ordered an HP ProLiant MicroServer from Amazon and 4 x 2 TB Samsung disks.
HP are offering £100 cashback if you get the server during this month, so including tax and delivery it’s costing me about £330 for 8 TB of raw storage (4p per GB — note that includes the server itself).
(Thanks Bryn M. Reeves for the tip off about these HP servers)
Update: link to my old file server, which will become the backup for the new one. The old server is still working well, but because it uses an AMD Geode CPU it was always very slow.