Compiling the lowRISC software (full GCC, Linux and busybox) was straightforward. You have to follow the documentation here very carefully. The only small problem I had was that their environment variable script uses $TOP, which clashes with a variable used by something else (bash? not sure); anyway, you just have to modify their script so it doesn’t use that name.
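One workaround, sketched below, is to rename the clashing variable throughout the script before sourcing it. Note this is only an illustration: the script name (setenv.sh) and its contents here are assumptions, not the actual lowRISC script, so adapt the sed to whatever the real script is called.

```shell
# Sketch only: setenv.sh stands in for the real lowRISC environment script.
cat > setenv.sh <<'EOF'
export TOP=$HOME/lowrisc-chip
export RISCV=$TOP/riscv-tools
EOF

# Rename the clashing $TOP to something unambiguous, then source it.
sed -i 's/\bTOP\b/LOWRISC_TOP/g' setenv.sh
. ./setenv.sh
echo "$LOWRISC_TOP"
```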
The above gets you enough to boot Linux on Spike, the RISC-V functional emulator. That’s not interesting to me; let’s see it running on my FPGA instead.
lowRISC again provide detailed instructions for compiling the FPGA bitstream, which you have to follow carefully. I had to remove the -version flags in a script, but otherwise it went fine.
It wasn’t clear to me what you’re supposed to do with the final bitstream file (./lowrisc-chip-imp/lowrisc-chip-imp.runs/impl_1/chip_top.new.bit), but in fact you use it in Vivado to program the device:
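Programming can also be scripted from the Vivado Tcl console rather than clicking through the GUI. The sketch below assumes a single board attached on the default hardware target; the Tcl commands are the standard hardware manager ones from Vivado of this era, but double-check them against your version’s documentation before relying on this.

```shell
# Sketch, assuming one device and default hw_server settings.
vivado -mode batch -source /dev/stdin <<'EOF'
open_hw
connect_hw_server
open_hw_target
set_property PROGRAM.FILE \
    {./lowrisc-chip-imp/lowrisc-chip-imp.runs/impl_1/chip_top.new.bit} \
    [current_hw_device]
program_hw_devices [current_hw_device]
EOF
```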
The simple hello world example was successful (output shown below is from
/dev/ttyUSB1 connected to the dev board):
The first step is to install the enormous, proprietary Xilinx Vivado software. (Yes, all FPGA stuff is proprietary and strange.) You can follow the general instructions here. The install took a total of 41GB of disk space (no, that is not a mistake) and a few hours, but was otherwise straightforward.
The difficult bit was getting the Vivado software to actually see the hardware. It turns out that Xilinx use a proprietary USB driver, and, well, long story short, you have to run the install_drivers script buried deep in the Vivado directory tree. All it does is put a couple of files under /etc/udev/rules.d, but it didn’t run udevadm control --reload-rules, so you have to do that as well, replug the cable, and then the board should be detectable in Vivado:
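The whole sequence looks roughly like this. The install_drivers path varies by Vivado version and install location, so the path shown is an assumption; locate yours first.

```shell
# Locate the driver install script (path differs per Vivado version):
find /opt/Xilinx -name install_drivers -type f 2>/dev/null

# Run it (example path only; substitute whatever find printed above):
sudo /opt/Xilinx/Vivado/2016.2/data/xicom/cable_drivers/lin64/install_script/install_drivers/install_drivers

# It drops rules into /etc/udev/rules.d but doesn't reload them:
sudo udevadm control --reload-rules
sudo udevadm trigger    # then replug the USB cable
```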
Last year I had Linux running on the open source RISC-V instruction set, emulated in qemu. However, to really get into the architecture, and to restore my very rusty FPGA skills, wouldn’t it be fun to get RISC-V working in real hardware?
The world of RISC-V is pretty confusing for outsiders. There are a bunch of affiliated companies, researchers who are producing actual silicon (nothing you can buy of course), and the affiliated(?) lowRISC project which is trying to produce a fully open source chip. I’m starting with lowRISC since they have three iterations of a design that you can install on reasonably cheap FPGA development boards like the one above. (I’m going to try to install “Untether 0.2” which is the second iteration of their FPGA design.)
There are two FPGA development kits supported by lowRISC. The first is the Xilinx Artix-7-based Nexys 4 DDR, pictured above, which I bought from Digi-Key for £261.54 (that price included tax and next-day delivery from the US). The other is the KC705, but that board costs over £1,300.
The main differences are speed and available RAM. The Nexys has only 128MB of RAM, which is pretty tight for running Linux. The KC705 has 1GB.
I’m also going to look at the dev kits recommended by SiFive, which start at US$150 (also based on the Xilinx Artix-7).
Gigabyte just announced a bunch of full ARM servers, with between 32 and 96 cores. They are based around the Cavium ThunderX processors that we’ve had at Red Hat for a while so they should run RHEL either out of the box or very soon after release.
The data sheet is here, but in brief: a quad-core AMD Seattle with 8GB of RAM (expandable to 64GB). Approximately equivalent to the still-missing AMD Cello developer board.
NBD is a protocol for accessing Block Devices (actual hard disks, and things that look like hard disks). nbdkit is a toolkit for creating NBD servers.
You can now write nbdkit plugins in Ruby.
(So in all that makes: C/C++, Perl, Python, OCaml or Ruby as your choices for nbdkit plugins)
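For a flavour of what one looks like, here is a sketch of a tiny in-memory disk plugin, following the nbdkit convention of defining top-level callback methods in the plugin script. The callback names used here (open, get_size, pread, pwrite) are an assumption based on the other language bindings; check nbdkit-ruby-plugin(3) for the exact set your version expects.

```ruby
# ramdisk.rb: a minimal 1MB RAM-disk plugin sketch for nbdkit.
# Callback names are assumed; consult nbdkit-ruby-plugin(3).

$disk = "\0" * (1024 * 1024)   # backing store: 1MB of zeroes

def open(readonly)
  true                         # any object works as the connection handle
end

def get_size(h)
  $disk.bytesize               # size of the exported device in bytes
end

def pread(h, count, offset)
  $disk.byteslice(offset, count)
end

def pwrite(h, buf, offset)
  $disk[offset, buf.bytesize] = buf
end
```

You would then serve it with something like `nbdkit ruby ./ramdisk.rb` and point an NBD client (qemu, nbd-client) at the port.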
$ ./run ./utils/boot-benchmark/boot-benchmark
Warming up the libguestfs cache ...
Running the tests ...
test version: libguestfs 1.33.28
test passes: 10
host version: Linux moo.home.annexia.org 4.4.4-301.fc23.x86_64 #1 SMP Fri Mar 4 17:42:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
host CPU: Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
backend: direct [to change set $LIBGUESTFS_BACKEND]
qemu: /home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64 [to change set $LIBGUESTFS_HV]
qemu version: QEMU emulator version 2.5.94, Copyright (c) 2003-2008 Fabrice Bellard
smp: 1 [to change use --smp option]
memsize: 500 [to change use --memsize option]
append: [to change use --append option]
Result: 575.9ms ±5.3ms
There are various tricks here:
- I’m using the (still!) not upstream qemu DMA patches.
- I’ve compiled my own very minimal guest Linux kernel.
- I’m using my nearly upstream "crypto: Add a flag allowing the self-tests to be disabled at runtime." patch.
- I’ve got two sets of non-upstream libguestfs patches (1, 2).
- I am not using libvirt, but if you do want to use libvirt, make sure you use the very latest version since it contains an important performance patch.