Fedora/RISC-V is finished!

Ha ha, only joking. However, when we started building Fedora on the free RISC-V architecture, the goal we decided on was to get every package in the Fedora @Core group built.

I’m happy to announce that we have done that. Almost.

There are two mandatory packages that we’re not building: dracut and plymouth. Luckily neither is relevant to RISC-V at the moment, since we’re not using an initramfs and there is no display for graphical boot.

Another milestone: we have built more than 5,000 Fedora packages. Fedora has about 18,400 packages in total, so that’s a respectable chunk, more than a quarter of the distribution.

Here is what Fedora/RISC-V looks like when it is booting in QEMU:

[screen capture: Fedora/RISC-V booting in QEMU]
Props to Stefan O’Rear and David Abdurachmanov for doing most of the real work.


12 responses to “Fedora/RISC-V is finished!”

  1. Really awesome work…

    How is progress on getting this running on an FPGA development board?

    • rich

      I probably should get it working on the FPGA. There are several problems (not least the fact that FPGAs are really slow compared to QEMU on an Intel Xeon):

      The FPGA has only 128MB of RAM, which is very tight for running systemd etc. It could probably boot with a custom init (see the first sketch after this list).

      The htif (block device emulation) on the FPGA is incredibly flaky. This isn’t really surprising, since it redirects every I/O request to a file on a DOS-formatted SD card. Rewriting htif to read/write a partition on the SD card directly would be an improvement, and would also lift the 2GB filesystem limit (see the second sketch after this list).
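
      As an aside, here’s roughly what a bare-minimum init could look like. This is only a sketch of mine, not anything we’re shipping; it assumes a static build and a /bin/sh already present on the root filesystem.

          /* minimal_init.c: a hypothetical bare-minimum PID 1.
           * Build statically (gcc -static) so it needs nothing beyond
           * the root filesystem itself. Error handling omitted. */
          #include <stdio.h>
          #include <unistd.h>
          #include <sys/mount.h>

          int main(void)
          {
              /* The kernel has already mounted the root fs; provide the
               * pseudo-filesystems that a shell and basic tools expect. */
              mount("proc", "/proc", "proc", 0, NULL);
              mount("sysfs", "/sys", "sysfs", 0, NULL);
              mount("devtmpfs", "/dev", "devtmpfs", 0, NULL);

              /* Replace ourselves with a shell. */
              execl("/bin/sh", "sh", (char *)NULL);

              perror("execl /bin/sh");
              for (;;)
                  pause();   /* PID 1 must never exit */
          }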
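
      And here’s the shape that partition-backed block I/O could take on the host side, assuming a POSIX-style front end. The function names are hypothetical, not the real htif code:

          /* Hypothetical handlers that serve block requests from a raw
           * partition (e.g. open("/dev/mmcblk0p2", O_RDWR)) instead of
           * a file on a FAT filesystem, so per-file size limits go
           * away. Error handling omitted. */
          #include <stdint.h>
          #include <unistd.h>

          #define SECTOR_SIZE 512

          ssize_t blockdev_read(int fd, void *buf,
                                uint64_t lba, uint32_t nsectors)
          {
              /* pread/pwrite take an explicit offset, so concurrent
               * requests never race on a shared file position. */
              return pread(fd, buf, (size_t)nsectors * SECTOR_SIZE,
                           (off_t)lba * SECTOR_SIZE);
          }

          ssize_t blockdev_write(int fd, const void *buf,
                                 uint64_t lba, uint32_t nsectors)
          {
              return pwrite(fd, buf, (size_t)nsectors * SECTOR_SIZE,
                            (off_t)lba * SECTOR_SIZE);
          }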

  2. Pingback: Pengembang Berhasil Bangun Paket Grup Fedora @Core untuk RISC-V (Indonesian: “Developers Successfully Build the Fedora @Core Package Group for RISC-V”)

  3. Which FPGA are you running on? FPGA boards with SODIMM slots will generally support 4GB to 8GB of DRAM. That is still not a lot for a build machine, but it’s much better than 128MB. This would work with ZC706, for example.

    I have implemented a replacement for HTIF block devices that may be helpful.

    • rich

      The ZC706 is over £2000 ex tax, so I’m probably not going to buy one any time soon. If you can suggest a board at a more reasonable price which can take a reasonable amount of RAM, I’m very interested. (The Nexys Video, one step up from the Nexys 4 which I already have, has 512 MB of RAM and costs £400 ex VAT.)

      Do you have a link to your HTIF replacement?

      • The ZC706 is quite expensive. I looked at the cheaper FPGA boards I know about, and it looks like you get more DRAM for your money with a Zynq board. The advantage of a Zynq board is that all those hard peripherals are available. Could we put the ARM CPUs into reset after they program the logic? For example, a Z-Turn XC7Z020 board is $120 and has 1GB of DRAM.

        I’ve integrated AXI peripherals into MIT and Bluespec RISC-V designs. It shouldn’t be too hard to do so for other RISC-V designs. It helps to use device tree in the kernel because all the drivers expect it.

      • rich

        I don’t really understand how the tethered system works. Can the RISC-V processor on the FPGA read and write RAM directly, or are the ARM processors busy-waiting, translating memory requests?

      • I don’t know how virtio works.🙂 Is there a good description of it somewhere?

        I used the approach I use when developing hardware accelerators.

      • rich

        The virtio 1.0 spec: https://docs.oasis-open.org/virtio/virtio/v1.0/csprd01/virtio-v1.0-csprd01.html

        The Linux drivers and the QEMU device implementations are another good source.
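
        In one sentence: the driver and device share rings of buffer descriptors (“virtqueues”) in guest memory; the driver posts descriptors, the device consumes them and reports completions back. As a taste, here is the descriptor layout from the virtio 1.0 spec rendered in C (my sketch; field names follow the spec, and all fields are little-endian):

            #include <stdint.h>

            /* One entry in a virtqueue descriptor table
             * (virtio 1.0 spec, section on virtqueues). */
            struct virtq_desc {
                uint64_t addr;   /* guest-physical address of the buffer */
                uint32_t len;    /* buffer length in bytes */
            #define VIRTQ_DESC_F_NEXT  1   /* chains to 'next' */
            #define VIRTQ_DESC_F_WRITE 2   /* device writes this buffer */
                uint16_t flags;
                uint16_t next;   /* index of the next descriptor in a chain */
            };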

  4. To answer the second question, about replacing HTIF: I’m the founder and main developer of http://www.connectal.org, a framework for connecting software to hardware accelerators.

    For a hosted RISC-V design, I defined request and response interfaces for a block device:
    interface BlockDevRequest;   // requests from the RISC-V side
        method Action transfer(BlockDevOp op, Bit#(32) dramaddr, Bit#(32) offset, Bit#(32) size, Bit#(32) tag);
    endinterface

    interface BlockDevResponse;  // completions back to the requester
        method Action transferDone(Bit#(32) tag);
    endinterface

    I used Connectal to generate the software and hardware stubs. The hosted configuration is a bit interesting, because a request from the RISC-V core is sent to its peripheral, which sends it to software on the ARM CPU, which handles the request and responds. The hardware portion of that logic is in the same directory as the code listed above.
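
    To make that flow concrete, the ARM-side handling could be shaped like the loop below. The stub names (recv_request, send_transfer_done, dram_copy) are purely illustrative, not real Connectal API; the generated wrappers do this plumbing for you:

        #include <stdint.h>
        #include <unistd.h>

        enum blockdev_op { BLOCKDEV_READ, BLOCKDEV_WRITE };

        struct blockdev_req {
            enum blockdev_op op;
            uint32_t dramaddr;  /* RISC-V DRAM address to copy to/from */
            uint32_t offset;    /* byte offset into the backing store */
            uint32_t size;      /* bytes; assume <= 4096 for brevity */
            uint32_t tag;       /* echoed back in transferDone */
        };

        /* Illustrative stubs standing in for the generated wrappers. */
        extern int  recv_request(struct blockdev_req *req);
        extern void send_transfer_done(uint32_t tag);
        extern void dram_copy(uint32_t dramaddr, void *buf,
                              uint32_t size, int to_dram);

        void blockdev_service(int backing_fd)
        {
            struct blockdev_req req;
            uint8_t buf[4096];

            /* Error handling omitted for brevity. */
            while (recv_request(&req) == 0) {
                if (req.op == BLOCKDEV_READ) {
                    /* backing store -> buffer -> RISC-V DRAM */
                    pread(backing_fd, buf, req.size, req.offset);
                    dram_copy(req.dramaddr, buf, req.size, 1);
                } else {
                    /* RISC-V DRAM -> buffer -> backing store */
                    dram_copy(req.dramaddr, buf, req.size, 0);
                    pwrite(backing_fd, buf, req.size, req.offset);
                }
                send_transfer_done(req.tag);
            }
        }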

    The code that runs on Zynq to handle a BlockDev request is here:

    I will track down a pointer to the block device driver to use on the RISC-V CPU. It’s similar to the HTIF driver but it sends BlockDev requests.

    I believe MIT is using a variation of this code while developing their RISC-V cores.

    There is also a serial port device implemented in similar fashion.

    If this kind of interface is helpful, I can generate Verilog with AXI or other memory interfaces — one for the RISC-V and one to connect to Zynq.

    • rich

      Curious why you didn’t use virtio? I’m hand-waving quite a lot, but qemu can provide memory-mapped virtio devices (i.e. virtio-mmio). So a hacked qemu could run on the tethered ARM coprocessor, providing the virtio devices. On the RISC-V side, of course, the kernel already supports the virtio drivers.
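
      To sketch what I mean: discovering a virtio-mmio device is just a few register reads. The offsets below come from the virtio 1.0 spec’s MMIO register layout; the base address would come from a device tree node (compatible = "virtio,mmio"), so treat it as an assumption here:

          #include <stdint.h>

          #define VIRTIO_MMIO_MAGIC   0x000  /* reads 0x74726976 ("virt") */
          #define VIRTIO_MMIO_VERSION 0x004  /* 1 = legacy, 2 = virtio 1.0 */
          #define VIRTIO_MMIO_DEVICE  0x008  /* device ID: 2 = block, 0 = none */

          static inline uint32_t mmio_read32(uintptr_t base, uintptr_t off)
          {
              return *(volatile uint32_t *)(base + off);
          }

          /* Returns the virtio device ID at 'base', or 0 if nothing there. */
          uint32_t virtio_mmio_probe(uintptr_t base)
          {
              if (mmio_read32(base, VIRTIO_MMIO_MAGIC) != 0x74726976)
                  return 0;
              return mmio_read32(base, VIRTIO_MMIO_DEVICE);
          }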

  5. On a Zynq system, the programmable logic can use AXI memory requests to interface directly to any of the peripherals, so you don’t have to involve the ARM CPU in the process. There are generally two of each kind of controller, so the ARM and the RISC-V could each have one, but I don’t think any of the boards connect the second controllers (e.g., Ethernet, SD, USB).
