Building recent toolchains on the Debian image

As there have been some discussions about using the latest toolchains here, I thought it might be helpful to share my notes from natively building recent GCC and LLVM on the VisionFive 2 under the Debian image. This is a quick summary; I wrote up some more detailed notes here: GCC and LLVM builds on the VisionFive 2 Debian Image – Graham Markall


To build GCC 12.2, I first had to apply a small patch, because RISC-V GCC doesn’t seem to be quite set up correctly for Debian multiarch:

diff --git a/gcc/config/riscv/t-linux b/gcc/config/riscv/t-linux
index 216d2776a18..f714026b3cc 100644
--- a/gcc/config/riscv/t-linux
+++ b/gcc/config/riscv/t-linux
@@ -1,3 +1,4 @@
 # Only XLEN and ABI affect Linux multilib dir names, e.g. /lib32/ilp32d/
 MULTILIB_DIRNAMES := $(patsubst rv32%,lib32,$(patsubst rv64%,lib64,$(MULTILIB_DIRNAMES)))
 MULTILIB_OSDIRNAMES := $(patsubst lib%,../lib%,$(MULTILIB_DIRNAMES))
+MULTIARCH_DIRNAME = $(call if_multiarch,riscv64-linux-gnu)
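In case it helps anyone following along, here’s one way to apply that diff; the filename riscv-multiarch.patch is just a placeholder for wherever you saved it:

```shell
# Run from the root of the GCC source tree; "riscv-multiarch.patch"
# is a hypothetical name for the diff shown above.
cd gcc
patch -p1 < riscv-multiarch.patch   # or: git apply riscv-multiarch.patch

# Confirm the new line landed:
grep MULTIARCH_DIRNAME gcc/config/riscv/t-linux
```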

I wasn’t sure how to add source repos in the Debian image, so I manually installed everything (and a bit more) that might be needed to build it:

sudo apt install \
  libc6-dev m4 libtool gawk lzma xz-utils patchutils \
  gettext texinfo locales-all sharutils procps \
  dejagnu coreutils chrpath lsb-release time pkg-config \
  libgc-dev libmpfr-dev libgmp-dev libmpc-dev \
  flex yacc bison

Then to build and install (I have the 8GB RAM version; I don’t know whether 4 threads is too many if you only have 4GB):

mkdir build-gcc-12.2.0
cd build-gcc-12.2.0
../gcc/configure \
  --enable-languages=c,c++,fortran \
  --prefix=/opt/gcc/12.2.0 \
  --disable-multilib \
  --with-arch=rv64gc
make -j4
make install
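To use the result without disturbing the system GCC, it can just be put on the PATH; the paths below follow the --prefix from the configure invocation above:

```shell
# Make the freshly installed toolchain visible:
export PATH=/opt/gcc/12.2.0/bin:$PATH
gcc --version    # should now report 12.2.0

# Binaries built with it link against the newer libstdc++/libgfortran,
# so the runtime loader needs to find those too:
export LD_LIBRARY_PATH=/opt/gcc/12.2.0/lib64:$LD_LIBRARY_PATH
```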

I only enabled C, C++, and Fortran, but I suspect some other languages might work (and others might not :slightly_smiling_face:).

Building took about 5.5 hours. I ran make check too, and there don’t appear to be any significant issues.


Dependency installation for LLVM 15.0.6 (again probably overkill, but these are all the deps specified in the LLVM 15 source package in Debian):

sudo apt install \
  cmake ninja-build chrpath texinfo sharutils libelf-dev \
  libffi-dev lsb-release patchutils diffstat xz-utils \
  python3-dev libedit-dev libncurses5-dev swig \
  python3-six python3-sphinx binutils-dev libxml2-dev \
  libjsoncpp-dev pkg-config lcov procps help2man \
  zlib1g-dev libjs-mathjax python3-recommonmark \
  doxygen gfortran libpfm4-dev python3-setuptools \
  libz3-dev libcurl4-openssl-dev libgrpc++-dev \
  protobuf-compiler-grpc libprotobuf-dev

Building and installing:

mkdir build 
cd build 
cmake ../llvm -G Ninja \
              -DCMAKE_INSTALL_PREFIX=/opt/llvm/15.0.6 \
              -DCMAKE_BUILD_TYPE=Release \
              -DLLVM_TARGETS_TO_BUILD="RISCV"
ninja -j 3
ninja install

To save time I only built the RISC-V target. I had to use 3 processes because, with 4, the build runs out of memory during linking and gets OOM-killed, even on the 8GB version.
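If you’d rather keep four compile jobs and only throttle the memory-hungry link steps, LLVM’s CMake build has a knob for exactly that (LLVM_PARALLEL_LINK_JOBS, which only takes effect with the Ninja generator). A sketch of the configure step with it added:

```shell
# Same configure as above, plus a cap on concurrent link jobs; linking
# is where memory peaks, so compiles can still run 4-wide.
cmake ../llvm -G Ninja \
              -DCMAKE_INSTALL_PREFIX=/opt/llvm/15.0.6 \
              -DCMAKE_BUILD_TYPE=Release \
              -DLLVM_TARGETS_TO_BUILD="RISCV" \
              -DLLVM_PARALLEL_LINK_JOBS=1
ninja -j 4
```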

Building LLVM took just under 4 hours. I haven’t run any tests but I’ve been using it for development purposes without bumping into any obvious issues.


Hi @gmarkall,
Can you give details of the source code you are using? I downloaded it from here - GitHub - gcc-mirror/gcc - and get this error:

configure: WARNING: unrecognized options: --with-arch, --with-abi


Don’t you want to include the “B” extension? I presume, without verification, that you could use rv64gc_zba_zbb and pick up a slight speed-up.

@futurejones I was using git:// with the releases/gcc-12.2.0 branch checked out. Do you still get the warning if you use that?
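For reference, a checkout along those lines looks like the following (the gcc.gnu.org URL is from memory, so double-check it; releases/gcc-12.2.0 is the 12.2.0 release tag):

```shell
git clone git://gcc.gnu.org/git/gcc.git
cd gcc
git checkout releases/gcc-12.2.0   # the 12.2.0 release tag
git describe --tags                # sanity check: should mention 12.2.0
```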

Maybe - I didn’t really look deeply into how to do this optimally; I was just trying to do a build that replicated the configuration of the GCC from Debian, which was configured with rv64gc only (obtained from gcc --verbose for the default GCC in the Debian image).


I downloaded the source from git:// but still get the same warning.

configure: WARNING: unrecognized options:  --with-arch, --with-abi

Make then fails with this error -

HEADERS="auto-host.h ansidecl.h" DEFINES="" \
/bin/bash ../gcc/mkconfig.sh config.h
HEADERS="options.h insn-constants.h config/elfos.h config/gnu-user.h config/linux.h config/glibc-stdint.h config/riscv/riscv.h config/riscv/linux.h config/initfini-array.h defaults.h" DEFINES="LIBC_GLIBC=1 LIBC_UCLIBC=2 LIBC_BIONIC=3 LIBC_MUSL=4 DEFAULT_LIBC=LIBC_GLIBC ANDROID_DEFAULT=0 TARGET_DEFAULT_ASYNC_UNWIND_TABLES=1 TARGET_DEFAULT_ISA_SPEC=ISA_SPEC_CLASS_20191213 TARGET_RISCV_ATTRIBUTE=1 TARGET_RISCV_DEFAULT_ARCH=rv64imafdc_zicsr_zifencei TARGET_RISCV_DEFAULT_ABI=lp64d" \
/bin/bash ../gcc/mkconfig.sh tm.h
HEADERS="config/riscv/riscv-protos.h config/linux-protos.h tm-preds.h" DEFINES="" \
/bin/bash ../gcc/mkconfig.sh tm_p.h
HEADERS="auto-host.h ansidecl.h" DEFINES="" \
/bin/bash ../gcc/mkconfig.sh bconfig.h
make: *** No rule to make target '../build-riscv64-unknown-linux-gnu/libiberty/libiberty.a', needed by 'build/genmodes'.  Stop.

I don’t have any riscv64 hardware yet, so I am trying to build GCC in a Docker container running on an arm64 machine.
Ubuntu has riscv64 Docker images, and Docker uses QEMU to emulate the linux/riscv64 platform.

Most things have been building and working well, but I have been running into issues with multiarch in GCC, and need to rebuild with --disable-multiarch and your MULTIARCH_DIRNAME patch.

Looks like I might have to wait until my VisionFive 2 board arrives before progressing any further.

I found the problem. There is a typo in your configure command.
It should be ../configure ..... not ../gcc/configure .....

Everything is now configuring and building as expected.

It’s not a typo - I don’t think it’s conventional to build inside the source tree, so the assumption was that you’re starting from the parent directory of the repo and creating the build tree side-by-side with the source repo. I appreciate that this isn’t obvious from the directions, though.

Hello everyone,

here is my configure command on the VF2:

damian@starfive:~/data/gccbuild_1$ gcc -v
Using built-in specs.
Target: riscv64-linux-gnu
Configured with: ../gcc/configure -v --prefix=/usr --enable-checking=release --enable-shared --libdir=/usr/lib --with-gcc-major-version-only --program-suffix=-12 --disable-multilib --enable-languages=c,c++,fortran,lto,objc,obj-c++ --build=riscv64-linux-gnu --host=riscv64-linux-gnu --target=riscv64-linux-gnu :
Thread model: posix
Supported LTO compression algorithms: zlib
gcc version 12.2.1 20230305 (GCC)

The build ran successfully.



Ah, enabling LTO is also a good idea for better performance optimisation opportunities - thanks for sharing your config!

Thanks for the guide! I also successfully compiled GCC 12 and LLVM 15 on my 4GB RAM board. I created an 8GB swap file and was able to compile LLVM with 4 processes, and the actual elapsed time was even a little faster than yours :slight_smile:
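For anyone else on the 4GB board, creating such a swap file only takes a few commands; the /swapfile path is an arbitrary choice, and note the SD-card wear caveats discussed further down the thread:

```shell
# Create and enable an 8 GB swap file (the /swapfile path is arbitrary):
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show    # verify it is active

# To keep it across reboots, add this line to /etc/fstab:
# /swapfile none swap sw 0 0
```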

real    215m1.116s
user    828m14.103s
sys     26m6.832s

$ /opt/llvm15/bin/llc --version
  LLVM version 15.0.7
  Optimized build with assertions.
  Default target: riscv64-unknown-linux-gnu
  Host CPU: sifive-u74

  Registered Targets:
    riscv32 - 32-bit RISC-V
    riscv64 - 64-bit RISC-V

real    329m30.679s
user    1072m31.180s
sys     28m50.800s

$ /opt/gcc12/bin/gcc-12 --version
gcc-12 (GCC) 12.2.1 20230307

The swap file is a good idea - will do that next time, thanks!

You probably want to swap only to NVMe or maybe even NFS (can you do that?) and not an SD card. For a few builds it’s probably not tragic, but swap tends to chew through write cycles on SD cards in a way that wear-leveling can’t fully compensate for. If you have a system living in swap on an SD card, at least use one of the high-endurance cards. Otherwise, they tend to just flatline after a few months, taking your $HOME and whatever else was on them to the great bit-bucket in the sky.

Given the disparity between sys (the time your CPUs were doing CPU stuff - which, in a compile, should be most of them) and real (the clock time it took to do it) it looks like you were waiting on I/O a LOT. I almost wonder if you’d be better off throttling down the number of CPUs that are building so that precious RAM isn’t highly contested. If you’re bumping into the page device for more than a link, you should probably experiment with that (if you actually care. :slight_smile: ) .

I don’t know if putting source or object on NFS would help or hurt. Even if the wire speed might be slower, the asynchronous writes may be helpful when creating and closing thousands of tiny .o files, like toolchain builds do.

I look forward to being able to type ‘apt-get upgrade g++ llvm’ (or whatever it is) and getting the (pre-tested) results in WAY under 5 hours.

It IS pretty cool to have < $100 systems that are powerful enough to fully natively build some of the biggest (non-browser) software that most of us here will touch, though. We’ve come a long way since GD32V/K210!


You can usually work out an approximate number of write cycles for each individual cell of an SSD from the endurance rating (usually hidden in extra-small print somewhere on the manufacturer’s website): Total Bytes Written (TBW) divided by the size of the disk.
e.g. a 1TB SSD might have a TBW of 360 TB (over 5 years), which means each cell is rated for ~360 writes in total before it fails. With swap you will chew through that in no time at all. Two things that kill SSDs fast are tiny writes and writing lots of data; swap usually does both.
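Written out, that arithmetic is just TBW divided by capacity; the 360 TB / 1 TB figures are only the example numbers from this post:

```shell
tbw_tb=360       # rated endurance: Total Bytes Written, in TB
capacity_tb=1    # drive capacity, in TB
cycles=$(( tbw_tb / capacity_tb ))
echo "approximate full-drive write cycles: $cycles"   # 360 for this example
```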

I would probably use ZRAM if my application was limited by lack of RAM more so than lack of CPU (a compressed swap device in RAM that does not require a backing swap device).


My recommendation would be zswap instead of zram. zswap takes a certain part of memory (for instance 25%) to compress memory pages into, and if that is not enough it swaps out the least-recently-used compressed pages to a backing swap device. I’m using it this way on many 4GB systems that I use regularly, for instance with Firefox with dozens of tabs open, and I usually end up with at most around 50MB of swap used. Using zstd, I usually get a compression ratio of about 1:5.5 on x86_64 (slightly less on aarch64), extending the 4GB of RAM to effectively about 6-6.5GB of ‘virtual’ RAM. The advantage over zswap compared to zram is that things will not get OOM-killed when memory gets short, as there is still the option to push a bit of the compressed pages out to swap if needed.
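For the record, zswap can be tuned at runtime through its standard module parameters under /sys/module/zswap; this assumes a kernel built with zswap and zstd support, and a backing swap device already active, with the values mirroring the ones mentioned above:

```shell
# Needs root; assumes kernel zswap + zstd support and an active
# backing swap device.
echo zstd > /sys/module/zswap/parameters/compressor
echo 25   > /sys/module/zswap/parameters/max_pool_percent
echo 1    > /sys/module/zswap/parameters/enabled

grep -r . /sys/module/zswap/parameters/   # inspect current settings
```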

see also: imagebuilder/rc.local at main · hexdump0815/imagebuilder · GitHub and imagebuilder/ at main · hexdump0815/imagebuilder · GitHub
