As you may know, I have made many iterations of Arch Linux images for the VF2, and I always build the kernel with LLVM/Clang. Recently, I discovered that Clang on Arch and other distros, such as Fedora, is built as a multi-target compiler by default. This means that you can produce RISC-V binaries just by invoking clang --target=riscv64. I made some changes to my kernel repository so it can be built on any distro via Podman (you can try Docker if you prefer).
Great job, thank you!
But since I’m not a big fan of Docker/Podman for those purposes and built my kernels for the MangoPi MQ-Pro just by setting ARCH=riscv and CROSS_COMPILE=riscv64-linux-gnu-, why did you choose this build environment, if I may ask?
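For comparison, the plain GNU cross build described above would look roughly like this (a sketch, assuming a riscv64-linux-gnu- toolchain package is installed and you are inside the kernel source tree):

```shell
# Classic GCC cross compile of the kernel, no container involved.
make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- defconfig
make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- -j"$(nproc)"

# Install modules into a staging dir for packaging (path is an example):
make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- \
     INSTALL_MOD_PATH=./staging modules_install
```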
Hehe…well, my building on “bare metal” is the same kind of obsession
But that (at least for me, since I’m not very experienced with llvm/clang) does not explain why you build in docker?
I tried to replace clang/LLVM in a clone of your repo (two days old) with riscv64 GCC, but that died near the end of the build with “an error occurred”…no further explanation or other obvious cause.
To be honest, I would like to build everything with mainline kernel 6.5.x, mainline U-Boot and mainline SBI, but that is a steep learning curve, so I hoped to get a stable environment with Arch and your kernel. That does not work (completely) because of the DRM issue with firmware loading, as described in other places here.
Or is there a solution I just did not recognize reading the forum here?
With Docker or Podman, I build the image from archlinux, which contains the same version of clang (but for x86_64), so that after the kernel is installed on the VF2, if I want to compile any out-of-tree module, I can use the clang on board with no problem (same clang, same version).
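A minimal sketch of that out-of-tree build on the board itself, assuming the matching headers package is installed (the module directory and name are hypothetical):

```shell
# Build an external module against the running kernel's headers,
# using the same clang/LLVM toolchain the kernel was built with:
cd ~/src/my-module            # hypothetical module directory
make -C /lib/modules/"$(uname -r)"/build M="$PWD" LLVM=1 modules
```

Because host and target carry the same clang version, no version-mismatch warnings or ABI surprises should appear here.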
That is an additional argument when building on something other than Arch Linux x86_64, but as you said, it makes total sense to keep versions (nearly) identical between host and target systems, especially when binary blobs are used (which I hate, btw, but there seems to be no way out currently on embedded).
As far as I understand, nearly everything necessary (at least apart from the binary blob stuff) should work nowadays with SBI 1.3, U-Boot 2023.07 and kernel 6.5.4 (all mainline).
But if I try to build U-Boot with the default starfive_visionfive2_defconfig, it needs fw_dynamic.bin, which is not the way the VF2 expects things to be!?
Does that mean the way the VF2 boots will change/has changed?
Since there are so many forks of U-Boot and SBI from different vendors, each booting in a different way, I’m a bit lost.
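For what it’s worth, mainline U-Boot for the VF2 wants OpenSBI’s fw_dynamic.bin handed in at build time. A rough sketch (paths and toolchain prefix are examples, not your exact setup):

```shell
# 1. Build OpenSBI's dynamic firmware for the generic platform:
make -C opensbi PLATFORM=generic CROSS_COMPILE=riscv64-linux-gnu-

# 2. Point the U-Boot build at the resulting fw_dynamic.bin:
cd u-boot
make starfive_visionfive2_defconfig
make CROSS_COMPILE=riscv64-linux-gnu- \
     OPENSBI=../opensbi/build/platform/generic/firmware/fw_dynamic.bin \
     -j"$(nproc)"
# Depending on the U-Boot version, the resulting SPL binary may still
# need StarFive's spl_tool to get the header the boot ROM expects.
```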
And, as far as I understand, if not booting from UART, U-Boot and SBI are in the QSPI flash memory and will not be taken from the SD card?
Sorry for bothering you with those, perhaps silly, questions but I’m a bit lost in the woods here
I used to believe that if we set the dip switch to SD mode, the board would load SPL and u-boot from the first and second partitions on the SD card. However, in a previous release, StarFive deprecated the SD boot option. Surprisingly, they have reintroduced it in the latest release. To test it, you should place SPL and u-boot on the SD card as my build script has done and check if it loads from the SD card or not.
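If you want to try that, writing the two images to the first two partitions looks roughly like this (the device name is a placeholder and these commands overwrite data, so double-check it first):

```shell
# SPL goes to the first partition, u-boot.itb to the second.
# /dev/sdX is a placeholder -- verify the device before running!
sudo dd if=u-boot-spl.bin.normal.out of=/dev/sdX1 conv=fsync
sudo dd if=u-boot.itb of=/dev/sdX2 conv=fsync
sync
```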
I’ll move the U-Boot/SBI/kernel mainline story to when I better understand what is happening with that building stuff.
I’ll close here and continue the “firmware not loading” problem in the other thread…already asked too much out of band here…sorry for that.
I just tried to (re-)install the binary packages you offer on your GitHub page with pacman -U (the three kernel/header/soft packages and the GPU one). Works again.
I’ll build those on my VF2 again and test whether it’s broken again after that.
Ok, found out what the problem was. The GPU driver was not from your repository but from another AUR package.
I recompiled the kernel and GPU drivers from your repo and reinstalled all four packages, and the firmware is loaded again. Thank you very much!
I don’t have KDE/Plasma up and running yet but I hope that will work soon.