So... What would you like to see built?

I have been working hard for a while now on a pipeline that builds a uClibc Buildroot off the 6.6.0 branch (JH7110_VisionFive2_upstream) and works its way toward a stable environment for building an Alpine Linux 3.19 rootfs. In fact, this is the board building the SDK:


What are some tools or OSes you would like to see built in the future?

My next goal is to fuse this kernel and the Imagination GPU driver kernel tree together, to build one “near-perfect” VisionFive2 kernel and use that as the base. No idea about Mesa just yet, but chances are I will have to build that too…

Long term, I want to provide “as close to upstream as currently possible” images and tools that others can then use to fool around with, or improve them.

The initial goal is just to get alpine:3.19 docker images… but in the process, I will pretty much end up with a Jenkins pipeline to build basically whatever. x)

So let me hear what you think, I am curious! ^^

4 Likes

@IngwiePhoenix Looks interesting :kissing:. I’m still waiting for a pkgsrc port to RISC-V and would love to build a distribution based on it for the VisionFive 2 xD

:thinking:

Redox OS !!! :star_struck:

Hi - my own interest within the RISC-V ecosystem is in Vulkan on an upstream kernel. Is there any chance of that? :blush: :nerd_face:

1 Like

Little update.
I experimented with Buildroot and OpenADK - and the latter came out on top. I am planning to add OpenSBI support to it and see if we can utilize Barebox, since that package is already included, or maybe utilize U-Boot directly - either way, OpenADK is the road I’ve taken. And boy is it a highway.
In conjunction, I joined the OpenADK/uClibc-ng mailing lists and uncovered TLS (thread-local storage) and floating-point issues. So, I have been testing glibc and musl as libcs for the base container. In fact, I should be able to publish the base container soon - it will also include my OpenADK config, and I will add my kernel config to it, just to send the full package upstream.

Meanwhile, I am working on merging the Imagination GPU git branch and the VisionFive2 branch into one monolithic repo, containing ALL the drivers for the GPU and the VisionFive2 until both of them go fully upstream. Firmware and Mesa will have to be built from source and into packages. Not too difficult, but slightly annoying. I wonder if I could incorporate this into Armbian at some point - they currently lack a maintainer. With this base container, I’d have a perfect initial build environment and toolchain.

Never heard of that - will take a look at it!

Imagination is working on that - Mesa, kernel and firmware patches are all there, ready to be merged. Mesa and firmware shouldn’t be too difficult - the kernel, however, might be. They’re a little strict, with high scrutiny (which is fair, considering this is the very core of everything). So, this might take a while. I kinda wish they had opted to develop them as DKMS modules… Oh well. That said: the VisionFive2 and JH7110 are almost upstream, so we can leave custom kernels behind soon; I see some back and forth between StarFive and the kernel devs every once in a while. A recent reminder for patch review was issued in late February (the 27th, iirc?). So that is definitely ongoing.
PowerVR/Imagination ought to be a wee bit different… no idea how actively they are working on this - would have to check the kernel’s Patchwork to see any status… or crawl the mailing list. o.o
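(Side note on the DKMS wish above: DKMS rebuilds an out-of-tree module automatically against each installed kernel. A `dkms.conf` for such a driver would look roughly like this - every name here is invented for illustration, not taken from Imagination’s tree:)

```shell
# Hypothetical dkms.conf for an out-of-tree PowerVR module (all names invented)
PACKAGE_NAME="powervr"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="pvr"
DEST_MODULE_LOCATION[0]="/kernel/drivers/gpu/drm"
MAKE[0]="make -C ${kernel_source_dir} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build modules"
CLEAN="make -C ${kernel_source_dir} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build clean"
AUTOINSTALL="yes"
```

That would sidestep the custom-kernel dance entirely - which is exactly why it would have been nice.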

Link? o.o

1 Like

If they accept Linux DTBs, this would be as easy as:

  • Compile the kernel
  • Load the kernel + initrd, if any
  • Load the DTB and pass its address/path to the microkernel, or have the kernel embed the DTB.

Either way, unless they accept common device-tree definitions, and unless someone implements things such as clocks and other hardware in code or creates a mapping, this’ll be difficult. For instance, for Linux there is a lot of custom code to support the Ethernet adapter, clock sources, the PCIe chipset and the like - let alone all the components of the SoC itself (like the MMU).
It’s certainly possible - but honestly, I ain’t that low-level :smiley:
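(For reference, the Linux/U-Boot side of those steps goes roughly like this - the DTB name matches the upstream kernel tree, but the load addresses, partition numbers and paths are placeholders:)

```shell
# Build the VisionFive 2 DTB from a kernel tree (per-DTB make target)
make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- \
    starfive/jh7110-starfive-visionfive-2-v1.3b.dtb

# Then, at the U-Boot prompt, something like (addresses/partitions are placeholders):
#   load mmc 1:3 ${kernel_addr_r} /boot/Image
#   load mmc 1:3 ${fdt_addr_r} /boot/jh7110-starfive-visionfive-2-v1.3b.dtb
#   booti ${kernel_addr_r} - ${fdt_addr_r}
```

The hard part, as said above, isn’t the loading - it’s having drivers that understand what the DTB describes.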
1 Like

Oops - I think I merged “the wrong way around” xD

ingwie@bigboi:/opt/linux/unified-vf2$ nano drivers/gpu/drm/Kconfig
ingwie@bigboi:/opt/linux/unified-vf2$ nano drivers/gpu/drm/Makefile
ingwie@bigboi:/opt/linux/unified-vf2$ git diff --name-only --diff-filter=U --relative
drivers/gpu/drm/Kconfig
drivers/gpu/drm/Makefile
drivers/gpu/drm/nouveau/nouveau_drv.h

That said, just three conflicts? Damn. Trying to upload the lot to GitHub right now…

EDIT: Copied the wrong part of my terminal output.

Basically:

$ git init
$ git remote add starfivetech...
$ git remote add powervr...
$ git merge starfivetech/JH7110_VisionFive2_upstream
$ git merge powervr/powervr-next

Last two shoulda been swapped, since the Imagination/powervr kernel is on 6.6.0-rc1 - whilst the VF2 upstream is on 6.6.0 flat. Durp. But hey, I will try to build this merge-monster and see if I can get me a GPU output. ^^

2 Likes

Please keep us updated. I’m very interested to see how this open sourced GPU driver performs. BTW, you’ll need the corresponding firmware from

Also the user space

2 Likes

So I was mainly thinking about NetBSD itself being ported to RISC-V, as seen on the wiki (not 100% done yet, as far as I understand): NetBSD/riscv
However, the way I understand it, you could in theory bootstrap the pkgsrc source tree on any RISC-V Linux and use it as a source-based package manager already. However, I haven’t had the time to try this out, and I have no idea if you’d need additional configuration adjustments in order to build everything with the right flags.
The main reason this interests me is to utilize the VisionFive 2 fully and not just create generic RISCV64 packages with huge, inefficient binaries.
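(If anyone wants to try that bootstrap, it’s pretty short; sketched from memory of the pkgsrc guide, with the branch and prefix as examples rather than recommendations:)

```shell
# Fetch pkgsrc and bootstrap it in unprivileged mode
# (branch name and install prefix are examples)
git clone https://github.com/NetBSD/pkgsrc.git
cd pkgsrc/bootstrap
./bootstrap --unprivileged --prefix "$HOME/pkg"

# Afterwards, packages build from source with the bootstrapped bmake, e.g.:
#   cd ../editors/nano && $HOME/pkg/bin/bmake install
```

Whether the RISC-V Linux target needs extra tweaks on top of that is exactly the open question.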
Wish I had more brain and time to really dig into it and build something myself but I’m afraid that ain’t gonna happen anytime soon :sweat_smile:
There’s probably also a reason that there isn’t a current Linux distribution using this system :smiling_face_with_tear:

So, my toolchain container is ready.
https://hub.docker.com/r/ingwiephoenix/openadk/tags

This. Took. Forever. XD But hey, it’s there. I will use this to automatically build myself all the way across Alpine’s packages (aports) to hopefully regenerate the alpine:3.19 tag and FINALLY silence some of the containers and projects I am actually trying to run xD Bah, glad this is done. @.@

BSD on RISC-V… Interesting! Will check out the wiki, sounds intriguing.

Ahh… So, you mean utilizing -mtune, -mcpu and friends with GCC for fine-tuning and optimization. Gotcha - makes sense. :slight_smile:
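(Concretely, that tuning could look roughly like this for the JH7110’s SiFive U74 cores - treat the exact flag spellings as placeholders and check your GCC version’s RISC-V documentation:)

```shell
# Example tuning flags for the VisionFive 2 (JH7110, SiFive U74) -
# illustrative values, verify against your toolchain before use.
export CFLAGS="-O2 -march=rv64gc -mabi=lp64d -mcpu=sifive-u74"
export CXXFLAGS="$CFLAGS"
```

That’s the difference between "runs on any RISCV64" and "scheduled for this particular pipeline".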

Perhaps, lol. I mean, even Gentoo offers binaries now… o.o’

1 Like

Me too! I haven’t had HDMI output since moving away from the 5.x kernel… so I’d be happy to get it back, even if only to display some random splash screen. Once I’ve got my containers organized and sorted, this’ll be my next goal!

I also already sent a message to Armbian asking how best to introduce those forked trees into a full build. Long-term, I’d love to build both Alpine and Armbian for the VF2 from their stable, verified and tested releases, and utilize those fork repos in the meantime, until they are fully upstream - which shouldn’t take too long.

1 Like

This powervr GPU driver should be in 6.8 already:

1 Like