K3S on VisionFive 2

Good day all,

Just before we slide into 2024, let me share some experiences with running a port of K3S on RISCV64, specifically on the StarFive VisionFive 2 and Sipeed's LicheePi 4A.

I am currently busy porting KIND to RISCV64 myself, and I studied the work in GitHub - CARV-ICS-FORTH/kubernetes-riscv64: Status of work on running Kubernetes on RISC-V.

This repo is currently pinned to v1.27.3+k3s1 and has not seen any activity for 5 months. I am checking whether I can get it moving again by working on my own fork of the tree.

For those of you that would like to try:

I have tested using 4 StarFive VisionFive 2 boards with 8GiB, the latest firmware, and the CWT19 and CWT20 ArchLinux images. Please do install/replace iptables with iptables-nft.
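
On ArchLinux that boils down to the following (pacman will ask to replace the conflicting plain iptables package):

sudo pacman -S iptables-nft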

The instructions in the README.md of the repo can be followed, but here are my instructions in short:

  1. On each of the nodes for your K3S cluster, download the K3S binary and the K3S installer script. The K3S binary has been specifically compiled for RISCV64; the installer script is generic.

    In my case:

    a) vision52-n06 is the K3S server (master)
    b) vision52-n05, vision52-n14 and vision52-n15 are the K3S agents (nodes)

    Make sure you have curl installed, and note that writing to /usr/local/bin requires root (use sudo where needed):

    curl -LO https://github.com/CARV-ICS-FORTH/k3s/releases/download/20230721/k3s-riscv64.gz.aa
    curl -LO https://github.com/CARV-ICS-FORTH/k3s/releases/download/20230721/k3s-riscv64.gz.ab
    curl -LO https://github.com/CARV-ICS-FORTH/k3s/releases/download/20230721/k3s-riscv64.gz.ac
    cat k3s-riscv64.gz.* | gunzip > /usr/local/bin/k3s
    chmod +x /usr/local/bin/k3s
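
    A quick sanity check that the reassembled binary actually runs (my own addition, not from the repo's README):

    /usr/local/bin/k3s --version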

    Then download the generic installer script:

    curl -sfL https://get.k3s.io > k3s-install.sh

    Do this on all your nodes, both the server (master) and the agents (nodes).
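
    If you manage the boards over SSH, a loop over the node names saves some typing (just a convenience sketch using my hostnames; adjust to yours):

    for h in vision52-n05 vision52-n06 vision52-n14 vision52-n15; do ssh $h 'curl -sfL https://get.k3s.io > k3s-install.sh'; done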

  2. On the K3S server node (master):

    INSTALL_K3S_SKIP_DOWNLOAD="true" bash -x k3s-install.sh

    This will make sure that a K3S server node is installed and that the images that get pulled are the RISCV64 or multiarch versions.

    You can check the progress with:

    sudo journalctl -u k3s -lf
    

    Once done, you can check whether your k3s-server is up and running with:

    sudo k3s kubectl get nodes

    Now collect the token by executing:

    sudo cat /var/lib/rancher/k3s/server/token

    Record this token, as you are going to need it for your K3S agents to join this K3S server.
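
    If you want to avoid copy-pasting, you can also capture it into a shell variable right away on the server (a small convenience of my own, not from the repo's README):

    TOKEN=$(sudo cat /var/lib/rancher/k3s/server/token)
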
  3. On each of your k3s-agents (nodes), execute:

    INSTALL_K3S_SKIP_DOWNLOAD="true" K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> bash -x k3s-install.sh

    This will make the system join the K3S server node (master).

    Assign K3S_TOKEN the value of the token you recorded at step 2.
    Specifying a K3S_URL that points to your k3s-server (master) automatically makes this machine a K3S agent (node/worker).
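
    Freshly joined agents show up with ROLES <none>. If you want them to read worker instead, you can label them from the server; this is purely cosmetic and my own addition, not part of the repo's instructions:

    sudo k3s kubectl label node vision52-n05 node-role.kubernetes.io/worker=worker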

On the K3S-server you can check again with:

sudo k3s kubectl get nodes

to verify that all three agent nodes have joined your k3s-server.

[pascal@vision52-n06 ~]$ sudo k3s kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
vision52-n05   Ready    <none>                 25h     v1.27.3+k3s-9d376dfb-dirty
vision52-n14   Ready    <none>                 7h      v1.27.3+k3s-9d376dfb-dirty
vision52-n15   Ready    <none>                 6h58m   v1.27.3+k3s-9d376dfb-dirty
vision52-n06   Ready    control-plane,master   29h     v1.27.3+k3s-9d376dfb-dirty

Testing:

[pascal@vision52-n06 validate]$ helm upgrade --install tst helm-chart-containers/
Release "tst" has been upgraded. Happy Helming!
NAME: tst
LAST DEPLOYED: Sun Dec 31 15:35:07 2023
NAMESPACE: default
STATUS: deployed
REVISION: 3
NOTES:

Additional Instructions

The containers demo application has been deployed in namespace: default.

Verify the Deployment

You can verify the deployment by running:
kubectl get deployments --namespace default tst-containers

Find Active Pods

To find the active pods, use:
kubectl get pods --namespace default -l "app.kubernetes.io/name=containers,app.kubernetes.io/instance=tst"

Connect to Pod IPs

To connect to a pod, first find the pod’s IP with:
kubectl get pods --namespace default -o wide

Then connect using:
curl http://<pod-ip>:8080

Connect to the Service

To connect to the service, use:
kubectl get svc --namespace default tst-containers
curl http://<service-ip>:8080

[pascal@vision52-n06 validate]$ kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS       AGE     IP          NODE           NOMINATED NODE   READINESS GATES
tst-containers-769fcf8cc6-hxcsw   1/1     Running   1 (124m ago)   6h50m   10.42.2.3   vision52-n14   <none>           <none>

[pascal@vision52-n06 validate]$ curl 10.42.2.3:8080

                  (((((((((                  
               .(((((((((((((((((.             
           .((((((((((((&((((((((((((.         
       /((((((((((((((((@((((((((((((((((/     
      ((((((((((((((((((@((((((((((((((((((    
     *(((((##((((((@@@@@@@@@@@((((((%#(((((*   
     (((((((@@@(@@@@#((@@@((#@@@@(@@@(((((((   
    *(((((((((@@@@(((((@@@(((((@@@@(((((((((,  
    (((((((((@@@%@@@@((@@@((@@@@%@@@(((((((((  
   .(((((((((@@((((@@@@@@@@@@@((((@@(((((((((. 
   (((((((((&@@(((((@@@(((@@@(((((@@&((((((((( 
   (((((((((&@@@@@@@@@@#(#@@@@@@@@@@&((((((((( 
  ((((((@@@@@@@@(((((@@@@@@@(((((&@@@@@@@((((((
  (((((((((((%@@((((%@@@(@@@%((((@@&(((((((((((
   ((((((((((((@@@((@@%(((%@@((@@@(((((((((((( 
     (((((((((((#@@@@%(((((&@@@@#(((((((((((   
      /(((((((((((@@@@@@@@@@@@@(((((((((((/    
        (((((((((@@(((((((((((@@(((((((((      
          (((((((&(((((((((((((&(((((((        
           /(((((((((((((((((((((((((/         
             (((((((((((((((((((((((           

This container is running in KUBERNETES (K3S) v1.27.3+k3s-9d376dfb-dirty
Cluster has been created: 29h6m3.563151875s ago.
Cluster uptime approx: 7h2m40.586427281s
in POD tst-containers-769fcf8cc6-hxcsw (10.42.2.3) on NODE vision52-n14 / riscv64

[pascal@vision52-n06 validate]$ kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes       ClusterIP   10.43.0.1       <none>        443/TCP    29h
tst-containers   ClusterIP   10.43.184.245   <none>        8080/TCP   29h

[pascal@vision52-n06 validate]$ curl 10.43.184.245:8080

                    (((((((((                  
               .(((((((((((((((((.             
           .((((((((((((&((((((((((((.         
       /((((((((((((((((@((((((((((((((((/     
      ((((((((((((((((((@((((((((((((((((((    
     *(((((##((((((@@@@@@@@@@@((((((%#(((((*   
     (((((((@@@(@@@@#((@@@((#@@@@(@@@(((((((   
    *(((((((((@@@@(((((@@@(((((@@@@(((((((((,  
    (((((((((@@@%@@@@((@@@((@@@@%@@@(((((((((  
   .(((((((((@@((((@@@@@@@@@@@((((@@(((((((((. 
   (((((((((&@@(((((@@@(((@@@(((((@@&((((((((( 
   (((((((((&@@@@@@@@@@#(#@@@@@@@@@@&((((((((( 
  ((((((@@@@@@@@(((((@@@@@@@(((((&@@@@@@@((((((
  (((((((((((%@@((((%@@@(@@@%((((@@&(((((((((((
   ((((((((((((@@@((@@%(((%@@((@@@(((((((((((( 
     (((((((((((#@@@@%(((((&@@@@#(((((((((((   
      /(((((((((((@@@@@@@@@@@@@(((((((((((/    
        (((((((((@@(((((((((((@@(((((((((      
          (((((((&(((((((((((((&(((((((        
           /(((((((((((((((((((((((((/         
             (((((((((((((((((((((((           

This container is running in KUBERNETES (K3S) v1.27.3+k3s-9d376dfb-dirty
Cluster has been created: 29h7m32.991069524s ago.
Cluster uptime approx: 7h4m10.009904103s
in POD tst-containers-769fcf8cc6-hxcsw (10.42.2.3) on NODE vision52-n14 / riscv64

If there are any questions, just let me know.

Best wishes for 2024!!!

Pascal van Dam


Hi, can you make a tutorial on how to build K3S on the VisionFive?

Good morning AntoFox,

Currently busy with that. 🙂

(including getting kind and RKE2 running on riscv64)

I will let you know when they are finished, ok?

Kind regards,

Pascal


Hello,

For those of you wanting to run some containers on K3S on RISC-V, I have compiled some:

  • mariadb
  • phpmyadmin
  • postgresql
  • pgadmin
  • gitea
  • zabbix-server
  • zabbix-web
  • zabbix-agent

You can find them on Docker Hub under my account: allardkrings/riscv64-*
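
For example, to spin up the MariaDB image on the cluster (a sketch, assuming the image follows the riscv64-* naming above and, like the official mariadb image, takes its root password from an environment variable):

kubectl run mariadb --image=allardkrings/riscv64-mariadb --env=MARIADB_ROOT_PASSWORD=changeme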

Kind regards


Nice!

Btw, the Allard Krings of B/CICT / BAC fame? Then welcome, old colleague.

Kind regards,

Pascal

Hello,

now Jenkins is available too

happy hacking!

PS: Greetings to you, Pascal.


Hi there,

I have installed Argo Workflows on my K3S cluster.

Instructions can be found here:

How to install Argo Workflows

Then I created a Maven image and an OpenLiberty image (both can be found on Docker Hub):

allardkrings/riscv64-maven
allardkrings/riscv64-liberty

Then I created an Argo Workflow that clones my Java code from Gitea, compiles it to a WAR file, installs the WAR file into the Open Liberty image, builds a new image, and then pushes the image to Docker Hub.

The WorkflowTemplate and the Maven pom.xml can be found here:

Argo workflow template and pom.xml
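
Submitting it then boils down to something like this (a sketch; replace <template-name> with the name of the WorkflowTemplate from the linked repo):

argo submit -n argo --from workflowtemplate/<template-name> --watch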

Happy Hacking!
