Improvements for the next Image release

You should try “sudo apt install openssl”
The Debian sid version (3.0.3-8, 2022-06-22) is about 7 months old, and since it is OpenSSL 3.0.x it should add access to additional ciphers that are not present in 1.1.1.
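
For anyone curious what the newer package actually brings, the ciphers and providers each OpenSSL exposes can be compared directly on the board (a rough sketch; exact package names and output depend on the image):

    # confirm which OpenSSL the image ships and what the Debian packages are
    openssl version
    dpkg -l openssl libssl3
    # list the symmetric ciphers available; 3.x additionally reports providers
    openssl list -cipher-algorithms
    openssl list -providers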

What should be reported back? This is a heavily customised latest longterm kernel, running on a custom buildroot that is then presumably Debian(ised) with “debootstrap” pointed at a static point-in-time sid RISC-V repository that is compatible with the kernel and the installed baseline libraries. Because there are no “stable” RISC-V Debian releases yet, some packages are in “Testing”, which is a good sign (if a package is in Testing it can potentially make it into the next Debian release), but most are currently in Sid (“Unstable”).
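
For reference, pinning a rootfs to a fixed point in the sid riscv64 archive looks roughly like this (a sketch only; the snapshot timestamp is a placeholder, and the keyring path and mirror assume riscv64 is still served from debian-ports):

    # bootstrap a riscv64 sid rootfs from a frozen snapshot of the archive
    sudo apt install debootstrap debian-ports-archive-keyring qemu-user-static
    sudo debootstrap --arch=riscv64 --foreign \
        --keyring=/usr/share/keyrings/debian-ports-archive-keyring.gpg \
        sid ./rootfs \
        https://snapshot.debian.org/archive/debian-ports/20230101T000000Z/
    # finish the second stage inside the target (or under qemu-user-static)
    sudo chroot ./rootfs /debootstrap/debootstrap --second-stage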

I was going to suggest that, for now at least, the time would be better spent testing and debugging drivers/crypto/starfive so that it can be pushed upstream. But it looks like it was pushed upstream last month; it just needs to land in the mainline Linux kernel.
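
For anyone who wants to check whether their running kernel already carries it, something along these lines should show whether the driver is built and whether its algorithms are registered (the config symbol and driver-name strings are assumptions, so grep loosely):

    # is the StarFive crypto driver enabled in this kernel build?
    zcat /proc/config.gz 2>/dev/null | grep -iE 'STARFIVE|JH7110'
    grep -iE 'STARFIVE|JH7110' /boot/config-$(uname -r) 2>/dev/null
    # are its implementations registered with the kernel crypto API?
    grep -B2 -A3 -i starfive /proc/crypto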

The encryption engine of the JH7110 has the following features.

  • AES
    ◦ Support encryption and decryption
    ◦ Support 128-bit/192-bit/256-bit of key size
    ◦ Support ECB/CBC/CFB/OFB/CTR/CCM/GCM operation modes
    ◦ Support SCA countermeasure
    ◦ Support DMA mode
  • DES/3DES
    ◦ Support standard DES with 64(56)-bit of key size
    ◦ Support 3-DES with 128(112)-bit or 192(168)-bit of key size
    ◦ Support ECB/CBC/CFB/OFB operation modes
    ◦ Support SCA countermeasure
    ◦ Support DMA mode
  • HASH
    ◦ Support SHA0/1
    ◦ Support SHA224/256/384/512
    ◦ Support HMAC_SHA0/HMAC_SHA1
    ◦ Support HMAC_SHA224/256/384/512
    ◦ Support DMA mode
  • PKA (Public Key Accelerator)
    ◦ Support modular addition from 32-bit to 2048-bit with granularity of 32-bit
    ◦ Support modular subtraction from 32-bit to 2048-bit with granularity of 32-bit
    ◦ Support modular multiplication from 32-bit to 2048-bit with granularity of 32-bit
    ◦ Support modular exponentiation from 32-bit to 2048-bit with granularity of 32-bit
    ◦ Support Montgomery modular multiplication from 32-bit to 2048-bit with granularity of 32-bit
    ◦ Support up to 512-bit of point addition/double under prime field
    ◦ Support SCA countermeasure
  • ECC for 512-bit
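
Once the driver lands, a rough way to confirm that userspace can actually reach those AES modes is through the kernel's AF_ALG interface, for example via OpenSSL's afalg engine (sketch only; whether the afalg engine is shipped in the Debian openssl build, and whether the hardware implementation wins the priority race against the software one, still needs checking):

    # what has the kernel crypto API registered for AES?
    grep -B1 -A3 '^name.*aes' /proc/crypto | head -40
    # benchmark through the kernel via the AF_ALG engine, if present
    openssl engine -t afalg
    openssl speed -evp aes-256-cbc -engine afalg
    # compare against the plain userspace software path
    openssl speed -evp aes-256-cbc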

I fully agree, but an intermediate option that would help bring the JH7110 closer to having more patches submitted to the upstream Linux kernel would be to supplement the entropy pool using the built-in hardware TRNG.
EDIT: They already have :smile:

The TRNG module of the JH7110 provides the following features.

  • Ring-oscillator based entropy source
  • Support LFSR based digital post process
  • Support self re-seeding
  • 256-bit random number generation
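
On the kernel side that TRNG should surface as a hwrng device; a quick way to check that it is present and feeding the entropy pool would be something like the following (device names and the exact rng-tools package are assumptions):

    # which hardware RNGs does the kernel see, and which one is active?
    cat /sys/class/misc/hw_random/rng_available
    cat /sys/class/misc/hw_random/rng_current
    # how full does the kernel report its entropy pool to be?
    cat /proc/sys/kernel/random/entropy_avail
    # optionally run rngd to keep feeding the pool from the hwrng
    sudo apt install rng-tools5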

Technically, for the Linux kernel they are actually using the very latest longterm kernel. Looking at the spacing between longterm kernel selections in the past, a new one should be chosen very soon; it is a bit overdue, but is probably waiting on a really nice release. Some Linux distributions take it upon themselves to support other kernels for longer and backport patches, but picking one distribution and using its LTS kernel as your baseline would not necessarily accelerate getting patches added to the mainline Linux kernel.

One thing to always keep in mind is that developers love stability: a baseline tool-set that does not change every day, a baseline environment that is constant for at least the duration of one full release phase of a project, and a baseline kernel that ideally remains constant for as long as possible. If everything is constantly changing, you literally have no idea why things that were working, and should still be working, are now breaking, and your time is constantly wasted going sideways. Did I make a mistake, is this an intermittent bug, or did somebody in an external organisation change something somewhere that broke the thing I was working on? Then you have to waste time working out why and fixing it instead of making forward progress. It would be like trying to build a skyscraper while someone else is continuously digging up and replacing sections of the foundation. Everything will still get finished eventually, but it will take a lot longer.

The way to look at the current OS is as a stepping stone to get everything to a level where it can all be pushed upstream to the mainline Linux kernel and then added to all Linux distributions. There may be some blobs, like for the GPU, until it is open sourced, which will eventually happen. The other thing is that upgrading the baseline wastes a lot of time. In my mind they are better off sticking with the latest longterm kernel, getting EVERYTHING working, debugged and fully polished, and only upgrading their baseline when the next longterm kernel is selected.
