What kind of SSD can be used?

Following @ccharon’s tip, I too bought a 256 GB Transcend 110S for £20 and I’m very happy with it; it’s about 60 MB/s faster than my Samsung 980 PRO on the VF2:

[root@ArchVF2 user]# uname -a
Linux ArchVF2 5.15.2-cwt13 #2 SMP PREEMPT Thu Jun 1 14:42:43 +07 2023 riscv64 GNU/Linux
[root@ArchVF2 user]# hdparm -t /dev/nvme0n1

 Timing buffered disk reads: 650 MB in  3.01 seconds = 216.09 MB/sec

Please add the OSCOO ON900 M.2 PCIe Gen 3 NVMe to your list. It’s been behaving well for me.


Could everyone reporting working drives also include the output of uname -a and hdparm -t /dev/nvme0n1, please, so we can find out which drives offer the best performance on the VF2?


I have been using a Patriot P300 M.2 PCIe Gen 3 x4 256GB. Working well and very good price of around $18.

uname -a
Linux visionfive2-sid 5.15.0-starfive #1 SMP Sun Mar 26 12:29:48 EDT 2023 riscv64 GNU/Linux

hdparm --direct -t /dev/nvme0n1
 Timing O_DIRECT disk reads: 588 MB in  3.00 seconds = 195.97 MB/sec

I can get the Samsung 970 EVO Plus M.2 NVMe SSD (MZ-V7S1T0BW), 1 TB, PCIe 3.0 at Amazon in Germany for €54. Samsung probably has 1 TB, PCIe 3.0 SSDs on the shelves like ribbon noodles in the kitchen and needs to sell them quickly.

root@starfive:~# neofetch ; uname -a; hdparm -t /dev/nvme0n1; nvme list; hdparm -t /dev/mmcblk1

OS: Debian GNU/Linux bookworm/sid riscv64
Host: StarFive VisionFive V2
Kernel: 5.15.0-starfive
Uptime: 19 mins
Packages: 1136 (dpkg)
Shell: bash 5.2.2
CPU: (4) @ 1.500GHz
Memory: 309MiB / 7925MiB

Linux starfive 5.15.0-starfive #1 SMP Sun Jun 11 07:48:39 UTC 2023 riscv64 GNU/Linux

Timing buffered disk reads: 542 MB in  3.00 seconds = 180.52 MB/sec

Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            PHOC310---------     INTEL SSDPEK1A118GA                      1         118.41  GB / 118.41  GB    512   B +  0 B   U5110550

Timing buffered disk reads:  68 MB in  3.06 seconds =  22.25 MB/sec

SD (boot drive) is a SanDisk 256GB Extreme PRO microSD UHS-I.

Playing around with my board tonight I noticed the current CPU frequency can have a significant effect on the results of an hdparm read test.

At first I was telling the CPU to pay attention with cat /dev/urandom | xz > /dev/null but then decided to poke it properly by changing the governor.

As you can see from the screenshot, changing to the performance governor and talking to the device directly (hdparm --direct) nearly doubles the read speed, from 186 MB/s to 360 MB/s. Just waking the system up for the test increased it by over 30%.

So whenever comparing two SSDs with hdparm, it’s important to note the CPU frequency at the time the test was performed, either by checking it explicitly or by inferring it from the governor and other load on the system. Or just poke it first to make sure it’s maxed out, for the highest “score” from any drive.
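To make runs repeatable, something like the helper below can pin every core to the performance governor first. This is only a sketch using the standard cpufreq sysfs paths; the CPUFREQ_ROOT variable and set_governor name are my own additions so the path isn’t hard-coded.

```shell
#!/bin/sh
# Sketch: pin all cores to the performance governor before benchmarking.
# CPUFREQ_ROOT is a hypothetical override; it defaults to the real sysfs tree.
CPUFREQ_ROOT="${CPUFREQ_ROOT:-/sys/devices/system/cpu}"

set_governor() {
    gov="$1"
    for f in "$CPUFREQ_ROOT"/cpu[0-9]*/cpufreq/scaling_governor; do
        # Skip entries we cannot write (e.g. when not running as root)
        [ -w "$f" ] && echo "$gov" > "$f"
    done
    return 0
}

# Usage (as root), then confirm the frequency before running the benchmark:
#   set_governor performance
#   cat "$CPUFREQ_ROOT"/cpu0/cpufreq/scaling_cur_freq
#   hdparm --direct -t /dev/nvme0n1
```

Switching back to ondemand (or schedutil) afterwards is just another set_governor call.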

Drive used in my test was an Intel Optane P1600X 118GB.


I just built a new image and wanted to test it without destroying the existing data on my NVMe. So, I swapped it out (Hikvision E3000 256GB) and replaced it with the XPG SX8200 Pro 256GB. However, I encountered a lot of errors on the serial console:

[    2.376375] pcie_plda 2c000000.pcie: plda_pcie_handle_intx_irq unexpected IRQ, INT0

The weird part is that if I boot the board from SD first, the NVMe seems to work just fine. I tried using my previous kernel parameters from NVMe I/O timeouts, but it didn’t help at all. I even tried using the most powerful USB-PD adapter that I have, but it didn’t help either.


Samsung 970 Pro 1TB.
sudo hdparm --direct -t /dev/nvme0n1
Timing O_DIRECT disk reads: 980 MB in 3.00 seconds = 326.19 MB/sec
That is without changing cpu governor.


Oh wow!

That’s easily the fastest NVMe read speed I’ve seen anyone report on the VF2; that’s closer to what I was expecting to get. Which OS and kernel version are you using?


For this test it was the standard Debian image with light desktop, 202306.


Thanks for confirming that! It sounds like we’ve got a new drive to beat. It’s a shame the Samsung 980 Pro performs quite poorly with the VF2, which is why this result surprised me so much.


For some reason you can’t rely on SSD performance in a typical PC as a guide to performance in the VF2. For example, I have a WD Black 850, which is a great performer in a PC but not so great in the VF2.


The 980 is a PCIe 4.0 SSD, so it likely requires more power (and more lanes) than the VF2 can provide. It doesn’t make sense to buy a PCIe 4.0+ SSD just for use on the VF2, but I wanted to use it on the PC too and was disappointed that it performed so poorly on the VF2: about 160 MB/s. I bought a much cheaper SSD that performs much better, but it still only manages 230 MB/s.

Such large differences in performance between the different makes and models!


The 980 Pro performs great in a typical PC, as long as it gets 4 lanes and maybe (just maybe) some cooling. Apparently NAND chips actually like high temperatures; the problem is the controller.


The PCIe bus in the JH7110 runs at 2.5 or 5.0 GT/s per lane. One PCIe 2.0 x1 lane is allocated to the VL805 USB 3.0 host controller, and the other PCIe 2.0 x1 lane is allocated to the M.2 M-key SSD socket.

A single PCIe 2.0 x1 lane signals at 5 GT/s, which is 625 MB/second of raw bandwidth, but there is an 8b/10b encoding overhead (every 8 bits sent across the bus is encoded as 10 bits), so after the 20% overhead the maximum theoretical throughput is 500 MB/second (~476.8 MiB/second).

326.19 MB/sec disk read speed is ~65% of the maximum theoretical throughput.
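Working the lane arithmetic out explicitly (a quick sketch; the inputs are the 5 GT/s Gen2 line rate and the 326.19 MB/s measurement reported above):

```shell
#!/bin/sh
# Lane budget sketch: PCIe 2.0 x1 at 5 GT/s with 8b/10b encoding.
LINE_RATE=5000000000                 # 5 GT/s = 5e9 bits/s on the wire
RAW=$(( LINE_RATE / 8 ))             # 625,000,000 B/s raw signalling
USABLE=$(( RAW * 8 / 10 ))           # 8b/10b: 10 wire bits carry 8 data bits
echo "theoretical max: $(( USABLE / 1000000 )) MB/s"
# The measured 326.19 MB/s as a fraction of that theoretical maximum:
awk -v u="$USABLE" 'BEGIN { printf "efficiency: %.0f%%\n", 326.19e6 * 100 / u }'
```

Real-world throughput will always sit below the theoretical number because of packet headers, flow control, and protocol overhead on top of the line encoding.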

EDIT: Bad news: “This product has been discontinued.”


USB-C supply is dependent on the cable.
Mine has a built-in cable with quite a large diameter. My 65W supply has a monster cable.
The 45W supply switches straight to 12V.
The problem will be current spikes and the dreaded voltage drop.
I would need to review the track work on the PCB.
I’ve bought a cheap-ass KIOXIA 256GB SSD and it works great. As mentioned above, the PCIe bus is a bit weak, so there’s no need for an amazing SSD.

I agree with you that basically any SSD will do the job well with this board. But the Samsung 970 Pro is not amazing at all. It’s at least two generations old and it will lose to any more current SSD -->in a PC<--. However, for reasons unknown, it performs better than those modern SSDs on this board. Of course I did not buy it for this project; I have a bunch of old SSDs from years of building and upgrading computers, and a bunch of them happen to be 970 Pros.


This is not how you should compare the two technologies. An SD card is not meant for intensive reads/writes anyway; it will wear out much more quickly. If you have a use case like video recording/processing or databases, you shouldn’t rely on an SD card, however fast it may be. It would be destroyed fairly quickly.
This is why having NVMe is so important for many applications.


I have had some failed microSD cards.
My preference is for simplicity: the fastest drive, with the root, swap, and home partitions on NVMe.
A large compile bogs down the JH7110. Next time I compile a kernel I’ll log the temperature (with fan/heatsink).
