M.2 and WiFi/BT?

Hi all.

I plan to purchase the 8 GB board and use it with the M.2 slot occupied by an NVMe disk. Some shops have posted a note that WiFi/BT won’t be available in that case. Is this true? And which NVMe drives does StarFive advise using?

The VF2’s M.2 slot is M-key. In theory, any M.2 M-key module with a PCIe interface can be used. WiFi/BT M.2 modules are A-key or E-key, so they cannot be connected directly.

No, no. I just want to be sure that the M.2 slot is fully functional on this board for SSD storage. WiFi can be arranged as a USB device, no problem with that for me.

Sorry I deleted my post, because I could not edit it!

You could use a “Wireless M.2 A+E Key Slot To M.2 M Key Wifi Bluetooth Adapter” or even a “M2 KEY Adapter Convert Card Riser” (plug the words in quotes into your search engine of choice and you should get some hits). But be careful that you do not accidentally get the opposite (M-key to A+E-key) of what you need. Compare the connectors and where the notches are before you buy.

Now my post comes after your reply (which makes no sense), but I’ll leave the contents of the post above in case it helps someone else.

USB 3.0 WiFi/Bluetooth is the way to go. As for NVMe, there are no public compatibility results yet (there is plenty of time before the boards ship); there are results for SD cards and USB flash drives. Looking at the commits to the Linux kernel fork, audio issues are still being debugged, which would be the reason for the HDMI issues listed. It might be worthwhile to wait until some makes and models that are known to work are listed before buying an NVMe M.2 drive. :hourglass_flowing_sand:


I think it eventually turns into “buy and try” :slight_smile: But first I need to get my VF2 shipped :)))


Intel did write the NVMe standard and added the original driver to the Linux kernel, so most devices should work (famous last words).

As I feared earlier, the “buy & try” concept proved true in my case :slight_smile: A Netac NV3000 refuses to work. I’ve added the following kernel options: pcie_aspm=off pci=nomsi, but no luck.
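A quick sanity check worth doing first (nothing board-specific): confirm the options actually reached the kernel before digging further.

```shell
# Print the kernel command line the system actually booted with;
# the added options should appear here if the bootloader passed them on.
cat /proc/cmdline
```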


My bad. I used a power bank with 5 V only, and whilst the SBC itself runs quite stably, the total power isn’t enough for the NVMe disk. I changed to a 12 V power adapter and it works flawlessly!


Can we put this to bed already? NVMe on the VisionFive 2 works. There are no reports that it doesn’t and plenty that it does. It appears to top out at 187 MB/s, though, due to being PCIe Gen 2 x1, but my Samsung EVO works fine.

ADD: of course, you need sufficient power. I use a 5.2 V 3 A power adapter, originally for the Raspberry Pi 4.


I wonder about the 187 MB/s. Shouldn’t it be around 500 MB/s then? Is that the read speed?
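For reference, the raw per-lane ceiling for Gen 2 works out like this (8b/10b line coding; TLP/DLLP protocol overhead reduces it a bit further in practice):

```shell
# PCIe Gen 2 runs 5 GT/s per lane with 8b/10b encoding, so only 8 of
# every 10 bits on the wire carry data:
#   5 GT/s * 8/10 = 4 Gb/s = 500 MB/s per lane
echo $(( 5 * 8 / 10 * 1000 / 8 ))   # MB/s per Gen 2 lane
```

So 187 MB/s is well below even the practical ceiling for a Gen 2 x1 link.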

This is a measured, not a theoretical, result:

$ sudo dd bs=4k count=16384000 if=/dev/nvme0n1 of=/dev/zero status=progress
14978105344 bytes (15 GB, 14 GiB) copied, 61 s, 246 MB/s

The same Samsung EVO 970 NVMe SSD peaks at several GB/s on a modern desktop.
EDIT: I originally used a 1 MiB block size, but I got a better result with 4K blocks.


Thank you.

I wonder if dd itself is somehow slowing things down.

Did you try something like

scp -rv src_folder user@localhost:/dest_folder

Or maybe add

oflags=sync

It can only get worse from my peak result, so I’m not going to waste my time. However, it really does look like it’s training at Gen 1 speed. Annoyingly, I can’t seem to tell which speed it actually got. I can see that it is able to link at Gen 2:

$ sudo dmesg|grep GT
[    3.633539] pci 0001:01:00.0: 4.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x1 link at 0001:00:00.0 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)

~~but unlike on my x86 hosts, I don’t see the actual speed in lspci:~~
NVM, it’s just printed differently: LnkSta: Speed 5GT/s (downgraded), Width x1 (downgraded), so it’s Gen 2. I can’t immediately explain why we aren’t getting closer to 500 MB/s.

$ sudo lspci -s 0001:01:00.0 -vv
0001:01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 (prog-if 02 [NVM Express])
	Subsystem: Samsung Electronics Co Ltd SSD 970 EVO Plus 1TB
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 59
	Region 0: Memory at 38000000 (64-bit, non-prefetchable) [size=16K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [70] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 5GT/s (downgraded), Width x1 (downgraded)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [b0] MSI-X: Enable+ Count=33 Masked-
		Vector table: BAR=0 offset=00003000
		PBA: BAR=0 offset=00002000
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [148 v1] Device Serial Number 00-00-00-00-00-00-00-00
	Capabilities: [158 v1] Power Budgeting <?>
	Capabilities: [168 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [188 v1] Latency Tolerance Reporting
		Max snoop latency: 0ns
		Max no snoop latency: 0ns
	Capabilities: [190 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
			  PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=0us LTR1.2_Threshold=0ns
		L1SubCtl2: T_PwrOn=10us
	Kernel driver in use: nvme

Cool. Thank you. I was just expecting it to land somewhere in the mid-to-high four hundreds…

My tests w/ NVMe:

time dd if=/dev/urandom bs=1M count=8000 status=progress of=./tempfile

8374976512 bytes (8.4 GB, 7.8 GiB) copied, 194 s, 43.2 MB/s
8000+0 records in
8000+0 records out
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 194.762 s, 43.1 MB/s

real 3m15.005s
user 0m0.070s
sys 3m8.005s
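Note that with if=/dev/urandom the figure above is likely bounded by how fast the CPU can generate random bytes (the sys time of 3m8s out of a 3m15s wall time points that way) rather than by the SSD. A sketch of a write test with a cheap data source instead; write_test is just a made-up helper name, and conv=fsync (GNU dd) makes dd flush to the device before reporting the final figure:

```shell
# Hypothetical helper: time a write using /dev/zero as the source, so the
# SSD and filesystem are the bottleneck rather than the kernel's RNG.
write_test() {
    out="$1"     # target file on the filesystem under test
    mb="$2"      # how many MiB to write
    dd if=/dev/zero of="$out" bs=1M count="$mb" conv=fsync 2>&1 | tail -n 1
    rm -f "$out"
}
# e.g.:  write_test ./tempfile 8000
```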


oflag=sync (no s) made no difference, as I expected.

I’m curious: does that increase or stay exactly the same if you change the block size?

e.g.
sudo dd bs=4k count=16384000 if=/dev/nvme0n1 of=/dev/zero status=progress

sudo dd bs=8k count=8192000 if=/dev/nvme0n1 of=/dev/zero status=progress
sudo dd bs=16k count=4096000 if=/dev/nvme0n1 of=/dev/zero status=progress
sudo dd bs=512k count=128000 if=/dev/nvme0n1 of=/dev/zero status=progress
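The commands above could also be wrapped in a small loop — a sketch, assuming GNU dd (for iflag=count_bytes) and run as root; bs_sweep is just a made-up helper name:

```shell
# Read the same total volume at each block size so the runs are comparable;
# count_bytes makes "count" a byte total instead of a block count (GNU dd),
# and tail keeps only dd's final throughput line.
bs_sweep() {
    dev="$1"; bytes="$2"; shift 2
    for bs in "$@"; do
        printf '== bs=%s ==\n' "$bs"
        dd bs="$bs" if="$dev" of=/dev/null count="$bytes" iflag=count_bytes 2>&1 | tail -n 1
    done
}
# e.g. (as root):  bs_sweep /dev/nvme0n1 64G 4k 8k 16k 512k
```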

And if you change your CPU governor from :thinking: “ondemand” :thinking: to “performance”, is it still the same? (I’ve no VF2 hardware yet, so no idea if what I am guessing is right or possible.) Basically you are telling the CPU cores to always run at their maximum frequency, using more power even when idle.
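Checking and switching the governor goes through the standard cpufreq sysfs files — a sketch, assuming the VF2 kernel exposes them; show_governors is just a made-up helper name:

```shell
# Print the current governor for each CPU; the loop is a no-op on
# systems without cpufreq sysfs support.
show_governors() {
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        [ -e "$g" ] || continue
        printf '%s: %s\n' "$g" "$(cat "$g")"
    done
}
show_governors
# To switch (as root):
#   echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```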

What happens if you use a different hart? (There are 4 HARdware Threads available.)

$ sudo apt install util-linux
$ sudo taskset -c 0 dd bs=4k count=16384000 if=/dev/nvme0n1 of=/dev/zero status=progress
$ sudo taskset -c 1 dd bs=4k count=16384000 if=/dev/nvme0n1 of=/dev/zero status=progress
$ sudo taskset -c 2 dd bs=4k count=16384000 if=/dev/nvme0n1 of=/dev/zero status=progress
$ sudo taskset -c 3 dd bs=4k count=16384000 if=/dev/nvme0n1 of=/dev/zero status=progress

How about two data streams from different parts of the storage in parallel, using two harts?
$ ( sudo taskset -c 1 dd bs=4k count=8192000 skip=0 if=/dev/nvme0n1 of=/dev/zero status=progress ) & ( sudo taskset -c 2 dd bs=4k count=8192000 skip=8192000 if=/dev/nvme0n1 of=/dev/zero status=progress )

One thing I’ve learned is to never assume that everything will always work exactly the way you expect it to work.
