VisionFive 2: no 1 Gbps Ethernet, only 100M

The Ethernet links do not run at 1 Gbps but stay at 100M, even though all the info reports the speed as 1 Gbps; the port group on the Zyxel GS1900-24E switch is OK.
Same problem with no bonding configuration, too.

ethtool -i eth0

driver: st_gmac
version: Jan_2016
firmware-version:
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
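
For reference, ethtool -i only reports driver information; the negotiated link speed itself can be read with plain ethtool (a sketch, assuming the interface is named eth0):

# show negotiated speed, duplex and autonegotiation state
ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'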

cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v5.15.0

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Peer Notification Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: XX:XX:XX:XX:XX:XX
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: XX:XX:XX:XX:XX:XX

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
inet 192.168.1.46/24 brd 192.168.1.255 scope global bond0
valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
4: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff permaddr XX:XX:XX:XX:XX:XX
5: sit0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0

What is your board revision, A or B?

Could you run this command and post the result?

dmesg | grep ethernet | grep PHY

Does it say something like this?

[   20.769221] starfive-eth-plat 16030000.ethernet end0: PHY [stmmac-0:00] driver [YT8531 Gigabit Ethernet] (irq=POLL)
[   20.867906] starfive-eth-plat 16040000.ethernet end1: PHY [stmmac-1:00] driver [YT8531 Gigabit Ethernet] (irq=POLL)

or this?

[  132.937589] starfive-eth-plat 16030000.ethernet end0: PHY [stmmac-0:00] driver [YT8531 Gigabit Ethernet] (irq=POLL)
[  132.958560] starfive-eth-plat 16040000.ethernet end1: PHY [stmmac-1:00] driver [YT8512B Ethernet] (irq=POLL)

dmesg | grep ethernet | grep PHY
[ 14.239132] starfive-eth-plat 16030000.ethernet eth0: PHY [stmmac-0:00] driver [YT8531 Gigabit Ethernet] (irq=POLL)
[ 14.344373] starfive-eth-plat 16040000.ethernet eth1: PHY [stmmac-1:00] driver [YT8531 Gigabit Ethernet] (irq=POLL)
[ 14.564940] starfive-eth-plat 16030000.ethernet eth0: PHY [stmmac-0:00] driver [YT8531 Gigabit Ethernet] (irq=POLL)
[ 14.772432] starfive-eth-plat 16040000.ethernet eth1: PHY [stmmac-1:00] driver [YT8531 Gigabit Ethernet] (irq=POLL)

My board is a 1.3B.

Your dmesg shows that it is indeed a 1.3B, so both ports are 1 Gbps. So what do you mean by “stay on 100M”? That is, how did you measure the bandwidth?

1. Check whether the Ethernet port on your host is limited.
2. Or use ethtool to set the speed explicitly, to make sure it can be set, for example as in the sketch below.
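
A sketch of forcing the link with ethtool (assuming the interface is named eth0; adjust as needed):

# advertise only 1000/full and restart autonegotiation
sudo ethtool -s eth0 speed 1000 duplex full autoneg on
# then check what was actually negotiated
ethtool eth0 | grep -E 'Speed|Duplex'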

I measure the bandwidth with the transfer rate shown in FileZilla, which I use to move files from the server (VisionFive 2) to the host, my laptop with a gigabit Ethernet card (connected by Cat6 cable to the switch)… the speed never exceeded 12.9 MB/s.

I used ethtool to try setting “1000 duplex full autoneg on”, but that did not resolve it…

I took a new SD card, did a fresh OS install that was only updated/upgraded, no bond, and I used other ports on the switch (still gigabit ports, but with no port group configured). Same speed on each port of the VisionFive 2… always 12 MB/s… I suppose the card is bad…

There are many bottlenecks on our VF2; the most important one is the SD card, which physically can’t operate in high-speed mode. Another bottleneck is the CPU itself; if you’re transferring a file over ssh, sftp, or scp, OpenSSL cannot use the hardware crypto acceleration.
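
A rough sketch to separate the storage bottleneck from the network one (the /mnt/nvme path is just an example; oflag=direct bypasses the page cache):

# sequential write speed of the storage that backs the share
dd if=/dev/zero of=/mnt/nvme/dd-test bs=1M count=512 oflag=direct
rm /mnt/nvme/dd-test
# software AES throughput, since OpenSSL may not use the hardware crypto engine
openssl speed -evp aes-128-gcm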


The files on the VisionFive 2 are stored on NVMe, mounted with an ext4 filesystem.
Same slow story with Samba.

None of those are valid ways to measure network performance, because protocols like SSH or SMB add a lot of overhead. Install iperf3 and measure performance with that. Just start iperf3 -s
on your VF2, and then on any other computer do:

iperf3 -c <ip of the vf2>
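
For a more complete picture, a couple of optional flags may help (a sketch; -R reverses the test direction so the VF2 sends, -P 4 opens four parallel streams):

iperf3 -c <ip of the vf2> -R
iperf3 -c <ip of the vf2> -P 4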

I have done so, and here are my measurements, which are quite OK:

Connecting to host 192.168.1.3, port 5201
[  5] local 192.168.1.24 port 55260 connected to 192.168.1.3 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   107 MBytes   896 Mbits/sec  306    274 KBytes       
[  5]   1.00-2.00   sec   112 MBytes   938 Mbits/sec   55    235 KBytes       
[  5]   2.00-3.00   sec   109 MBytes   913 Mbits/sec   88    240 KBytes       
[  5]   3.00-4.00   sec   107 MBytes   899 Mbits/sec  132    263 KBytes       
[  5]   4.00-5.00   sec   108 MBytes   903 Mbits/sec  134    222 KBytes       
[  5]   5.00-6.00   sec   109 MBytes   913 Mbits/sec   93    259 KBytes       
[  5]   6.00-7.00   sec   109 MBytes   913 Mbits/sec  129    290 KBytes       
^C[  5]   7.00-7.84   sec  92.7 MBytes   927 Mbits/sec   39    240 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-7.84   sec   853 MBytes   912 Mbits/sec  976             sender
[  5]   0.00-7.84   sec  0.00 Bytes  0.00 bits/sec                  receiver
iperf3: interrupt - the client has terminated

Why don’t you see 1000 Mbit/s here, you might ask… well, TCP as a protocol also has some overhead. 1000 Mbit/s is the gross rate, not the net one.
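
As a rough back-of-the-envelope sketch (assuming a 1500-byte MTU and TCP timestamps enabled), the theoretical maximum TCP goodput on gigabit Ethernet is about 941 Mbit/s:

# each 1500-byte IP packet carries 1448 bytes of payload (20 IP + 20 TCP + 12 timestamp option)
# on the wire the frame occupies 1538 bytes (14 Ethernet header + 4 FCS + 8 preamble + 12 interframe gap)
awk 'BEGIN { printf "%.1f Mbit/s\n", 1000 * 1448 / 1538 }'   # -> 941.5 Mbit/s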


I am not interested in benchmark tests just to get big numbers… I need it to use in my infrastructure for day-to-day operations… but if the board puts too many limitations on using it at full speed… no Samba… no FileZilla… then I think it does not interest me… I use these tools every day… I have this infrastructure… and this board must be integrated into this environment… if this integration is not possible… I think I will use a normal motherboard with x86_64 tech.

One last thing… a normal share between two wired Win10 hosts on my LAN moves files at over 80 MB/s… simple, fast.
If the simplicity is lost… the project does not make sense…
… the VisionFive is perhaps a fantastic board… but I do not need it…

Well, the explanation has already been given by @cwt.
It is totally OK if only a specific application is of interest to you. But I do wonder why you then bought a development board using a new ISA in the first place.
And it is clearly advertised as such:
https://www.starfivetech.com/en/site/boards

make VisionFive 2 the best affordable RISC-V development board


You’re right about that. This board isn’t intended as a replacement for any x86 mini-PC or home server, not even close to the Raspberry Pi. It has limitations in both hardware and software. However, the purpose of this board is clearly for RISC-V development. If you try building a program or rebuilding the Linux kernel on this board (with NVMe), you’ll find its performance is surprisingly good, and memory usage during the build process is also efficient.


Oh… one addition… I cannot confirm the slow transfer rates in “practical applications”. Doing an scp from my work machine to the VF2:

linux-6.5.3.tar.xz                                                                          100%  133MB  24.7MB/s   00:05

That is quite close to what my work machine does with my Intel Atom-based home server (which would be around 28 MB/s).

You may be interested in this: Does the openssl/libressl already use hardware crypto engine? - #23 by cwt


Mmmm, I suppose you have a problem on your Atom board… today, two hosts with Celeron J1900 and Debian 11 run at 82 MB/s… with a simple Samba share…
In the future I will come back to this project… hoping… for a big step forward…

Two years ago… I used an ASUS EeeBox with an Intel N450 or N455, I don't remember which, and gigabit Ethernet… and an old Debian 9… same result as today… 80 MB/s… but it died… after 4 years of running 24/7, 365 days a year… bye bye.

…“Now my rsync is 2.5x faster, but still slower than 100Mbit/s”…
I share my files at over 11 megabytes per second with the VisionFive 2 (to be clear… stable at 11.8-11.9 megabytes per second)… I saturate the whole 100 Mbit/s bandwidth… and you get less than me?? With this trick/configuration?! OK…
Thank you.