The Ethernet links do not run at 1 Gbps; they stay at 100M, yet all the tools report a speed of 1 Gbps. The port group on the Zyxel GS1900-24E switch is fine.
Same problem with no bonding configuration, too.
ethtool -i eth0
driver: st_gmac
version: Jan_2016
firmware-version:
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
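Note that `ethtool -i` only reports driver information, not the negotiated link speed; the kernel exposes the latter in sysfs. As a quick cross-check, here is a minimal Python sketch (the function name is mine, and it assumes the standard Linux `/sys/class/net` layout):

```python
from pathlib import Path

def link_speed_mbps(iface, sysfs="/sys/class/net"):
    """Return the speed the kernel negotiated for `iface` in Mbit/s
    (e.g. 100 or 1000), or None if the attribute is unreadable
    (link down, virtual interface, driver without PHY reporting)."""
    try:
        return int((Path(sysfs) / iface / "speed").read_text().strip())
    except (OSError, ValueError):
        return None

print(link_speed_mbps("eth0"))
```

Running `cat /sys/class/net/eth0/speed` directly gives the same number; if it prints 100 while everything else claims 1000, the tools are lying.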
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.15.0
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Peer Notification Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: XX:XX:XX:XX:XX:XX
Slave queue ID: 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: XX:XX:XX:XX:XX:XX
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
inet 192.168.1.46/24 brd 192.168.1.255 scope global bond0
valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
4: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff permaddr XX:XX:XX:XX:XX:XX
5: sit0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
I measured the bandwidth via the transfer rate shown in FileZilla while moving files from the server (VisionFive 2) to my laptop's gigabit Ethernet card (connected to the switch by Cat6 cable)… the speed never exceeded 12.9 MB/s.
I took a new SD card, did a fresh OS install (only updated/upgraded), no bond, and used other ports on the switch (still gigabit ports, but with no port group configured). Same speed on every port of the VisionFive 2… always 12 MB/s… I suppose the card is bad…
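A quick arithmetic check (plain Python, using only the numbers quoted above) shows that 12–12.9 MB/s is exactly what a saturated 100 Mbit/s link delivers after framing overhead, while a true gigabit link would top out near 118 MB/s:

```python
def wire_speed_mbytes(link_mbps, efficiency=0.94):
    """Approximate usable payload rate on an Ethernet link.

    efficiency ~0.94 accounts for Ethernet/IP/TCP framing overhead;
    the exact figure depends on MTU and protocol, so treat this as
    a rough estimate, not a precise number."""
    return link_mbps / 8 * efficiency

print(f"100 Mbit/s -> ~{wire_speed_mbytes(100):.1f} MB/s")   # ~11.8 MB/s
print(f"1 Gbit/s  -> ~{wire_speed_mbytes(1000):.1f} MB/s")   # ~117.5 MB/s
```

So the observed transfer rate is consistent with the link actually running at 100 Mbit/s, whatever the tools report.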
There are many bottlenecks on our VF2; the most important one is the SD card, which physically can’t operate in high-speed mode. Another bottleneck is the CPU itself; if you’re transferring a file over ssh, sftp, or scp, OpenSSL cannot use the hardware crypto acceleration.
Those are all invalid ways to measure network performance, because protocols like SSH or SMB add a lot of overhead. Install iperf3 and measure performance with that. Just start iperf3 -s
on your VF2, then on any other computer run:
iperf3 -c <ip of the vf2>
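If you want the measurement in a script rather than reading the console output, iperf3's `-J` flag emits a JSON report. A small Python sketch (assuming iperf3 is installed and an `iperf3 -s` server is running on the target; the function names are mine):

```python
import json
import subprocess

def parse_iperf3_gbps(report):
    """Extract receiver-side throughput (Gbit/s) from an iperf3 -J report."""
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

def iperf3_gbps(host):
    """Run `iperf3 -c host -J` and return the measured throughput."""
    out = subprocess.run(["iperf3", "-c", host, "-J"],
                         capture_output=True, text=True, check=True).stdout
    return parse_iperf3_gbps(json.loads(out))
```

A healthy gigabit path should report somewhere around 0.94 Gbit/s; a link stuck at 100M will sit near 0.094.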
I have done so, and here are my measurements, which are quite OK:
I'm not interested in benchmark tests that only produce big numbers… I need it for day-to-day operational use in my infrastructure… but if the board imposes more limitations on using it… at full speed… no Samba… no FileZilla… then I think it doesn't interest me. I use these tools every day… I have this infrastructure… and this board must be integrated into this environment… if that integration is not possible… I think I'll use a normal x86_64 motherboard.
One last thing… a normal share between 2 cabled Win10 hosts on my LAN moves files at over 80 MB/s… simple, fast.
If the simplicity is lost… the project makes no sense…
… the VisionFive is a fantastic board, perhaps… but I don't need it…
Well, the explanation has already been given by @cwt
It is totally OK if only a specific application is of interest to you. But I do wonder why you then bought a development board using a new ISA in the first place.
And it is clearly advertised as such: https://www.starfivetech.com/en/site/boards
make VisionFive 2 the best affordable RISC-V development board
You’re right about that. This board isn’t intended as a replacement for any x86 mini-PC or home server, not even close to the Raspberry Pi. It has limitations in both hardware and software. However, the purpose of this board is clearly for RISC-V development. If you try building a program or rebuilding the Linux kernel on this board (with NVMe), you’ll find its performance is surprisingly good, and memory usage during the build process is also efficient.
Hmm, I suppose you have a problem with your Atom board… today, 2 hosts with a Celeron J1900 and Debian 11 run at 82 MB/s… with a simple Samba share…
In the future I'll come back to this project… hoping… for a big step forward…
2 years ago… I used an ASUS EeeBox with an Intel N450 or N455, I don't remember which, with gigabit Ethernet and an old Debian 9… same result as today… 80 MB/s… but it died… after 4 years running 24/7, 365 days a year… bye bye
…“Now my rsync is 2.5x faster, but still slower than 100Mbit/s”…
I share my files at over 11 megabytes per second with the VisionFive 2 (to be clear: stable at 11.8–11.9 megabytes per second)… I saturate the whole 100 Mbit/s bandwidth… and you get less than me?? With this trick/configuration?! OK…
Thank you