So if the aim is to create a fast-to-build, minimal kernel for testing, why include extra things which you (the engineers) aren’t going to care about?
Think about it this way: you are producing cars to test impacts and crumple zones in high-speed collisions. Do you think they would put a full, expensive metallic paint job on them? No, that’s a waste of time and money.
I agree; this board wasn’t originally designed to be a router or firewall (otherwise, it would have more than two Ethernet ports). The more features you enable in the kernel configuration, the longer the compilation time becomes. So, if they can provide a minimal working kernel with all or most drivers for the hardware, why not simply offer that and allow us developers to rebuild or customize it according to our individual needs?
I very much agree with your analysis, but being pedantic…
Many firewalls just have two ports; that’s what perimeter (edge) firewalls are all about. Firewalls built into routers are mostly a consumer thing; professional networks tend to do the firewalling before the router.
With only two network ports you can still shuffle packets about to increase throughput (e.g. https://www.benzedrine.ch/ackpri.html - scroll down to the before/after throughput graph - nearly a perfect example of the old adage “a picture is worth a thousand words”).
But if everyone on the entire planet prioritised empty TCP ACK packets and all internet pipes were fully saturated, doing so would not improve throughput. In practice, though, the links between sites and countries on the Internet are not under full load at all times, and hardly anyone moves TCP ACKs to the front of their packet queues, so it does work, and works really well.
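The effect described above can be illustrated with a toy Python sketch. This is not pf or any real queueing discipline; all names are mine, and the model is deliberately crude: a saturated uplink sends one packet per tick, and we compare how long a download’s empty ACKs sit behind bulk upload packets under plain FIFO versus an ACK-first queue.

```python
from collections import deque

def drain(policy, packets):
    """Return the departure tick of each ACK under the given policy."""
    departures = []
    tick = 0
    if policy == "fifo":
        q = deque(packets)
        while q:
            tick += 1
            if q.popleft() == "ack":
                departures.append(tick)
    else:  # "ack_first": empty ACKs always jump ahead of bulk packets
        bulk, acks = deque(), deque()
        for p in packets:
            (acks if p == "ack" else bulk).append(p)
        while acks or bulk:
            tick += 1
            if acks:
                acks.popleft()
                departures.append(tick)
            else:
                bulk.popleft()
    return departures

# Interleave 50 bulk upload packets with 10 download ACKs.
packets = (["bulk"] * 5 + ["ack"]) * 10
fifo = drain("fifo", packets)
prio = drain("ack_first", packets)
print(fifo[-1], prio[-1])  # -> 60 10
```

The last ACK leaves the FIFO queue at tick 60 but the priority queue at tick 10, which is why the sender’s ACK clock, and hence the download, speeds up so dramatically in the before/after graphs linked above.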
I agree with both of you. In the past I configured a firewall with only 2 Ethernet ports, say WAN and LAN, and several times I blocked myself out of the firewall with no way to fix it unless I physically connected a console, or a monitor and keyboard, to the firewall. (Maybe it was just my stupidity at a young age!)
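A common safeguard against exactly this lockout is the “commit-confirmed” pattern, which some routers offer natively (e.g. JunOS’s `commit confirmed`): apply the new ruleset, and roll back automatically unless the change is confirmed before a timer fires. Here is a toy Python sketch of the idea; the `Firewall` class and all names are mine, not any real firewall API.

```python
import threading
import time

class Firewall:
    def __init__(self, rules):
        self.rules = rules
        self._timer = None

    def apply_with_rollback(self, new_rules, timeout):
        old_rules = self.rules
        self.rules = new_rules
        # Start a dead-man's switch that restores the old ruleset.
        self._timer = threading.Timer(timeout, self._rollback, [old_rules])
        self._timer.start()

    def _rollback(self, old_rules):
        # Nobody confirmed in time: assume we locked ourselves out.
        self.rules = old_rules

    def confirm(self):
        # Still reachable over SSH: keep the new ruleset.
        self._timer.cancel()

fw = Firewall("allow ssh from lan")
fw.apply_with_rollback("block everything", timeout=0.1)
# We "locked ourselves out", so confirm() is never called...
time.sleep(0.2)
print(fw.rules)  # -> allow ssh from lan
```

The shell-script equivalent is the old trick of scheduling a rule flush with `at` or `sleep` before touching a remote firewall, and cancelling it once you confirm you can still log in.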
It’s called link aggregation, or bonding. I have configured many systems to do it over many years, but these were typically rack servers. (*)
I’m not entirely sure that the VF2, with its 2x gigabit interfaces, actually has enough performance to hit the limit. It might be useful for a NAS or a high-availability system, but I’m struggling to find a real use case for this outside of the lab.
(*) at my last job we had several 96-core / 396 GB servers that needed to talk to a pair of boxes with only 24 compute cores, but 512 GB of RAM and 256 GPUs (a real-time render and stream farm in a datacenter)… With those we were bonding 4 fibre channels directly, plus another 2 via a router, to get the required bandwidth.
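One subtlety of bonding worth noting: with the usual per-flow hashing policies, a single TCP stream always lands on one member link, so only many parallel flows see the aggregate bandwidth. The sketch below is modelled loosely on Linux bonding’s `xmit_hash_policy=layer3+4` behaviour, but the hash function (`crc32`) and all names are mine for illustration, not the kernel’s actual implementation.

```python
from zlib import crc32

def pick_link(src, dst, sport, dport, n_links):
    """Hash the flow 4-tuple onto one of the bonded member links."""
    key = f"{src}|{dst}|{sport}|{dport}".encode()
    return crc32(key) % n_links

# The same flow always hashes to the same link, so one TCP stream
# can never exceed a single link's bandwidth...
one_flow = pick_link("10.0.0.1", "10.0.0.2", 40000, 443, 4)

# ...but many distinct flows (different source ports here) spread
# across the bond and can use the aggregate capacity.
links = {pick_link("10.0.0.1", "10.0.0.2", 40000 + i, 443, 4)
         for i in range(100)}
print(one_flow, sorted(links))
```

That is why bonding helps a render farm with many parallel transfers far more than it helps a single bulk copy between two hosts.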
There is a reason why all high-end switching gear has serial ports. I know I’ve done it at least once, and many things that are far, far worse, but always by accident, never malicious. And I have learned a lot; when things are broken is usually when I learn the most.
Bonding is one method to increase throughput, but it’s not the example I gave above, which was about increasing download throughput from an ISP while the uplink is fully saturated.
Yep, I’ve been in a datacenter at 3am frantically plugging a serial cable into 50k worth of router, holding my breath and cycling the power.
Most kit these days also has a physically separate management network, air-gapped from the data network. The management console and utilities in this class of machine allow SSH-protected access to the console and the BIOS settings, plus power cycling and environmental sensing. Very sophisticated, very powerful, and very expensive…
I know… I was just keeping it simple; this isn’t really the right forum for HS and HA networking tutorials.