10 Gigabit Ethernet For The Pi

When people like Bell and Marconi invented telephones and radios, you have to wonder who they talked to for testing. After all, they had the first device. [Jeff] had a similar problem. He got a 10 gigabit network card working with the Raspberry Pi Compute Module. But he didn’t have any other fast devices to talk to. Simple, right? Just get a router and another network card. [Jeff] thought so too, but as you can see in the video below, it wasn’t quite that easy.

Granted, some — but not all — of the hold-ups were self-inflicted. For example, doing metalwork to mount gear in a 19-inch rack. However, some of the problems were unavoidable, such as a router that has 10 Gbps ports but not enough internal throughput to actually move traffic at that speed. Recabling was also a big task.

A lot of the work revolved around side issues, such as fan noise and adding Starlink to the network, that didn’t really contribute to the speed, but we understand distractions in a project.

The router wasn’t the only piece of gear that can’t handle the full 10 Gbps data stream. The Pi itself exposes a single PCIe 2.0 lane, good for about 4 Gbps of data, so that’s the most you could possibly get, and testing showed the real limit is just shy of 3.6 Gbps. That’s still impressive, and the network card’s offload features helped the Pi’s performance as well.

On a side note, if you ever try to make videos yourself, watching the outtakes at the end of the video will probably make you feel better about your efforts. We all must have the same problems.

If you want to upgrade to 10Gb networking on the cheap, we have some advice. Just be careful not to scrimp on the cables.

29 thoughts on “10 Gigabit Ethernet For The Pi”

  1. I too would like to go to 10 GbE; the trouble is that the switches (that can actually do it) are very expensive..

    The problem is that when 1 GbE became cheap enough to do at home (a long time ago now, i.e. 20+ years), it was about the speed of a hard disk – so copying over the local network was almost as fast as doing it on your own PC.

    Now even spinning rust (let alone an SSD) is a lot faster than 1 GbE, and 10 GbE is too expensive because of the switch.

    It’s just a real shame that there wasn’t something done between the two, i.e. say 5 GbE (which would be able to keep up with a single spinning disk) – that would make NAS and general network stuff way more usable… And it’s getting pretty sad that my external network connection (internet) is now almost the speed of my internal network (at least for downloading)…

    1. Mikrotik sells a 4 port managed switch for about $140, the CRS305. I have one and it’s great, zero complaints.

      Only downside is that it’s SFP-based. 10 gig fiber SFPs are not that expensive ($25 for a pair) and DAC cables aren’t bad either, but if you want to connect them to standard copper those SFPs are super expensive.

      1. I have tested a MikroTik 10G switch (CRS309, IIRC) here in the lab, but I managed to crash/lock it up several times just trying to forward less than 10 Gb/s. Nothing special was configured besides a few VLANs, and configuring expert options like bandwidth throttling (the use case is an NTU for an ISP) will render the device completely useless, as all traffic is then directed through its CPU (which is totally not up to the task).

        The device also feels very cheap and flimsy, and the command line is very unintuitive.

        Perhaps used out of the box with all ports at defaults in a home setting it will work OK-ish, but IMHO it isn’t worth unboxing.

        1. I have one in my home; I wouldn’t use it in an office or professional setting. For home it works great: it supports VLANs, and even supports routing, though not close to 10 Gbps. There are two OSes that will work, SwitchOS and RouterOS; I found SwitchOS to work better, but obviously it has fewer “features”.

          My bet is that your problem was caused by the SFP modules you were using and their power draw, or something like that.

          1. Yes, it might be usable, but 1) it’s SFP (don’t want to change to that) and 2) only 4 ports (unusable even at home).

            And some of the problems mentioned above remind me of early cheap 1 GbE gear – i.e. it was labelled that but couldn’t do much more than 100 Mb.. Mind you, I still see ones today that struggle to do the full 1 GbE in throughput, including a few NAS boxes..

          2. SFPs are fine and do work flawlessly in other equipment, like Cisco or Juniper. I’m sure that is not the problem.

            The problem is that it is too cheap to deliver the throughput you’d expect…

      2. Ubiquiti have a switch in beta with a feature list so similar I assume it’s based on the same chipset. On the downside, the device (USW-Flex-XG) is significantly more expensive, and like all their new gear, can only be managed via their app.

        They also have another switch in beta (USW-Enterprise-8-PoE), which has 8 2.5GbE ports and 2 10Gb SFP+ ports, which could be nice for a roomful of new PCs with builtin 2.5GbE.

    2. Sure, I’ll always take faster if I can get it, but I am curious what you are doing that 10Gb would be so much more “useable” than 1Gb. I’ve been running with my home directory on a 1Gb NAS both at home and work for years now. Access speed never seems to be a bottleneck for me. I’ve even run VMs where the virtual disk was on my NAS. The difference between doing that and moving it to the local hard drive is noticeable but not really a big deal.

      1. 2.5 Gbps seems like the next logical step; many motherboard manufacturers are already using that standard in their higher-end motherboards, and you can get sub-$50 USB NICs for it. The big missing piece is cheaper switches and routers—most are still $100+, whereas I could get decent 1 Gbps unmanaged switches for $20-40.

        1. And what was the initial price of a SOHO (small office/home office) 1 Gbps switch when they first came out? They were definitely more than $100, and were still considered cheap.

    3. As mentioned in the video, take a look at QNAP. I use their QSW-M408S and it works pretty well and isn’t too spendy. It gives you four 10 GbE ports in the form of SFP+. You’ll of course need to spend quite a bit of additional money on SFP+ to Ethernet or fiber adapters. For many people (such as myself), this is sufficient. I can wire my router, NAS, primary rig, and an AP to 10G and use another 1000 Mbps switch for everything else.

      1. I wouldn’t trust anything from QNAP, and definitely NOT anything Internet facing. Their software quality is abysmal. Hardly a month goes by that some new zero day isn’t revealed on their product lines – usually an RCE or authentication bypass.

        Once you start looking into 10 Gbit hardware, don’t bother with anything no-name, nor Ubiquiti (their hardware is losing quality and they’re forcing customers to use “cloud accounts” to configure gear); you’ll regret it. You’ll want enterprise/carrier-grade equipment, because the tolerances on the components are much tighter than at 1 Gb. Even consumer-grade hardware often can’t handle proper 1 Gb throughput over time! You’ll end up spending as much or more going through cheap gear that can’t carry the load until you get to the point where you just buy what you should have bought to begin with.

    4. Look into Brocade switches on the used market. Some have 8+ SFP+ (10 GbE) ports and 24-48 1-gig RJ45 ports, some with PoE. Heck, some have 2-4 40 GbE ports…

      All for $100-$200

      PCIe NICs can be had for $20 or so used, but Thunderbolt adapters are ~$150. A pair of multimode PHYs is $20, and multimode cables are on par with Cat6. But copper PHYs will run you $100-200 a pair depending on 30 m or 80 m range, and the copper PHYs burn way more power.

      You still need a router/modem to talk to the internet and do DNS, DHCP, NAT, firewall, etc… but most likely you already have one…

      That said, my 3-disk RAID 5 spinning rust tops out at 2200 Mbps over the network. iperf did 9900 Mbps.
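For anyone wanting to reproduce that kind of measurement, iperf3 between two hosts is the usual approach (a sketch; `10.0.0.1` is a placeholder for whatever address the receiving machine has):

```shell
# On the receiving machine, start an iperf3 server:
iperf3 -s

# On the sending machine: run for 30 seconds, reporting every 5 seconds.
iperf3 -c 10.0.0.1 -t 30 -i 5

# Add -R to measure the reverse direction without swapping the two roles.
iperf3 -c 10.0.0.1 -t 30 -R
```

Note that iperf measures raw network throughput from memory to memory, which is why it can report near line rate while a disk-backed transfer tops out much lower.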

  2. The RPi4 has PCIe gen 2.0 with a single x1 lane.
    So that’s 5 GT/s raw signalling on the bus, but PCIe 2.0 uses an 8b/10b line code, so for every 8 bits of data transferred across the bus, 10 signalling bits are used. That makes 4 Gbit/second of data the maximum.

    (ref: https://en.wikipedia.org/wiki/PCI_Express_3.0#History_and_revisions )

    Ethernet has packet overheads, so the theoretical maximum throughput would be 3.796 Gbit/second; if they are getting 3.6 Gbit/second of data throughput, then they are close to the maximum possible (~5% less).

    (ref: https://networkengineering.stackexchange.com/questions/19976/trying-to-find-out-exact-tcp-overhead-cost )
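That arithmetic can be sanity-checked in a few lines of Python (a sketch; it assumes standard Ethernet framing of 38 bytes per frame on the wire and 40 bytes of TCP/IPv4 headers with no options, so the exact figure shifts slightly depending on which overheads you count):

```python
# PCIe 2.0 x1: 5 GT/s raw, but 8b/10b coding carries 8 data bits per 10 line bits.
pcie_gbps = 5.0 * 8 / 10          # 4.0 Gbit/s of usable bus bandwidth

# Ethernet overhead per 1500-byte MTU frame:
#   preamble+SFD 8 + header 14 + FCS 4 + interframe gap 12 = 38 bytes on the wire
#   IPv4 header 20 + TCP header 20 = 40 bytes inside the payload
mtu = 1500
efficiency = (mtu - 40) / (mtu + 38)   # TCP goodput fraction, ~0.949

print(f"{pcie_gbps * efficiency:.2f} Gbit/s")  # ~3.80 Gbit/s maximum TCP goodput
```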

    1. Yeah and to be clear, the 3.6 Gbps was only achievable with two optimizations:

      First, enabling PCI Express hierarchy optimization (pci=pcie_bus_perf) in /boot/cmdline.txt — that allows the PCI Express bus to use 512-byte payloads instead of 128-byte, which makes a significant difference for this use case.

      Second, enabling Jumbo Frames (which I feel like is cheating, since that can’t be done on many networks).

      (ref: https://www.jeffgeerling.com/blog/2021/getting-faster-10-gbps-ethernet-on-raspberry-pi )
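For reference, the two tweaks amount to something like the following (a sketch based on the linked blog post; `eth1` is a placeholder for whatever interface name the 10 GbE card shows up as on your system):

```shell
# 1. Append pci=pcie_bus_perf to the kernel command line (cmdline.txt is a
#    single line), which lets PCIe devices use 512-byte payloads instead of 128.
sudo sed -i '1 s/$/ pci=pcie_bus_perf/' /boot/cmdline.txt

# 2. Enable jumbo frames on the 10 GbE interface. Every device on the path
#    must support the larger MTU, which is why this counts as "cheating".
sudo ip link set dev eth1 mtu 9000

# Reboot for the kernel command-line change to take effect.
sudo reboot
```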

      1. With jumbo frames it is different (I assumed a 1500 MTU above); then the maximum data throughput would be 3.9656 Gbit/sec (because you are sending more data per packet, the header overhead is smaller relative to the overall size).

        And 3.6 Gbit/sec of data would be nearly 9% below the maximum possible.
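Plugging a 9000-byte MTU into the same overhead arithmetic shows both cases side by side (a sketch with the same assumptions as before: 38 bytes of framing per frame on the wire, 40 bytes of TCP/IPv4 headers, 4 Gbit/s of usable PCIe bandwidth):

```python
def max_goodput_gbps(mtu, link_gbps=4.0):
    """TCP goodput after Ethernet framing (38 B/frame) and TCP/IPv4 headers (40 B)."""
    return link_gbps * (mtu - 40) / (mtu + 38)

print(f"{max_goodput_gbps(1500):.2f} Gbit/s")   # ~3.80 with standard frames
print(f"{max_goodput_gbps(9000):.2f} Gbit/s")   # ~3.97 with jumbo frames
# The observed 3.6 Gbit/s sits roughly 9% below the jumbo-frame maximum:
print(f"{1 - 3.6 / max_goodput_gbps(9000):.1%}")
```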

  3. So simple: he could take a second Raspberry Pi with a similar 10 GbE interface and connect them with a crossover cable. SOLVED.
    The real question is: is the Pi powerful enough to sustain 10 GbE bandwidth?

        1. YES, the card would negotiate 10 Gb/sec. The card does not care that you can only ever provide it with at most 4 Gb/sec of data; you have a bottleneck in your data path because you are limited to a single PCIe 2.0 x1 lane (5 GT/s raw signalling on the bus, which with the 8b/10b line code is 4 Gbit/sec of data), e.g. https://i.imgur.com/x2v1s5j.png

          You could use two 100 Gbit/sec cards; it is not like they can magically transfer data that you are incapable of providing them with.

      1. I think when you go with surplus or second-hand equipment, actually skipping 10 GbE and going directly to 25, 40, or even 100 GbE costs more or less the same. Not much difference in pricing there.

        If you are willing to pay 5x the price of 1 GbE, you can also pay 7x or 8x the price and get 40 GbE.

        Also, the change from copper to fiber is a single change, and then you are good to go even with 400 GbE.

        With copper we had to upgrade from Cat3 to Cat5 when going from 10 to 100 Mbit, and from 2-pair/4-wire to 4-pair/8-wire when going from 100M to 1G. Now updating to OM3 or OM4 fiber will enable everything all the way up to terabit Ethernet.

  4. To be clear, 99% of the things I did were related to the fact that I kept getting annoyed by some little thing, then let that distract me to the next stage of the project.

    I could’ve just stopped once I had a direct connection between the Pi and my Mac with the Thunderbolt 3 NIC, but where’s the fun in that?

    Biggest annoyance is that the 19″ rack I bought wasn’t deep enough to hold the giant UPS I got, but I can’t complain too much, because it was free and had new batteries!

    The UPS still isn’t even showing one LED on the load side, with two switches, two routers, two access points, two NASes and five Raspberry Pis plugged in, though. I’ll have to keep plugging in more things :D

  5. Marconi didn’t invent radio.

    Maxwell predicted the concept of radio waves, Heinrich Hertz proved it. I’ve always seen it expressed as a lab thing, so right there on the bench, there was a crude transmitter and receiver.
    It was a neat trick, nothing more.

    Marconi took the idea out of the lab and showed it could be practical. Not just an experiment, he wanted to exploit it. Hence for a bit, radio was a Marconi product.
