Embrace IPv6 Before It’s Too Late?

Many hackers have familiar sayings in their heads, such as “If it ain’t broke, don’t fix it” and KISS (Keep it simple, stupid). Those of us who have been in the field for some time have habits that are hard to break. When it comes to personal networks, simplicity is key, and the idea of transitioning from IPv4 to IPv6 addresses seems crazy. However, with the increasing number of ‘smart’ devices, streaming media gadgets, and personal phones, finding IPv4 space for our IoT experiments is becoming difficult. Is it time to consider embracing IPv6?

The linked GitHub Gist by [timothyham] summarizes the essential concepts a home network admin should understand before making the switch. The first major point is that IPv6 has a vastly larger address space than IPv4, so there is no more hunting for spare addresses; instead, each interface typically carries several IPv6 addresses at once. The 128-bit address is split into a 64-bit prefix assigned by your ISP and a 64-bit interface identifier. Using SLAAC (Stateless Address Autoconfiguration), clients can configure their own addresses. You don’t have to use SLAAC, but it will make life easier. The interface identifier typically remains static, which makes it practical to point a local DNS server at your machines.
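To make the prefix/identifier split concrete, here’s a minimal Python sketch of the classic EUI-64 flavour of SLAAC, where the ISP-delegated /64 prefix forms the top half of the address and the interface identifier is derived from the NIC’s MAC address. Modern hosts often use randomized or stable-privacy identifiers instead, and the prefix and MAC below are made-up examples:

import ipaddress

def eui64_interface_id(mac: str) -> int:
    """Classic EUI-64: split the 48-bit MAC, stuff 0xFFFE in the middle,
    and flip the universal/local bit of the first octet."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui64 = bytes(octets[:3] + [0xFF, 0xFE] + octets[3:])
    return int.from_bytes(eui64, "big")

# Hypothetical values: a /64 prefix delegated by the ISP and a NIC's MAC.
prefix = ipaddress.ip_network("2001:db8:1234:5678::/64")
mac = "52:54:00:12:34:56"

address = ipaddress.IPv6Address(int(prefix.network_address) | eui64_interface_id(mac))
print(address)  # 2001:db8:1234:5678:5054:ff:fe12:3456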


Tunneling TCP By File Server

You want to pass TCP traffic from one computer to another, but there’s a doggone firewall in the way. Can they both see a shared file? Turns out, that’s all you need. Well, that and some software from [fiddyschmitt].

If you think about it, it makes sense. Unix treats most things as a file, so it is pretty easy to listen on a local TCP port and dump the incoming data into a shared file. The other side reads the file and replays the same data to the desired TCP port on its side. Another file handles traffic in the other direction. Of course, the details are a bit more involved than that, but that’s the basic idea.
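Stripped down to one direction, the idea fits in a few lines of Python. This is only an illustrative sketch, not [fiddyschmitt]’s implementation; the real tool adds framing, both directions, multiple connections, and housekeeping like the file purge mentioned below, and the share path here is made up:

import socket
import time

SHARED_FILE = r"\\fileserver\share\tunnel_a_to_b.dat"  # hypothetical shared path

def socket_to_file(listen_port: int) -> None:
    """Machine A: accept one TCP connection and append its bytes to the shared file."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    conn, _ = srv.accept()
    with open(SHARED_FILE, "ab") as f:
        while data := conn.recv(4096):
            f.write(data)
            f.flush()  # make the bytes visible to the other machine promptly

def file_to_socket(target_host: str, target_port: int) -> None:
    """Machine B: tail the shared file and replay its bytes to the target port."""
    out = socket.create_connection((target_host, target_port))
    with open(SHARED_FILE, "rb") as f:
        while True:
            data = f.read(4096)
            if data:
                out.sendall(data)
            else:
                time.sleep(0.05)  # nothing new yet; poll again shortly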

Performance isn’t going to be wonderful, and the files keep growing until the program notices they have passed 10 megabytes, at which point it purges them.

The code is written in C# and there are binaries for Windows and Linux on the release page. The examples show using shared files via Windows share and RDP, but we imagine any sort of filesystem that both computers can see would work. Having your traffic stuffed into a shared file is probably not great for security but, you know, you are already jumping a firewall, so…

Of course, no firewall can beat an air gap. Unless you can control the fans or an LED.

So What’s All This HaLow Long-Range WiFi About Then?

We’re all used to wireless networking, but if there’s one thing the ubiquitous WiFi on 2.4 or 5 GHz lacks, it’s range. Inside buildings, it will be stopped in its tracks by anything more than a mediocre wall, and outside, it can be difficult to connect at any useful rate more than a few tens of metres away without resorting to directional antennas and hope. Technologies such as LoRa provide a much longer range at the expense of minuscule bandwidth, but beyond that, there has been little joy. As [Andreas Spiess] points out in a recent video though, this is about to change, as devices using the so-called HaLow or IEEE 802.11ah protocol are starting to edge into the realm of affordability.

Perhaps surprisingly, he finds ordinary 5 GHz WiFi to be the best performer over a 1 km test, with far higher bandwidth, though we’d say that his use of directional antennas is something of a cheat. Where HaLow does come into its own in his tests is through masonry, with far better penetration across the floors of a building. We think this will translate to better outdoor performance when the line of sight is obstructed.

There’s one more thing he brings to our attention, which seasoned LoRa users may already be aware of: these lower-frequency allocations differ between the USA and Europe, so should you order one for yourself, make sure you get the appropriate model for your continent. Otherwise, we look forward to more HaLow devices appearing and the price falling even further, because we think this will lead to some good work in future projects.


Squeeze Another Drive Into A Full-Up NAS

A network-attached storage (NAS) device is a frequent peripheral in home and office networks alike, yet so often these devices come pre-installed with a proprietary OS which does not lend itself to customization. [Codedbearder] had just such a NAS, a Terramaster F2-221, which, while it could be persuaded to run a different OS, couldn’t do so without an external USB hard drive. Their solution was elegant: a new backplane PCB which took the same space as the original but managed to shoehorn in a small PCI-E solid-state drive.

The backplane plugs into a motherboard connector which resembles a PCI-E slot but carries a pair of SATA interfaces. Some investigation revealed that it also carries a pair of PCI-E lanes, so after some detective work to identify the pinout there was a chance of using those. A new PCB was designed, cleverly fitting an M.2 SSD exactly in the space between two pieces of the chassis, allowing the boot drive to be incorporated without any annoying USB drives. The final version of the board looks for all the world as though it was meant to be there from the start, a truly well-done piece of work.

Of course, if off-the-shelf is too easy for you, you can always build your own NAS.

Homebrew Network Card With No CPU

A typical modern network card has an onboard Ethernet controller which is, of course, a pre-programmed microcontroller. Not only does it do the things required to keep a computer on the network, it can even relieve the primary CPU of certain common tasks required for communicating. But not [Ivan]’s. His homebrew computer, built from seven colorful PCBs, now has an eighth card. You guessed it: that card connects to 10BASE-T Ethernet.

There’s not a microcontroller in sight, although there are RAM chips. Everything else is logic gates, flip flops, and counters. There are a few other function chips, but nothing too large. Does it work? Yes. Is it fast? Um…well, no.

The complete computer.

He can ping others on the network with an 85 ms round trip and serve web pages from his homebrew computer at about 2.6 kB/s. But speed wasn’t the goal here and the end result is quite impressive. He even ported a C compiler to his CPU so he could compile uIP, a networking stack, avoiding the problems of writing his own from scratch.

Some compromises had to be made. The host computer has to do things you would normally expect a network card to handle itself. The MTU is 1024 bytes instead of the more common 1500 bytes, but TCP/IP is designed to cope with differing MTU sizes, which were more common back when more network interfaces looked like this one.
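For a sense of what the smaller MTU costs, here is a quick back-of-the-envelope calculation, assuming plain 20-byte IPv4 and TCP headers with no options:

# Usable TCP payload per packet for the two MTU sizes.
IP_HEADER = 20
TCP_HEADER = 20

for mtu in (1024, 1500):
    mss = mtu - IP_HEADER - TCP_HEADER
    print(f"MTU {mtu}: {mss} bytes of TCP payload per segment")
# MTU 1024: 984 bytes of TCP payload per segment
# MTU 1500: 1460 bytes of TCP payload per segment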

Even on an FPGA these days, you are more likely to grab some “IP” to do your Ethernet controller. Rolling your own from general logic is amazing, and, honestly, the design is simpler than we would have guessed. If you check out [Ivan]’s blog, you can find articles on the CPU design, its ALU, and even a VGA video card, all built from discrete logic. The whole design, including the network card, is up on GitHub.

We love the idea of building a whole computer system from soup to nuts. We wish we had the time. If you need a refresher on what’s really happening with Ethernet, our [Arya Voronova] can help.

How DEC’s LANBridge 100 Gave Ethernet A Fighting Chance

Alan Kirby (left) and Mark Kempf with the LANBridge 100, serial number 0001. (Credit: Alan Kirby)

When Ethernet was originally envisioned, it would use a common, shared medium (the ‘Ether’ part), with transmission and collision resolution handled by the carrier sense multiple access with collision detection (CSMA/CD) method. While effective and cheap, this limited Ethernet to a 1.5 km cable run and a 10 Mb/s transfer rate. As [Alan Kirby] worked at Digital Equipment Corp. (DEC) in the 1980s and 1990s, he saw how competing network technologies, including the Fiber Distributed Data Interface (FDDI) that DEC also worked on, threatened to extinguish Ethernet despite these alternatives being more expensive. The solution here, [Alan] figured, would be store-and-forward switching.

After [Alan] teamed up with Mark Kempf, the two engineers managed to convince DEC management to give them a chance to develop such a switch for Ethernet, which turned into the LANBridge 100. As a so-called ‘learning bridge’, it operated at Layer 2 of the network stack, learning the MAC addresses of the connected systems and forwarding only those frames that were relevant to the other network. This immediately prevented collisions between the connected segments, allowed for long (fiber) runs between bridges, and marked the beginning of Ethernet’s transformation from a shared medium (like WiFi today) into a star-topology network, with each connected system getting its very own Ethernet cable to a dedicated switch port.
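The forwarding decision at the heart of a learning bridge is simple enough to sketch. Here is a toy two-port model in Python; the frame representation and port numbering are our own invention for illustration, not anything from DEC’s design:

from typing import Dict, Optional

class LearningBridge:
    """Toy two-port learning bridge: remember which port each source MAC
    was last seen on, and only forward frames that need to cross over."""

    def __init__(self) -> None:
        self.mac_table: Dict[str, int] = {}  # source MAC -> port it was seen on

    def handle_frame(self, src: str, dst: str, in_port: int) -> Optional[int]:
        """Return the port to forward to, or None to filter (drop) the frame."""
        self.mac_table[src] = in_port       # learn where the sender lives
        out_port = self.mac_table.get(dst)
        if out_port == in_port:
            return None                     # destination is on the same segment: filter
        if out_port is None:
            return 1 - in_port              # unknown destination: flood to the other port
        return out_port                     # known destination on the other segment

bridge = LearningBridge()
print(bridge.handle_frame("aa:aa", "bb:bb", in_port=0))  # unknown dst -> forwarded to port 1
print(bridge.handle_frame("bb:bb", "aa:aa", in_port=0))  # both on port 0 -> None (filtered)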

On Cloud Computing And Learning To Say No

Do you really need that cloud hosting package? If you’re just running a website — no matter whether large or very large — you probably don’t and should settle for basic hosting. This is the point that [Thomas Millar] argues, taking the reader through an example of a big site like Business Insider, and their realistic bandwidth needs.

From a few stories on Business Insider, the HTML itself comes to about 75 kB compressed, so for their approximately 200 million visitors a month they’d churn through 30 TB of bandwidth for the HTML, assuming two articles read per visitor.
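A quick back-of-the-envelope check reproduces both that monthly total and the per-second rate [Thomas] works with (all figures are the article’s rough estimates, not measurements):

# Rough traffic estimate using the article's round numbers.
visitors_per_month = 200_000_000
articles_per_visitor = 2
html_bytes_compressed = 75 * 1_000  # ~75 kB of compressed HTML per article

monthly_bytes = visitors_per_month * articles_per_visitor * html_bytes_compressed
seconds_per_month = 30 * 24 * 3600

print(f"{monthly_bytes / 1e12:.0f} TB of HTML per month")         # ~30 TB
print(f"{monthly_bytes / seconds_per_month / 1e6:.1f} MB/s avg")  # ~11.6 MB/s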

This comes down to about 11 MB/s of HTML, which can be generated dynamically even with slow interpreted languages, or, as [Thomas] says, would allow the world’s websites to be hosted on a system featuring a single 192-core AMD Zen 5-based server CPU. So what’s the added value here? A reduction in latency and, of course, increased redundancy from having the site served from 2-3 locations around the globe. Rather than falling into the trap of ‘edge cloud hosting’ and the latency of inter-datacenter calls, databases should ideally be located on the same physical hardware and synchronized between datacenters.

In this scenario [Thomas] also sees no need for Docker, scaling solutions, or virtualization, massively cutting down on costs and complexity. For those among us who run large websites (in the cloud or not): do you agree or disagree with this notion? Feel free to sound off in the comments.