Not On The Internet

Whenever you need to know something, you just look it up on the Internet, right? Using the search engine of your choice, you type in a couple keywords, hit enter, and you’re set. Any datasheet, any protocol specification, any obscure runtime error, any time. Heck, you can most often find some sample code implementing whatever it is you’re looking for. In a minute or so.

It is so truly easy to find everything technical that I take it entirely for granted. In fact, I had entirely forgotten that we live in a hacker’s utopia until a couple nights ago, when it happened again: I wanted to find something that isn’t on the Internet. Now, to be fair, it’s probably out there and I just need to dig a little deeper, but the shock of not instantly finding the answer to a random esoteric question reminded me how lucky we actually are 99.99% of the time when we do find the answer straight away.

So great job, global hive-mind of über-nerds! This was one of the founding dreams of the Internet, that all information would be available to everyone anywhere, and it’s essentially working. Never mind that we can stream movies or hold teleconferences with people on the other side of the globe – when I want a Python library for decoding Kansas City Standard audio data, it’s at my fingertips. Detailed SCSI specifications? Check.

But what was my search, you ask? Kristina and I were talking about Teddy Ruxpin, and I thought that the specification for the servo track on the tape would certainly have been reverse engineered and well documented. And I’m still sure it is – I was just shocked that I couldn’t instantly find it. The last time this happened to me, it was the datasheet for the chips that make up a Speak & Spell, and it turned out that I just needed to dig a lot harder. So I haven’t given up hope yet.

And deep down, I’m a little bit happy that I found a hole in the Internet. It gives Kristina and me an excuse to reverse engineer the format ourselves. Sometimes ignorance is bliss. But for the rest of those times, when I really want the answer to a niche tech question, thanks everyone!

Building Faster Rsync From Scratch In Go

For a quick file transfer between two computers, SCP is a fine program to use. For more complex, large, or regular backups, however, the go-to tool is rsync. It’s faster, more efficient, and usable in a wider range of circumstances. For all its perks, [Michael Stapelberg] felt that it had one major weakness: it is a tool written in C. [Michael] is philosophically opposed to programs written in C, so he set out to implement rsync from scratch in Go instead.

[Michael]’s path to deciding to tackle this project is a roundabout one. His ISP recently upgraded his Internet connection to 25 Gbit/s, which meant that his custom router was now the bottleneck in his network. To solve that problem he migrated his router to a PC with several 25 Gbit/s network cards. To take full advantage of the speed now theoretically available, he began using a tool called gokrazy, which turns applications written in Go into their own self-contained appliance. That means that instead of installing a full Linux distribution to handle a specific task (a router, for example), the only things loaded on the machine are essentially the Linux kernel and the Go applications themselves.

With new router hardware capable of supporting these speeds and running only software written in Go, the last step was to bring rsync onto the platform to support his tasks on the network. This meant that rsync itself needed to be rewritten from scratch in Go. Once [Michael] completed this final task, he found that his implementation is actually much faster than the original written in C, thanks to the more modern tooling of the Go language and the fact that his router isn’t running all of the cruft associated with a standard Linux distribution.
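
For the curious, the core trick any rsync implementation has to reproduce is the delta-transfer algorithm: one side hashes fixed-size blocks of the file it already has using a cheap “rolling” weak checksum, and the other side slides that checksum over its copy a byte at a time to spot blocks that are already present, so only the differences need to cross the wire. The sketch below (plain Python rather than Go, and greatly simplified compared with either the C original or [Michael]’s implementation) shows the weak checksum and its one-byte roll:

```python
M = 1 << 16  # checksums are kept modulo 2^16, as in the classic rsync paper

def weak_checksum(block):
    # Adler-32-flavoured weak checksum: cheap to compute and, crucially,
    # cheap to update as the window slides forward one byte at a time.
    a = b = 0
    n = len(block)
    for i, byte in enumerate(block):
        a += byte
        b += (n - i) * byte
    return a % M, b % M

def roll(a, b, old_byte, new_byte, n):
    # Slide the n-byte window forward by one byte: drop old_byte, add
    # new_byte, without re-summing the whole block.
    a = (a - old_byte + new_byte) % M
    b = (b - n * old_byte + a) % M
    return a, b

# Sanity check: rolling the window forward matches a fresh computation.
data = b"the quick brown fox jumps over the lazy dog"
n = 16
a, b = weak_checksum(data[:n])
a, b = roll(a, b, data[0], data[n], n)
assert (a, b) == weak_checksum(data[1:n + 1])
```

When the cheap checksum of a sliding window matches a block the other end already has, a stronger hash confirms the match, and only the unmatched data is sent as literal bytes.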

For a software project of this scope, we find [Michael]’s step-by-step process worth taking note of, whatever problem you happen to be tackling. Beyond that, reimplementing a foundational tool like rsync is an involved task on its own, let alone doing it simply to push network speeds beyond what most of us would already consider blazingly fast. We’re leaving out a ton of details on this build, so we definitely recommend checking out his talk in the video below.

Thanks to [sarinkhan] for the tip!


Network Time Protocol On The ESP32

Network Time Protocol (NTP) is one of the best ways to keep networked computers synchronized to the same time. It’s simple and lightweight, and not only lets computers maintain a common time standard, it also lets hardware manufacturers save money: the Raspberry Pi is perhaps the best-known example of a low-cost computer built without the extra expense of a real-time clock (RTC). While the Pi sets up NTP essentially automatically, microcontrollers like the ESP32 don’t, but with a little work it’s possible to configure them to use the same time source.

For this project, the MicroPython implementation for the ESP32 is required. MicroPython is a way of running Python code on microcontrollers and other embedded systems without all of the overhead that Python would normally require. Luckily, NTP support is built right in, so once MicroPython is running on the ESP32 it’s nearly as easy as calling the library. Of course, you will have to make sure there is an Internet connection first, then grab the time, sync it to the machine, and set the time zone.
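
As a rough illustration of those steps, here’s what they might look like in MicroPython on the ESP32; the SSID, password, and UTC offset below are placeholders, and [Bhavesh]’s actual code may differ in the details:

```python
import network
import ntptime
import time

SSID = "my-network"           # placeholder WiFi credentials
PASSWORD = "my-password"
UTC_OFFSET = 5 * 3600 + 1800  # placeholder: UTC+5:30, in seconds

# NTP needs a working Internet connection, so join the WiFi network first.
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect(SSID, PASSWORD)
while not wlan.isconnected():
    time.sleep(0.5)

# ntptime ships with the ESP32 port of MicroPython; settime() sets the
# board's clock to UTC from an NTP server.
ntptime.settime()

# MicroPython has no time zone database, so apply the offset by hand.
now = time.localtime(time.time() + UTC_OFFSET)
print("Local time: {:04d}-{:02d}-{:02d} {:02d}:{:02d}:{:02d}".format(*now[:6]))
```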

For a bonus exercise, the project’s creator [Bhavesh] suggests attempting to handle Daylight Saving Time, although this can be a surprisingly difficult problem to solve. In the meantime, there are a few other ways of getting a clock onto a microcontroller like this one: an RTC module is an obvious choice, but you can also get incredibly accurate time from a GPS module.
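
For what it’s worth, the EU rule at least is mechanical enough to compute by hand: summer time runs from 01:00 UTC on the last Sunday of March to 01:00 UTC on the last Sunday of October. A minimal sketch of that rule (written against CPython’s datetime module, which a stock MicroPython build doesn’t include, so treat it as pseudocode to port):

```python
import datetime

def last_sunday(year, month):
    # March and October both have 31 days, so walk back from the 31st.
    day = datetime.date(year, month, 31)
    while day.weekday() != 6:  # Monday == 0 ... Sunday == 6
        day -= datetime.timedelta(days=1)
    return day

def is_eu_summer_time(utc_now):
    # EU daylight saving runs from 01:00 UTC on the last Sunday of March
    # until 01:00 UTC on the last Sunday of October.
    start = datetime.datetime.combine(last_sunday(utc_now.year, 3), datetime.time(1, 0))
    end = datetime.datetime.combine(last_sunday(utc_now.year, 10), datetime.time(1, 0))
    return start <= utc_now < end

# Example: pick the current offset for Central European Time.
now = datetime.datetime.utcnow()
print("UTC offset: +%d h" % (2 if is_eu_summer_time(now) else 1))
```

The real pain, of course, is that every region has its own rules and changes them from time to time, which is exactly why full time zone databases exist.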

German Experiment Shows Horses Beating Local Internet Connections

These days, we’re blessed with wired and wireless networks that can carry huge amounts of data in the blink of an eye. However, some areas are underprovisioned with bandwidth, such as Schmallenberg-Oberkirchen in Germany. There, reporters ran a test last December to see which would be faster: the Internet, or a horse?

The long and the short of it is that Germany faces issues with disparate Internet speeds across the country. Some areas are well served by high-speed fiber, while others, deemed less important by the free market, struggle on with ancient copper phone lines and consequently experience lower speeds.

Thus, the experiment kicked off from the house of photographer [Klaus-Peter Kappest], who started a transfer of 4.5 GB of photos over the Internet. At the same time, a DVD was handed to messengers on horseback headed for the destination 10 kilometers away. The horses won the day, making the journey in about an hour, while the transfer over [Kappest’s] copper connection was still crawling along, only 61% complete.
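
A quick back-of-envelope check of what those figures imply (assuming a standard 4.7 GB single-layer DVD and roughly an hour for both “transfers”, since neither is stated exactly in the report):

```python
seconds = 1 * 3600  # roughly an hour for both the ride and the upload

# Copper line: 61% of the 4.5 GB photo transfer finished in that hour.
copper_bits = 0.61 * 4.5e9 * 8
print("Copper line: ~%.1f Mbit/s" % (copper_bits / seconds / 1e6))  # ~6.1 Mbit/s

# Horseback sneakernet: one single-layer DVD (4.7 GB) delivered in the same hour.
horse_bits = 4.7e9 * 8
print("Horse + DVD: ~%.1f Mbit/s" % (horse_bits / seconds / 1e6))   # ~10.4 Mbit/s
```

Call it roughly 6 Mbit/s against 10 Mbit/s: the horse wins, just as the reporters found.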

Obviously, it’s a test that can be gamed quite easily; the Internet connection would have won handily over a greater distance. Similarly, we’ve all heard the quote from [Andrew Tanenbaum]: “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”

Notably, [Kappest’s] home actually had a fiber line sitting in the basement, but bureaucracy had stymied his attempts to get it connected. The stunt thus also served as a great way to draw attention to his plight, and to that of others in Germany dealing with similar issues in this digital age.

Top speeds for data transfer continue to rise; an Australian research team set a record last year of 44.2 terabits per second. Naturally, the hard part is getting that technology rolled out across a country. Sound off below with the problems you’ve faced getting a solid connection to your home or office.

Just How Did 1500 Bytes Become The MTU Of The Internet?

[Benjojo] got interested in where the magic number of 1,500 bytes came from, and shared some background on just how and why it seems to have come to be. In a nutshell, the maximum transmission unit (MTU) limits the maximum amount of data that can be transmitted in a single network-layer transaction, but 1,500 is kind of a strange number in binary. For the average Internet user, this under-the-hood stuff doesn’t really affect one’s ability to send data, but it does have an impact from a network-management point of view. Just where did this number come from, and why does it matter?

[Benjojo] looks at a year’s worth of data from a major Internet traffic exchange and shows, with the help of several graphs, that being stuck with a 1,500-byte MTU upper limit has real impact on modern network efficiency and bandwidth usage, because bandwidth spent on packet headers adds up rapidly when roughly 20% of all packets are topping out at the 1,500-byte limit. Naturally, solutions exist to improve this situation, but elegant and effective solutions to the Internet’s legacy problems tend to require instant buy-in and cooperation from everyone at once, meaning they end up going in the general direction of nowhere.
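
To put the overhead argument in rough numbers: every packet carries the same fixed-size headers no matter how large the payload is allowed to be, so a bigger MTU amortizes that cost over more data. A back-of-envelope sketch, assuming plain IPv4 plus TCP with no options and ignoring Ethernet framing:

```python
HEADERS = 20 + 20  # bytes: IPv4 header + TCP header, no options

for mtu in (576, 1500, 9000):
    payload = mtu - HEADERS
    overhead = HEADERS / mtu
    print("MTU %5d: %5d payload bytes, %.1f%% header overhead" % (mtu, payload, overhead * 100))

# MTU   576:   536 payload bytes, 6.9% header overhead
# MTU  1500:  1460 payload bytes, 2.7% header overhead
# MTU  9000:  8960 payload bytes, 0.4% header overhead
```

The 9,000-byte jumbo-frame row is exactly the sort of fix that, as noted above, needs buy-in from everyone on the path at once.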

So where did 1,500 bytes come from? It appears to be a legacy value, originally derived from a combination of hardware limits and the need to choose a size that would play well on shared network segments: small enough not to add too much transmission latency when the segment was busy, but large enough to keep header overhead reasonable. The picture is not entirely complete, though, and [Benjojo] asks that if you have any additional knowledge or insight into the 1,500-byte decision, please share it, because the manuals, mailing list archives, and other context from that era are either disappearing fast or already entirely gone.

Knowledge fading from record and memory is absolutely a thing that happens, but occasionally things get saved instead of vanishing into the shadows. That’s how we got IGNITION! An Informal History of Liquid Rocket Propellants, which contains knowledge and history that would otherwise have simply disappeared.

Can You Code Without Google?

Imagine for a moment that something has taken out your phone line, cell, and fibre connection so you have no internet. For some of you this may even be reality, but go with it and imagine yourself deciding to use your unexpectedly disconnected lockdown time pursuing that code project you always promised yourself. You pull out your laptop and fire up a code editor. Can you write code that works, without the Internet as a handy crib sheet? [Austin Z. Henley] couldn’t, when he tried writing a straightforward web app. He uses it as a hook to muse on the nature of learning, and it’s certainly a thought-provoking subject.

Constantly referring to online knowledge has become an indispensable habit for the engineer and the coder alike. This makes absolute sense, as it provides a reference library many orders of magnitude larger than anything an individual could possibly hold in their own memory.

This holds true whether the resource takes the form of code snippets from StackOverflow or GitHub, or datasheets from TI or Microchip. Even our calculations have moved online, as it’s often much quicker to use an online calculator on a web page to work out, say, an impedance. This is not necessarily a bad thing; instead it’s an enabler: skills that used to take months to master due to slow information access can now be acquired in an afternoon. But it does pose an interesting question: in the Internet age, what is the measure of an expert coder? Is it the ability to produce the code effectively with whatever help is available, or is it a guru-like mastery of the code? Maybe it’s both. If you have the Internet, give us your views in the comments.

Displaying Incoming Server Attacks By Giving Server Logs A Scoreboard

In the server world, it’s a foregone conclusion that ports shouldn’t be exposed to the greater Internet if they don’t need to be. There are malicious bots everywhere that will try to access anything connected to a network, and it’s best just to close unneeded ports off completely. If you have to have a port open, like 22 for SSH, it’ll need to be secured properly and monitored so that the administrator can keep track of it. Usually this is done in a system log and put to the side, but [Nick] wanted a more up-front reminder of just how many attempts were being made to log into his systems.

This build actively monitors attempts to log into his server on port 22 and notifies him via a numerical display and a series of LEDs. It’s based on a Raspberry Pi Zero W housed in a 3D-printed case, and works by interfacing with a program called fail2ban running on the server. fail2ban’s primary job is to block IP addresses that fail a certain number of login attempts on a server, but being FOSS it can be modified for situations like this. With some Python code running on the Pi, it gathers the data fed to it by fail2ban and displays it.
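
The exact plumbing between the server and the Pi is [Nick]’s own, but the gist of pulling numbers out of fail2ban can be sketched in a few lines of Python. A minimal illustration that simply counts “Ban” events in fail2ban’s log (the log path and jail name are common defaults, not details from [Nick]’s build, which feeds its counts to the Pi’s display rather than printing them):

```python
import re

LOG_PATH = "/var/log/fail2ban.log"  # common default location
JAIL = "sshd"                       # the jail watching port 22

def count_bans(path=LOG_PATH, jail=JAIL):
    # fail2ban logs lines along the lines of:
    #   2021-01-02 03:04:05,678 fail2ban.actions [123]: NOTICE [sshd] Ban 203.0.113.7
    pattern = re.compile(r"\[%s\] Ban (\S+)" % re.escape(jail))
    banned = set()
    with open(path) as log:
        for line in log:
            match = pattern.search(line)
            if match:
                banned.add(match.group(1))
    return len(banned)

if __name__ == "__main__":
    print("Unique banned IPs so far:", count_bans())
```

Swapping that print for writes to the Pi’s display (or a small network socket) gets you most of the way to [Nick]’s scoreboard.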

[Nick] was able to see immediate results too. Within 24 hours he saw 1633 login attempts on a server with normal login enabled, which was promptly shown on the display. A video of the counter in action is linked below. You don’t always need a secondary display if you need real-time information on your server, though. This Pi server has its own display built right in to its case.
