Building Faster Rsync From Scratch In Go

For a quick file transfer between two computers, SCP is a fine program to use. For more complex, large, or regular backups, however, the go-to tool is rsync. It’s faster, more efficient, and usable in a wider range of circumstances. For all its perks, [Michael Stapelberg] felt that it had one major weakness: it is a tool written in C. [Michael] is philosophically opposed to programs written in C, so he set out to implement rsync from scratch in Go instead.

[Michael]’s path to deciding to tackle this project is a complicated one. His ISP recently upgraded his internet connection to 25 Gbit/s, which meant that his custom router was now the bottleneck in his network. To solve that problem, he migrated his router to a PC with several 25 Gbit/s network cards. To take full advantage of the speed now theoretically available, he began using a tool called gokrazy, which turns applications written in Go into their own appliance. That means that instead of installing a full Linux distribution to handle specific tasks (like routing, for example), the only things loaded on the computer are essentially the Linux kernel, the Go compiler and libraries, and the Go application itself.

With router hardware capable of supporting these speeds, and running only software written in Go, the last step was to bring rsync to the tasks on his network, which meant rebuilding rsync itself from scratch in Go. Once [Michael] completed this final task, he found that his implementation is actually much faster than the version written in C, thanks to the modernization found in the Go language and the fact that his router isn’t running all of the cruft associated with a standard Linux distribution.

For a software project of this scope, we find [Michael]’s step-by-step process worth taking note of, whatever problem you’re tackling yourself. Not only that, reimplementing a foundational tool like rsync is an involved task on its own, let alone doing it simply to push network speeds beyond what most of us would already consider blazingly fast. We’re leaving out a ton of details on this build, so we definitely recommend checking out his talk in the video below.

Thanks to [sarinkhan] for the tip!

Continue reading “Building Faster Rsync From Scratch In Go”

AI Attempts Converting Python Code To C++

[Alexander] created codex_py2cpp as a way of experimenting with Codex, an AI intended to translate natural language into code. [Alexander] had slightly different ideas, however, and used it to play with automagically converting Python into C++. It’s not really intended to create robust code conversions, but as far as experiments go, it’s pretty neat.

The program works by reading a Python script as an input file, setting up a few parameters, then making a request to OpenAI’s Codex API for the conversion. It then attempts to compile the result. If compilation is successful, then hopefully the resulting executable actually works the same way the input file did. If not? Well, learning is fun, too. If you give it a shot, maybe start simple and don’t throw it too many curveballs.
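Mechanically, the back half of that loop is easy to picture. Here is a minimal sketch in C++ of just the compile-and-run step; the filenames and compiler flags are our own assumptions, not [Alexander]'s actual code, which also handles the Codex API request and its parameters:

```cpp
#include <cstdlib>
#include <iostream>
#include <string>

int main() {
    // Hypothetical filenames; the real tool derives them from the input script.
    std::string src = "translated.cpp";  // what came back from the Codex API
    std::string bin = "./translated";

    // Step 1: see whether the AI's output is even valid C++.
    std::string compile = "g++ -O2 -o " + bin + " " + src;
    if (std::system(compile.c_str()) != 0) {
        std::cerr << "conversion did not compile\n";
        return 1;
    }

    // Step 2: run it. Whether it behaves like the original Python is a
    // separate question, best answered by diffing captured output.
    return std::system(bin.c_str());
}
```

A compile is a cheap smoke test; comparing the executable's output against the Python original on the same input is where the real verification (and the real difficulty) lives.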

Codex is an interesting idea, and this isn’t the first experiment we’ve seen that plays with the concept of using machine learning in this way. We’ve seen a project that generates Linux commands based on a verbal description, and our own [Maya Posch] took a close look at GitHub Copilot, a project high on promise and concept, but — at least at the time — considerably less so when it came to actual practicality or usefulness.

Things Are Getting Rusty In Kernel Land

There is gathering momentum around the idea of adding Rust to the Linux kernel. Why exactly is that a big deal, and what does this mean for the rest of us? The Linux kernel has been just C and assembly for its entire lifetime. A big project like the kernel has a great deal of shared tooling built around making its languages work, so adding another one is quite an undertaking. There’s also the project culture that has developed around the language choice. So why exactly are the grey-beards of kernel development even entertaining the idea of adding Rust? To answer in a single line, it’s because C was designed in 1971 to run on the minicomputers at Bell Labs. If you want to shoot yourself in the foot, C will hand you the loaded firearm.

On the other hand, if you want to write a kernel, C is a great language for doing low-level coding. Direct memory access? Yep. Inline assembly? Sure. Runs directly on the metal, with no garbage collection or virtual machines in the way? Absolutely. But all the things that make C great for kernel programming also make C dangerous for kernel programming.

Now I hear your collective keyboards clacking in consternation: “It’s possible to write safe C code!” Yes, yes it is possible. It’s just very easy to mess up, and when you mess up in a kernel, you have security vulnerabilities. There are also some things that are objectively terrible about C, like undefined behavior. C compilers do their best to do the right thing with cursed code like i++ + i++; or a[i] = i++;. But that’s almost certainly not going to do what you want it to, and even worse, it may sometimes do the right thing.
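If you want to see the problem for yourself, here's a tiny demonstration (our own example, not from the kernel discussion) you can feed to a couple of different compilers:

```cpp
#include <cstdio>

int main() {
    int i = 1;
    // Undefined behavior (in C and C++ alike): i is modified twice with no
    // sequencing between the increments. Different compilers, or the same
    // compiler at different optimization levels, may legitimately disagree
    // on what gets printed, and none of them are wrong.
    int x = i++ + i++;
    std::printf("%d\n", x);
}
```

GCC will flag this with -Wsequence-point (enabled by -Wall); Rust sidesteps the question entirely, as it doesn't even have a ++ operator.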

Rust seems to be gaining popularity. There are some ambitious projects out there, like rewriting coreutils in Rust, and many other standard applications are getting a Rust rewrite. It was fairly inevitable that Rust developers would start to ask: could we invade the kernel next? This was pitched at a Linux Plumbers Conference, and the mailing list response was cautiously optimistic. If Rust could be added without breaking things, and without losing the very things that make Rust useful, then yes, it would be interesting. Continue reading “Things Are Getting Rusty In Kernel Land”

An ATTiny board that one of the students developed for this project, etched on single-sided FR4.

Electronics And C++ Education With An ATTiny13

When [Adam, HA8KDA] is not busy with his PhD studies, he mentors a group of students interested in engineering. To teach them a wide range of topics, he set out to build a small and entertaining embedded project as they watch and participate along the way. With this LED-adorned ATTiny13A project, [Adam] demonstrated schematic and PCB design, then taught C++ basics and intricacies – especially when it comes to building low-footprint software – and tied it all together into a real-world device students could take home after the project. His course went way beyond the “Hello world”s we typically expect, and some of us can only wish for a university experience like this.

He shares the PCB files and software with us, but also talks about the C++20 framework he’s developed for this ATTiny. The ATTiny13A is very cheap, and also very limited – you get 1K of ROM and 64 bytes of RAM. This framework lets you make good use of it, providing the basics like GPIO wiggling, but also things like low-power operation hooks, soft PWM with optional multi-phase operation support, and EEPROM access. Students could write their own animations for this device, and he includes them in the repo, too!

In educational projects, it pays to keep code direct and clean, cruft-less and accessible to students. These are the things you can only achieve when you truly understand the tools you’re working with, which is the perfect position for teaching about them! [Adam] intends to show that C++ is more than suitable for low-resource devices, and tells us about the EEPROM class code he wrote – compiling into the same number of instructions as an Assembly implementation and consuming the same amount of RAM, while providing compile-time checks and fail-safe syntax.
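To illustrate the kind of zero-cost abstraction [Adam] is talking about, here is our own minimal sketch (not code from his framework) of a pin driver whose pin number is a template parameter. Everything resolves at compile time, so each call boils down to a single sbi/cbi-style instruction, exactly what you'd write in assembly:

```cpp
#include <avr/io.h>
#include <stdint.h>

// Compile-time pin abstraction for the ATtiny13A (which only has PORTB).
// The bit number is a template parameter, so the compiler can emit single
// bit-set/bit-clear instructions with no call or lookup overhead.
template <uint8_t Bit>
struct Pin {
    static void make_output() { DDRB  |=  (1 << Bit); }
    static void high()        { PORTB |=  (1 << Bit); }
    static void low()         { PORTB &= ~(1 << Bit); }
    static void toggle()      { PINB   =  (1 << Bit); }  // writing 1 to PINx toggles the pin on AVR
};

using Led = Pin<3>;  // hypothetical LED on PB3

int main() {
    Led::make_output();
    for (;;) {
        Led::toggle();
        for (volatile uint16_t i = 0; i < 20000; ++i) {}  // crude busy-wait delay
    }
}
```

Built with avr-g++ -mmcu=attiny13a -Os, the loop body should disassemble to the same handful of instructions you'd write by hand, which is the whole point.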

We’ve talked about using C++ on microcontrollers before, getting extra compile-time features without overhead, and this project illustrates the concept well. [Adam] asks us all, and especially our fellow C++ wizards, for our opinions on the framework he designed. Could you achieve even more with this simple hardware – make the code more robust, clean, have it do more within the limited resources?

What could you build with an ATTiny13, especially with such a framework? A flashy hairclip wearable, perhaps, or a code-learning RF-remote-controlled outlet. We’ve also seen a tiny camera trigger for endurance races, a handheld Flappy Bird-like console, and many more!

Free Your Pi With This Bare Metal Programming Environment

[Rene Strange] graced these fair pages a short while ago with a sweet Raspberry Pi software-based poly synth, with a tantalising reference to it being a bare metal application. So now we’ll look into circle, the bare metal programming environment it is based upon. The platform consists of a large set of C++ classes for accessing the hardware, and for handling task creation and scheduling in its cooperative multitasking, multicore environment. Supporting all Raspberry Pi boards from version 2 onwards (not including the Pico!) in both 32-bit and 64-bit flavours, the environment is pretty complete. Classes are provided for USB, networking, and FatFs, as well as more mundane tasks such as dealing with interrupts. On top of these classes there is a pile of application-specific libraries, covering functions such as display interfacing, GUIs using a variety of frameworks, and some more esoteric applications such as interfacing to a Pico, and even sending the system log to a remote web browser!
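For a sense of what those classes are saving you from, here is what bare metal looks like without them. This is not circle code, just our own minimal hand-rolled sketch that blinks a hypothetical LED on GPIO 21 by poking the BCM2837's registers directly (the 0x3F000000 peripheral base holds for the Pi 2/3; the Pi 4 differs), and it still needs the usual boot stub and linker script that circle would otherwise provide:

```cpp
#include <stdint.h>

// BCM2837 (Pi 2/3) GPIO register block. GPFSEL2 configures GPIOs 20-29;
// FSEL21 occupies bits 3-5, and function 0b001 makes the pin an output.
constexpr uintptr_t GPIO_BASE = 0x3F200000;
auto *const GPFSEL2 = reinterpret_cast<volatile uint32_t *>(GPIO_BASE + 0x08);
auto *const GPSET0  = reinterpret_cast<volatile uint32_t *>(GPIO_BASE + 0x1C);
auto *const GPCLR0  = reinterpret_cast<volatile uint32_t *>(GPIO_BASE + 0x28);

extern "C" void kernel_main() {
    *GPFSEL2 = (*GPFSEL2 & ~(7u << 3)) | (1u << 3);   // GPIO21 as output
    for (;;) {
        *GPSET0 = 1u << 21;                           // LED on
        for (volatile int i = 0; i < 500000; ++i) {}  // crude delay
        *GPCLR0 = 1u << 21;                           // LED off
        for (volatile int i = 0; i < 500000; ++i) {}
    }
}
```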

Classes and libraries, however, don’t always help by themselves, which is where the 42 (yes, we know) code examples come in very handy. They’ve provided example applications for some fun stuff, like drawing Mandelbrot fractals to the display, as well as some of the more mundane tasks we all have to deal with, such as getting that pesky DMA controller to play nice with the SPI hardware. All in all, this looks like a great set of tools for taking full advantage of some fairly beefy hardware for your next embedded project that needs plenty of resources, but not all that unnecessary operating system stuff.

Perhaps not quite as complete as circle, but we’ve seen a fair few Raspberry Pi bare metal projects over the years, like the Nerdsynth, based on the Pi Zero, and this neat little bare metal assembly language clone of Star Fox.

Thanks [Ruhan] for the tip!

Header: Aryan Patidar, CC BY 4.0/Evan-Amos, Public domain.

How Did We Get To The Speed Of Light?

Every high school physics student knows c, the speed of light: 3 x 10^8 metres per second. More advanced or more curious students will know that this is an approximation, and that the officially accepted figure of 299,792,458 metres per second rests upon the definition of the second, which is itself derived from a resonance of the caesium atom.

Galileo Galilei, whose presence in this story should come as no surprise. Justus Sustermans, Public domain.

But for those who are really curious about measuring the speed of light the question remains: Just how did we arrive at that figure and how long have we been measuring it? The answer contains some surprises, and some exceptionally clever scientific thought and experimentation over the centuries.

The nature of light and whether it had a speed at all had been puzzling philosophers and scientists since antiquity, but the first experiments performed in an attempt to measure it were, you will not be surprised to hear, performed by Galileo sometime in the early 17th century. His experiment involved observing assistants uncovering lanterns at known distances, but his observations failed to arrive at a figure.

Later that century, in 1676, the first numerical estimate of the speed of light was made by the Danish astronomer Ole Rømer, who observed an apparent variation in the period of one of Jupiter’s moons depending upon whether the Earth was approaching it or moving away from it. From this he was able to estimate the time taken for light to cross the Earth’s orbit, and from that the mathematician Christiaan Huygens produced a figure of 220,000,000 metres per second.

Spinning Cogs And Mirrors: Time Of Flight

The mile-long evacuated tube used in Michelson’s time-of-flight experiment. H. H. Dunn, Public domain.

The experiments with which we will perhaps be the most familiar are the so-called time-of-flight measurements, which take Galileo’s idea of observing the delay as light travels over a distance, and bring to it ever higher precision. This was first done in the middle of the 19th century by the French physicist Hippolyte Fizeau, who reflected a beam of light from a mirror several kilometres away, using a toothed wheel to chop it into pulses. The wheel could be spun ever faster, until during the light’s round trip from wheel to mirror and back it advanced by half the spacing between teeth, and the returning beam was eclipsed by the next tooth. His calculation of 313,300,000 metres per second was successively improved upon through the work of a succession of others including Léon Foucault, culminating in the series of experiments by the American physicist Albert A. Michelson in the 1920s. Michelson’s final figure stood at 299,774,000 metres per second, measured through a multi-path traversal of a mile-long evacuated tube in the California desert. In the second half of the century the techniques shifted to laser interferometry, and in the quest to define the SI units in terms of physical constants, eventually to the definition mentioned in the first paragraph.
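As a worked example, take the commonly quoted values for Fizeau's setup (treat these as textbook figures rather than something from this article): a mirror d = 8,633 m away, a wheel of N = 720 teeth, and a first eclipse at roughly f = 12.6 revolutions per second. The round trip takes 2d/c, and the eclipse happens when the wheel turns through half a tooth period in that time:

\[
\frac{2d}{c} = \frac{1}{2Nf}
\quad\Rightarrow\quad
c = 4dNf = 4 \times 8633 \times 720 \times 12.6 \approx 3.13 \times 10^{8}\ \mathrm{m/s}
\]

That lands within about five percent of the modern value, which is not bad at all for spinning brass and 19th-century optics.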

The most fascinating part of the story probably encapsulates the essence of scientific discovery: while arriving at a result takes the work of many scientists building upon one another, it can often then be rendered into a form that can be understood by a student who hasn’t had to pass through all that effort. We could replicate Fizeau’s and Michelson’s experiments with a pulse generator, laser diode, and oscilloscope, which, while of little scientific value nearly a century after Michelson’s evacuated tube, is still immensely cool. Has anyone out there given it a try?
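The arithmetic for such a bench-top replication is friendlier than you might expect. Take a mirror an (arbitrarily chosen) 15 metres away:

\[
t = \frac{2d}{c} = \frac{2 \times 15\ \mathrm{m}}{3 \times 10^{8}\ \mathrm{m/s}} = 100\ \mathrm{ns}
\]

A 100 ns delay between the outgoing pulse and its returning echo is comfortably resolvable on even a modest oscilloscope.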

Header image: Tommology, CC BY-SA 4.0.

Linux Fu: Fusing Hackaday

Unix and, by extension, Linux, has a mantra of making everything possible look like a file. Files, of course, look like files. But devices, network sockets, and even system information also show up as things that appear to be files. There are plenty of advantages to doing that, since you can use all the nice tools like grep and find to work with files. However, making your own programs expose a filesystem can be hard. Filesystem code traditionally works at the kernel module level, where mistakes can wipe out lots of things and debugging is difficult. Fortunately, there is FUSE — the Filesystem in Userspace library — that allows you to write more or less ordinary code and expose anything you want as a file system. You’ve probably seen FUSE used to mount, say, remote drives via ssh or Dropbox. We’ve looked at FUSE before, even for Windows.

What’s missing, naturally, is the Hackaday RSS feed, mountable as a normal file. And that’s what we’re building today.

Writing a FUSE filesystem isn’t that hard, but there are a lot of tedious jobs. You essentially have to provide callbacks that FUSE uses to do things when the operating system asks for them. Open a file, read a file, list a directory, etc. The problem is that for some simple projects, you don’t care about half of these things, but you still have to provide them.
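To make that concrete, here is a minimal read-only filesystem written against the raw libfuse 3 API. It's our own stripped-down sketch of the callback pattern, not the code from this article, and it serves one fixed file where the real program serves fetched RSS items:

```cpp
#define FUSE_USE_VERSION 31
#include <fuse3/fuse.h>
#include <algorithm>
#include <cerrno>
#include <cstring>
#include <string>

// One fixed file stands in for the RSS entries the real program serves.
static const std::string kPath = "/frontpage";
static const std::string kBody = "placeholder for a Hackaday article\n";

static int fs_getattr(const char *path, struct stat *st, fuse_file_info *) {
    std::memset(st, 0, sizeof(*st));
    if (std::strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;   // the root directory
        st->st_nlink = 2;
    } else if (kPath == path) {
        st->st_mode = S_IFREG | 0444;   // a read-only regular file
        st->st_size = kBody.size();
        st->st_nlink = 1;
    } else {
        return -ENOENT;
    }
    return 0;
}

static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                      off_t, fuse_file_info *, fuse_readdir_flags) {
    if (std::strcmp(path, "/") != 0) return -ENOENT;
    const auto none = static_cast<fuse_fill_dir_flags>(0);
    fill(buf, ".", nullptr, 0, none);
    fill(buf, "..", nullptr, 0, none);
    fill(buf, kPath.c_str() + 1, nullptr, 0, none);  // name without leading '/'
    return 0;
}

static int fs_read(const char *path, char *out, size_t size, off_t off,
                   fuse_file_info *) {
    if (kPath != path) return -ENOENT;
    if (static_cast<size_t>(off) >= kBody.size()) return 0;  // past EOF
    size_t n = std::min(size, kBody.size() - static_cast<size_t>(off));
    std::memcpy(out, kBody.data() + off, n);
    return static_cast<int>(n);
}

int main(int argc, char *argv[]) {
    fuse_operations ops = {};   // every callback we don't care about stays null
    ops.getattr = fs_getattr;
    ops.readdir = fs_readdir;
    ops.read    = fs_read;
    return fuse_main(argc, argv, &ops, nullptr);
}
```

Build it with g++ and pkg-config --cflags --libs fuse3, point it at an empty mount point, and cat mountpoint/frontpage does the rest (fusermount3 -u unmounts it). Even this toy needs three callbacks plus stat bookkeeping, which is exactly the tedium in question.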

Luckily, there are libraries that can make it a lot easier. I’m going to show you a simple C++ program that can mount your favorite RSS feed (assuming your favorite one is Hackaday, of course) as a file system. Granted, that’s not amazing, but it is kind of neat to be able to grep through the front page stories from the command line or view the last few articles using Dolphin. Continue reading “Linux Fu: Fusing Hackaday”