Optimizing Linux Pipes

In CPU design, there is Amdahl’s law. Simply put, it means that if some process accounts for 10% of your execution time, optimizing it can’t improve things by more than about 10%. Common sense, really, but it illustrates the importance of knowing how fast or slow various parts of your system are. So how fast are Linux pipes? That’s a good question and one that [Mazzo] sets out to answer.
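
If you want to put numbers on it, the law is a single line of arithmetic. Here’s a quick sketch (a minimal illustration with made-up percentages, not anything from the original article):

```c
#include <stdio.h>

/* Amdahl's law: overall speedup when a fraction p of the runtime
 * is accelerated by a factor s. */
static double amdahl(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void) {
    /* Even an infinite speedup of a 10% slice caps out around 1.11x. */
    printf("10%% of runtime, 2x faster:    %.3fx overall\n", amdahl(0.10, 2.0));
    printf("10%% of runtime, 1000x faster: %.3fx overall\n", amdahl(0.10, 1000.0));
    return 0;
}
```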

The inspiration was a highly-optimized fizzbuzz program that clocked in at over 36 GB/s on his laptop. Is that a common speed? Nope. A simple program using pipes on the same machine turned in not quite 4 GB/s. What accounts for the difference?
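
To get a feel for what a naive pipe benchmark looks like, here is a simplified sketch: one process writes into a pipe as fast as it can while the other reads and times it. This is our illustration, not [Mazzo]’s harness, and details like the buffer size will swing the numbers considerably:

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define BUF_SIZE (1 << 16)        /* 64 KiB per syscall */
#define TOTAL    (1LL << 30)      /* push 1 GiB through the pipe */

int main(void) {
    int fds[2];
    static char buf[BUF_SIZE];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {            /* child owns the write end */
        close(fds[0]);
        for (long long sent = 0; sent < TOTAL; sent += BUF_SIZE)
            if (write(fds[1], buf, BUF_SIZE) < 0) break;
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                /* parent reads and times it */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long long total = 0;
    ssize_t n;
    while ((n = read(fds[0], buf, BUF_SIZE)) > 0)
        total += n;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.2f GB/s\n", total / secs / 1e9);
    return 0;
}
```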

Continue reading “Optimizing Linux Pipes”

Linux Fu: Easy Widgets

Here’s a scenario. You have a microcontroller that reads a number of items — temperatures, pressures, whatever — and you want to have a display for your Linux desktop that sits on the panel and shows you the status. If you click on it, you get expanded status and can even issue some commands. Most desktops support the notion of widgets, but developing them is a real pain, right? And even if you develop one for KDE, what about the people using Gnome?

Turns out there is an easy answer and it was apparently inspired by, of all things, a tool from the Mac world. That tool was called BitBar (now XBar). That program places a widget on your menu bar that can display anything you want. You can write any kind of program you like — shell script, C, whatever. The output printed from the program controls what appears on the widget using a simple markup-like language.
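
Since all that matters is what the program prints, a plugin can be as simple as this sketch in C (the sensor values are faked, and attributes like color and bash follow the XBar/Argos markup conventions; check each project’s docs for the exact keys it supports):

```c
#include <stdio.h>

/* An XBar/Argos/Kargos-style plugin is just a program that prints
 * lines. Text before "---" shows in the panel; lines after it build
 * the dropdown menu. Attributes follow a "|" on each line. */
int main(void) {
    double temp_c = 42.5;   /* hypothetical reading from your hardware */

    printf("%.1f °C\n", temp_c);                  /* panel text */
    printf("---\n");
    printf("Pressure: 1013 hPa\n");               /* dropdown items */
    printf("Status: OK | color=green\n");
    printf("Reset sensor | bash='/usr/local/bin/sensor-reset'\n");
    return 0;
}
```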

That’s fine for the Mac, but what about Linux? If you use Gnome, there is a very similar project called Argos. It is largely compatible with XBar, although it adds a few Gnome-specific features. If you use KDE (like I do) then you’ll want Kargos, which is more or less a port of Argos and adds a few things of its own.

Good News, Bad News

The good news is that, in theory, you could write a script that would run under all three systems. The bad news is that each has its own differences and quirks. Obviously, too, a compiled program could pose a problem on the Mac unless you recompile it.

Continue reading “Linux Fu: Easy Widgets”

Building Faster Rsync From Scratch In Go

For a quick file transfer between two computers, SCP is a fine program to use. For more complex, large, or regular backups, however, the go-to tool is rsync. It’s faster, more efficient, and usable in a wider range of circumstances. For all its perks, [Michael Stapelberg] felt that it had one major weakness: it is a tool written in C. [Michael] is philosophically opposed to programs written in C, so he set out to implement rsync from scratch in Go instead.
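
The heart of that efficiency is rsync’s delta-transfer algorithm, which uses a cheap rolling checksum to spot blocks the other side already has. Here is a simplified sketch of that checksum in C, the language the original tool is written in (toy block size; the real algorithm pairs this weak hash with a stronger one):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK 8   /* toy block size; real rsync blocks are far larger */

/* Two running sums, as in the rsync paper, let the window slide one
 * byte at a time without rescanning the whole block. */
typedef struct { uint32_t a, b; } rolling_t;

static rolling_t checksum(const uint8_t *p, size_t len) {
    rolling_t r = {0, 0};
    for (size_t i = 0; i < len; i++) {
        r.a = (r.a + p[i]) & 0xffff;
        r.b = (r.b + (uint32_t)(len - i) * p[i]) & 0xffff;
    }
    return r;
}

/* Slide the window one byte: drop `out`, add `in`. O(1) per step. */
static void roll(rolling_t *r, uint8_t out, uint8_t in, size_t len) {
    r->a = (r->a - out + in) & 0xffff;
    r->b = (r->b - (uint32_t)len * out + r->a) & 0xffff;
}

int main(void) {
    const uint8_t *data = (const uint8_t *)"hello, rolling checksums!";
    size_t n = strlen((const char *)data);

    rolling_t r = checksum(data, BLOCK);
    for (size_t i = 0; i + BLOCK < n; i++) {
        roll(&r, data[i], data[i + BLOCK], BLOCK);
        rolling_t fresh = checksum(data + i + 1, BLOCK);
        /* the rolled and freshly computed sums must always agree */
        printf("%zu: rolled=%08x fresh=%08x\n", i + 1,
               (unsigned)(r.a | (r.b << 16)),
               (unsigned)(fresh.a | (fresh.b << 16)));
    }
    return 0;
}
```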

[Michael]’s path to deciding to tackle this project is a complicated one. His ISP recently upgraded his internet connection to 25 Gbit/s, which meant that his custom router was now the bottleneck in his network. To solve that problem he migrated his router to a PC with several 25 Gbit/s network cards. To take full advantage of the speed now theoretically available, he began using a tool called gokrazy, which turns applications written in Go into their own appliance. That means that instead of installing a full Linux distribution to handle specific tasks (like routing, for example), the only thing loaded on the computer is essentially the Linux kernel, the Go compiler and libraries, and the Go application itself.

With router hardware capable of supporting these speeds, and only software written in Go running on it, the last step was to build rsync to support his network tasks. This meant that rsync itself needed to be rebuilt from scratch in Go. Once [Michael] completed this final task, he found that his implementation is actually much faster than the C version, thanks to the modernization found in the Go language and the fact that his router isn’t running all of the cruft associated with a standard Linux distribution.

For a software project of this scope, we find [Michael]’s step-by-step process worth taking note of for any problem any of us attempt to tackle. Not only that, but refactoring a foundational tool like rsync is an involved task on its own, let alone undertaking it simply to push network speeds beyond what most of us would already consider blazingly fast. We’re leaving out a ton of details on this build, so we definitely recommend checking out his talk in the video below.

Thanks to [sarinkhan] for the tip!

Continue reading “Building Faster Rsync From Scratch In Go”

Annotate PDFs On Linux With PDFrankenstein

On Windows and Mac machines, it’s not too troublesome to add text or drawings (such as signatures) to PDF files, but [Mansour Behabadi] found that on Linux machines, there didn’t seem to be a satisfying way or a simple tool. Being an enterprising hacker, [Mansour] set out to fill that gap, and the way it works under the hood is delightfully hacky, indeed.

The main thing standing in the way of creating such a tool is that the PDF format is a complex and twisty thing. Making a general-purpose PDF editing tool capable of inserting hyperlinks, notes, images, or drawings isn’t exactly a weekend project. But [Mansour] didn’t let that stop him; he leveraged the fact that tools already exist on Linux that can read and create PDF files, and tied them all together into what was at one point “a horrific patchwork of tools” which inspired the name pdfrankenstein.

The tool is a GUI that uses Inkscape and qpdf to convert a PDF page to an SVG file, sets it as a locked background, and then lets the user add any annotations they desire, using Inkscape as the editor. After changes are made, the program removes the background, overlays the annotations onto the original PDF, and exports the final file. Annotations can therefore be anything that can be done in Inkscape.
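
To make the plumbing concrete, here is a rough single-page sketch of that round trip as a C program shelling out to the same tools. The qpdf and Inkscape 1.x flags shown are from their respective CLIs, but this flattened version is our illustration; the real tool overlays the annotations back onto the original rather than re-exporting the whole SVG:

```c
#include <stdio.h>
#include <stdlib.h>

/* Run a command, echoing it first so the steps are visible. */
static int run(const char *cmd) {
    printf("+ %s\n", cmd);
    return system(cmd);
}

int main(void) {
    /* 1. pull one page out of the source document */
    run("qpdf input.pdf --pages . 1 -- page1.pdf");
    /* 2. convert it to SVG so Inkscape can use it as a background */
    run("inkscape page1.pdf --export-type=svg --export-filename=page1.svg");
    /* 3. ...the user annotates page1.svg in Inkscape by hand... */
    /* 4. export the annotated page back out as PDF */
    run("inkscape page1.svg --export-type=pdf --export-filename=annotated.pdf");
    return 0;
}
```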

Curious about these and other tools for handling PDFs? We’ve shared some programs and tricks when we previously covered dealing with the PDF format in Linux.

Linux And C In The Browser

There was a time when trying to learn to write low-level driver or kernel code was hard. You really needed two machines: one to work with, and one to screw up over and over again until you got it right. These days you can just spin up a virtual machine and roll it back every time you totally screw up. Much easier! We don’t think it is all that practical, but [nsommer] has an interesting post about loading up a C compiler and compiling Linux for a virtual machine. What’s different? Oh, the virtual machine is in your browser.

The v86 CPU emulator runs in the browser and looks like a Pentium III computer with the usual hardware. You might think it is slow, and it certainly isn’t going to be as fast as a rocket, but it does translate machine code into WebAssembly, so performance isn’t as bad as you might think.

The post goes into detail about how to create a simple web page that hosts v86. Once you cross-compile the kernel, you can boot the machine up virtually. The other interesting part is the addition of tcc, a pretty capable C compiler that is much smaller and faster than the very traditional gcc.

The tcc build is tricky because the normal build process compiles the compiler and then uses that same compiler to build the default libraries. When cross-compiling, this doesn’t work well because the library you need for the host compile is different from the library you want to target in the second pass. You’ll see how to work around that in the post, which goes on to show how to do remote debugging and even gets QEMU into the mix. Debugging inside v86 doesn’t seem to work so far. More posts on the topic are promised.

Honestly, this is one of those things like teaching a chicken to play checkers. It can be done, and there’s little practical value, but it is still something to see. On the other hand, if you spend the weekend working through this, your next Linux porting project ought to seem easy by comparison.

Amazing what you can pull off with WebAssembly. If you need a quick introduction, check this one out from [Ben James].

Lotus 123 For Linux Is Like A Digital Treasure Hunt

Ever hear of Lotus 123? It is an old spreadsheet program that dominated the early PC market, taking the crown from incumbent VisiCalc. [Tavis Ormandy] has managed to get the old software running natively under Linux — quite a feat for software that is around 40 years old and was meant for a different operating system. You can see the results in glorious green text on a black screen in the video below.

If you are a recent convert to Linux, you might not remember what a pain it was “in the old days” to install software. But in this case, it is even worse since the software isn’t even for Linux. The whole adventure started with [Tavis] wanting to find the API kit used to add plugins to Lotus. In theory, you could use it to add modern features to the venerable spreadsheet program.

The $395 software development kit wasn’t very common, and while there was also a Unix version of Lotus 123, no one seemed to have a copy of that either. [Tavis] eventually found someone who ran a circa-1990 BBS and had the data on tape. It turned out the tape held a copy of the SDK that he was able to use. But he noticed something else in the BBS’s list of files: the long-lost Unix version of Lotus!

An investigation found that the installer used TD0 files, which took some research. Luckily, a utility exists that can convert these to raw disk images. Inside was a very large object file. Apparently, in the days before dynamic loading, that object would be linked with plug-in modules to install them.

The object file had all of its debugging information intact, which shed a lot of light on the program’s internal operations. The old executable used the COFF format, but it is possible to relink it to an ELF file. Of course, it isn’t just that easy. [Tavis] wrote a small program to remove the old-style Unix system calls so they could be rerouted to Linux system calls. Some calls just pass through, but others need some translation due to differences in things like structure layout, sizes, and alignment.
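
To get a feel for what that kind of translation involves, here is a hypothetical shim, not [Tavis]’s actual code, with an invented old-style structure layout: run the modern call, then repack the result into the layout the 1990 binary was compiled against.

```c
#include <string.h>
#include <sys/stat.h>

/* Hypothetical layout an old Unix binary might expect. The field
 * widths and ordering here are invented for illustration. */
struct old_stat {
    short st_dev;
    short st_ino;
    short st_mode;
    short st_nlink;
    short st_uid;
    short st_gid;
    long  st_size;
    long  st_mtime_sec;
};

/* Call the modern syscall, then repack the result. Field widths,
 * ordering, and padding are exactly what bites you here. */
static int shim_stat(const char *path, struct old_stat *out) {
    struct stat st;
    if (stat(path, &st) < 0)
        return -1;
    memset(out, 0, sizeof(*out));
    out->st_dev   = (short)st.st_dev;
    out->st_ino   = (short)st.st_ino;   /* may truncate on modern filesystems */
    out->st_mode  = (short)st.st_mode;
    out->st_nlink = (short)st.st_nlink;
    out->st_uid   = (short)st.st_uid;
    out->st_gid   = (short)st.st_gid;
    out->st_size  = (long)st.st_size;
    out->st_mtime_sec = (long)st.st_mtime;
    return 0;
}

int main(void) {
    struct old_stat os;
    return shim_stat("/etc/hostname", &os) == 0 ? 0 : 1;
}
```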

In the end, it all worked but didn’t have a valid license. However, [Tavis] felt that since he did have a license and the software is abandoned, he was within his rights to crack the license check.

We are well-known abusers of spreadsheets around here. Of course, we aren’t the only ones.

Continue reading “Lotus 123 For Linux Is Like A Digital Treasure Hunt”

Things Are Getting Rusty In Kernel Land

There is gathering momentum around the idea of adding Rust to the Linux kernel. Why exactly is that a big deal, and what does this mean for the rest of us? The Linux kernel has been just C and assembly for its entire lifetime. A big project like the kernel has a great deal of shared tooling built around making its languages work, so adding another one is quite an undertaking. There’s also the project culture that has developed around the language choice. So why are the grey-beards of kernel development even entertaining the idea of adding Rust? To answer in a single line, it’s because C was designed in the early 1970s to run on the minicomputers at Bell Labs. If you want to shoot yourself in the foot, C will hand you the loaded firearm.

On the other hand, if you want to write a kernel, C is a great language for doing low-level coding. Direct memory access? Yep. Inline assembly? Sure. Runs directly on the metal, with no garbage collection or virtual machines in the way? Absolutely. But all the things that make C great for kernel programming also make C dangerous for kernel programming.
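
To make that concrete, here is a taste of the bare-metal style C permits, with a made-up device address and GCC-style x86-64 inline assembly (running the MMIO write in user space will fault, which is rather the point):

```c
#include <stdint.h>
#include <stdio.h>

/* Direct memory access: treat a hypothetical device register at a
 * fixed address as a plain variable. Nothing stops you. */
#define UART_TX (*(volatile uint8_t *)0x3F8UL)

void putc_raw(char c) {
    UART_TX = c;    /* one store, straight at the "hardware" */
}

/* Inline assembly: read the x86-64 timestamp counter directly. */
static uint64_t rdtsc(void) {
    uint32_t lo, hi;
    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void) {
    uint64_t t0 = rdtsc();
    uint64_t t1 = rdtsc();
    printf("two rdtsc reads, %llu cycles apart\n",
           (unsigned long long)(t1 - t0));
    /* putc_raw('!') would fault here: nothing is mapped at our
     * made-up address in user space. In a kernel, there might be. */
    return 0;
}
```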

Now I hear your collective keyboards clacking in consternation: “It’s possible to write safe C code!” Yes, yes it is possible. It’s just very easy to mess up, and when you mess up in a kernel, you have security vulnerabilities. There are also some things that are objectively terrible about C, like undefined behavior. C compilers do their best to do the right thing with cursed code like i++ + i++; or a[i] = i++;. But that’s almost certainly not going to do what you want it to, and even worse, it may sometimes do the right thing.
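
Here is that cursed code in compilable form. What it prints legitimately varies between compilers and optimization levels, which is exactly the problem:

```c
#include <stdio.h>

int main(void) {
    int i = 1;
    /* Undefined behavior: i is modified twice with no sequencing
     * between the modifications. The compiler will pick something,
     * but the C standard promises nothing at all. */
    int x = i++ + i++;
    printf("x = %d\n", x);

    int a[4] = {0};
    int j = 1;
    a[j] = j++;   /* also UB: does a[1] or a[2] get the value? */
    printf("a[1] = %d, a[2] = %d\n", a[1], a[2]);
    return 0;
}
```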

Rust seems to be gaining popularity. There are some ambitious projects out there, like rewriting coreutils in Rust, and many other standard applications are getting a Rust rewrite. It was fairly inevitable that the collective of Rust developers would start to ask: could we invade the kernel next? This was pitched at a Linux Plumbers Conference, and the mailing list response was cautiously optimistic. If Rust could be added without breaking things, and without losing the very things that make Rust useful, then yes, it would be interesting.

Continue reading “Things Are Getting Rusty In Kernel Land”