FLOSS Weekly Episode 794: Release Them All With JReleaser

This week Jonathan Bennett and Katherine Druckman chat with Andres Almiray about JReleaser, the Java release automation tool that’s for more than just Java, and more than just releases. What was the original inspiration for the tool? And how does JReleaser help avoid a string of commits trying to fix GitHub Actions? Listen to find out!

Continue reading “FLOSS Weekly Episode 794: Release Them All With JReleaser”

Programming Ada: Designing A Lock-Free Ring Buffer

Ring buffers are incredibly useful data structures that allow data to be written and read continuously without having to worry about where it is being written to or read from. Although they present a continuous (ring) buffer via their API, internally they maintain a finite, fixed-size buffer. This makes it crucial that reads and writes never interfere with each other, something which can be guaranteed in a number of ways. The easiest solution is to use a mutual exclusion mechanism like a mutex, but this comes with a severe performance penalty.

A lock-free ring buffer (LFRB) accomplishes the same result without a lock such as a mutex, relying instead on hardware-supported atomic operations. In this article we look at how to design an LFRB in Ada, while comparing and contrasting it with the C++-based LFRB that it was ported from. Although similar in some respects, the Ada version uses Ada-specific features such as access types and the rendezvous mechanism with task types (‘threads’).
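The article’s code is in Ada, ported from a C++ original, but the core trick is the same in any language with atomic operations: the producer owns one index, the consumer owns the other, and each side only publishes its updates with release/acquire ordering. As a rough, hypothetical sketch of a single-producer, single-consumer version using C11 atomics (the names and the fixed size are ours, not the article’s):

```c
/* Minimal single-producer/single-consumer lock-free ring buffer sketch.
 * Illustrative only: the names and the fixed size are not from the article. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RB_SIZE 256  /* power of two, so indices wrap with a cheap mask */

typedef struct {
    uint8_t data[RB_SIZE];
    _Atomic size_t head;  /* next slot the producer writes */
    _Atomic size_t tail;  /* next slot the consumer reads  */
} ring_buffer;

bool rb_push(ring_buffer *rb, uint8_t value) {
    size_t head = atomic_load_explicit(&rb->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&rb->tail, memory_order_acquire);
    if (((head + 1) & (RB_SIZE - 1)) == tail)
        return false;                               /* buffer is full */
    rb->data[head] = value;
    /* Release pairs with the consumer's acquire: the data write above
     * becomes visible before the new head index does. */
    atomic_store_explicit(&rb->head, (head + 1) & (RB_SIZE - 1),
                          memory_order_release);
    return true;
}

bool rb_pop(ring_buffer *rb, uint8_t *value) {
    size_t tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&rb->head, memory_order_acquire);
    if (tail == head)
        return false;                               /* buffer is empty */
    *value = rb->data[tail];
    atomic_store_explicit(&rb->tail, (tail + 1) & (RB_SIZE - 1),
                          memory_order_release);
    return true;
}
```

Because the producer only ever writes head and the consumer only ever writes tail, the two sides never contend for the same variable, which is exactly what lets the mutex go away.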

Continue reading “Programming Ada: Designing A Lock-Free Ring Buffer”

Pnut: A Self-Compiling C Transpiler Targeting Human-Readable POSIX Shell

Shell scripting is one of those skills that is absolutely invaluable on UNIX-like systems, whether on one of the BSDs, one of the two zillion Linux distributions, or MacOS. Yet not every shell is the same, and not everybody can be bothered to learn the differences between the sh, bash, ksh, zsh, dash, fish and other shells, which can make a project like Pnut seem rather tempting. Rather than dealing with shell scripting directly, the user writes their code in the lingua franca of computing, AKA C, which is then transpiled into a shell script that should run in any POSIX-compliant shell.

The transpiler can be used both online via the main Pnut website, and locally using the (BSD 2-clause) open source code on GitHub. The main limitations are listed there as well, and mostly concern C constructs that do not map nicely to a POSIX shell: no support for floating point numbers or unsigned integers, no goto or switch statements, and no taking the address of a variable with &. These and the preprocessor-related limitations are largely to be expected, as POSIX shells are hardly direct replacements for full-blown C code.
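To get a feel for what does fit within those constraints, here is a small integer-only C program in the spirit of what such a transpiler can handle; we haven’t verified that Pnut accepts this exact code, so treat it as purely illustrative:

```c
/* Integer-only C in the spirit of the supported subset: no floats, no
 * unsigned types, no goto, no switch, no address-of. Illustrative only,
 * not taken from the Pnut documentation. */
#include <stdio.h>

int gcd(int a, int b) {
    while (b != 0) {
        int t = a % b;
        a = b;
        b = t;
    }
    return a;
}

int main(void) {
    printf("gcd(252, 105) = %d\n", gcd(252, 105));
    return 0;
}
```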

As a self-professed research project, Pnut is an interesting one, although if you are writing shell scripts for anything important, you probably just want to buckle down and learn the ins and outs of POSIX shell scripting and beyond. That’s a bit of a learning curve, but it’s well worth the effort, if only because it makes overall shell usage, even beyond scripting, so much better.

Manually Computing Logarithms To Grok Calculators

Logarithms are everywhere in mathematics and derived fields, but we rarely think about how trigonometric functions, exponentials, square roots and the like are calculated after we punch the numbers into a calculator of some description and hit ‘calculate’. How do we even know that the answer it returns is remotely correct? This was the basic question that [Zachary Chartrand] set out to answer for [3Blue1Brown]’s Summer of Math Exposition 3 (SoME-3). Inspired by learning to script in Python, he dug into how such calculations are implemented by the scripting language, which naturally led to the standard C library. Here he found an interesting implementation of the natural logarithm and the way geometric series convergence is sped up.

The short answer is that fundamental properties of these series are used to decrease the number of terms, and thus the number of calculations, required to get a result. One example provided in the article reduces the naïve approach from 36 terms down to 12 with some optimization, while the versions used in the standard C library are optimized even further. This not only reduces the time needed, but also the memory required, both of which make many types of calculations more feasible on less powerful systems.
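As a rough illustration of the principle (reduce the argument first, then evaluate a series that converges quickly on the reduced range), here is a sketch of a natural logarithm in C. It is neither the article’s derivation nor the actual libm implementation, just the general shape of the trick:

```c
/* Sketch of the idea: split x = m * 2^e so ln(x) = ln(m) + e * ln(2), then
 * compute ln(m) from the fast-converging series
 *   ln(m) = 2 * (z + z^3/3 + z^5/5 + ...)  with  z = (m - 1) / (m + 1).
 * Not the exact libm algorithm, just an illustration of the principle. */
#include <math.h>
#include <stdio.h>

double my_log(double x) {
    int e;
    double m = frexp(x, &e);            /* x = m * 2^e, with 0.5 <= m < 1 */
    double z = (m - 1.0) / (m + 1.0);   /* |z| <= 1/3 on that range       */
    double z2 = z * z;
    double term = z;
    double sum = 0.0;
    for (int k = 1; k <= 23; k += 2) {  /* a dozen terms is already plenty
                                           for this illustration           */
        sum += term / k;
        term *= z2;
    }
    return 2.0 * sum + e * 0.693147180559945309;    /* + e * ln(2) */
}

int main(void) {
    printf("my_log(10) = %.15f\n", my_log(10.0));
    printf("   log(10) = %.15f\n", log(10.0));
    return 0;
}
```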

Even if most of us are probably more than happy to just keep mashing that ‘calculate’ button and (rightfully) assume that the answer is correct, such a glimpse at the internals of the calculations involved definitely provides a measure of confidence and understanding, not to mention an appreciation for those who did the hard work to make all of this possible.

Ask Hackaday: Should We Teach BASIC?

Suppose you decide you want to become a novelist. You enroll in the Hackaday Famous Novelists School where your instructor announces that since all truly great novels are written in Russian, our first task will be to learn Russian. You’d probably get up and leave. The truth is, what makes a great (or bad) novel transcends any particular language, and you could make the same argument for programming languages.

Despite what the pundits say, understanding the basics of how computers work is more important than knowing C, Java, or the language of the week. A recent post by [lackofimagination] proposes that we should teach programming using BASIC. And not a modern whizz-pow BASIC, but old-fashioned regular BASIC as we might have used it in the 1980s.

Certainly, a whole generation of programmers cut their teeth on BASIC. On the other hand, the programming world has changed a lot since then. While you can sort of apply functional and object-oriented techniques to any programming language, it isn’t simple and the details often get in the way of the core ideas.

Still, some things don’t change. The idea of variables, program flow, loops, and arrays all have some parallel in just about anything, so we can see some advantages to starting out simply. After all, you don’t learn to drive by trying it out in the Indy 500, right?

What do you think? If you were teaching programming today, would you start with BASIC? Or with something else? You can modernize a little bit with QB64. Or try EndBasic which just recently had a new release.

Using OpenCV To Catch A Hungry Thief

[Image: Rory, the star of the show]

[Scott] has a neat little closet in his carport that acts as a shelter and rest area for their outdoor cat, Rory. She has a bed and food and water, so when she’s outside on an adventure she has a place to eat and drink and nap in case her humans aren’t available to let her back in. However, [Scott] recently noticed that they seemed to be going through a lot of food, and they couldn’t figure out where it was going. Kitty wasn’t growing a potbelly, so something else was eating the food.

So [Scott] rolled up his sleeves and hacked together an OpenCV project with a FLIR Boson thermal camera to try and catch the thief. To reduce the amount of footage to go through, the system would only capture video when it detected movement or a large change in the scene. It would then take snapshots, timestamp them, and optionally record a feed of the video. [Scott] started out writing the system in Python, but it couldn’t keep up and dropped frames whenever motion was detected. Eventually he rewrote the prototype in C++, which of course resulted in much better performance!
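[Scott]’s detection runs through OpenCV, but the gating idea itself is simple enough to sketch in plain C: compare consecutive grayscale frames and only start recording once enough pixels have changed. The function below is purely illustrative; the names and thresholds are made up, and a real system would feed it frames pulled straight from the camera:

```c
/* Rough sketch of the motion gate: compare consecutive grayscale frames and
 * only record when enough pixels changed by enough. [Scott]'s real system
 * uses OpenCV and a FLIR Boson; names and thresholds here are made up. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

bool motion_detected(const uint8_t *prev, const uint8_t *curr,
                     size_t num_pixels, uint8_t pixel_delta,
                     double changed_fraction) {
    size_t changed = 0;
    for (size_t i = 0; i < num_pixels; i++) {
        uint8_t diff = prev[i] > curr[i] ? prev[i] - curr[i]
                                         : curr[i] - prev[i];
        if (diff > pixel_delta)
            changed++;
    }
    /* Trigger recording when, say, more than 1% of the pixels moved. */
    return (double)changed / (double)num_pixels > changed_fraction;
}
```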

Continue reading “Using OpenCV To Catch A Hungry Thief”

A 64-bit X86 Bootloader From Scratch

For most people, you turn on your computer, and it starts the operating system. However, the reality is much more complex, as [Thasso] discovered. Even modern x86 chips start in 16-bit real mode, and a bit of fancy footwork is required to work up to a modern protected mode with full 64-bit support. Want to see how? [Thasso] shows us the ropes.

Nowadays it is easier to develop such things because you don’t have to test on real hardware: an emulator like QEMU will suffice. If you know assembly language, the process is surprisingly simple, although there is a lot of nuance and subtlety. The biggest task is setting up appropriate page tables to control the memory mapping. In real mode, segments give access to fixed 64 K blocks of memory unless you use some tricks, while in protected mode a segment can describe a block of memory that is very small or covers the entire address space. In practice you can set the segments to cover all of memory and then more or less ignore them, but you still have to define them to make the switch to protected mode.
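The paging part usually boils down to identity mapping some low memory before flipping on long mode. The sketch below shows the common three-level setup with 2 MB pages; it follows the usual recipe rather than [Thasso]’s exact code, and in a real bootloader the tables sit at fixed physical addresses, with the top-level table’s address loaded into CR3 from assembly:

```c
/* Sketch of the usual "identity map the first GiB with 2 MB pages" layout
 * needed before enabling long mode: PML4 -> PDPT -> page directory.
 * Illustrative of the common recipe, not [Thasso]'s exact code. */
#include <stdint.h>

#define PAGE_PRESENT  0x001ULL
#define PAGE_WRITABLE 0x002ULL
#define PAGE_LARGE    0x080ULL   /* PS bit: this entry maps a 2 MB page */

static _Alignas(4096) uint64_t pml4[512];
static _Alignas(4096) uint64_t pdpt[512];
static _Alignas(4096) uint64_t pd[512];

void setup_identity_paging(void) {
    pml4[0] = (uint64_t)(uintptr_t)pdpt | PAGE_PRESENT | PAGE_WRITABLE;
    pdpt[0] = (uint64_t)(uintptr_t)pd   | PAGE_PRESENT | PAGE_WRITABLE;
    for (uint64_t i = 0; i < 512; i++)   /* 512 entries * 2 MB = first 1 GiB */
        pd[i] = (i * 0x200000ULL) | PAGE_PRESENT | PAGE_WRITABLE | PAGE_LARGE;
    /* The physical address of pml4 then goes into CR3 before the switch,
     * which has to happen in assembly and is omitted here. */
}
```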

In the bad old days, you had more reason to worry about this if you were writing a DOS extender or using some tricks to get access to more memory. But it’s still good to know if you are rolling your own operating system. Why do the processors still boot into real mode? Good question.