At Last! Faster OpenSCAD Rendering Is On The Horizon

Known as “The Programmers Solid 3D CAD Modeller”, OpenSCAD is used by many people for whom writing code comes more naturally than learning a fiddly user interface. It’s a very capable piece of software, but regular users will tell you that it can be rather slow when it comes to rendering your work. We’re very pleased to see that a fix for this has been produced courtesy of [@ochafik]; it can now be found as an experimental feature in nightly builds, and will no doubt find its way into official releases in due course.

It might surprise you to learn that, despite modern computers invariably having multi-core architectures, OpenSCAD has not previously been able to take advantage of them. The above-linked thread spans over a decade of experimentation and contains some fascinating discussions if you’re prepared to wade through it, culminating a few weeks ago in the announcement of the new feature giving access to multiple CPUs. We don’t have it yet, but it’s great to know it’s in the works, and we’re looking forward to render times involving considerably less of a wait.

So many OpenSCAD projects have passed through these pages over the years that it’s safe to say it has a significant user base among Hackaday readers. It’s still something an AI hasn’t mastered, though.

Thanks [pca006132] for the tip.

RP2040 picture on left by Phiarc, CC BY-SA 4.0, via Wikimedia

Kaluma Puts JavaScript On The RP2040

With a simple firmware update, Kaluma puts a lightweight JavaScript runtime on the Raspberry Pi Pico (which uses the RP2040 microcontroller), providing handy modules for file systems, graphics, networking, and more. Code for a simple LED blink can then look like:

// index.js
const led = 25;                 // onboard LED on the Pico is GPIO 25
pinMode(led, OUTPUT);
setInterval(() => {
  digitalToggle(led);           // flip the LED state once a second
}, 1000);

Development can then be done with tools that are very familiar to JavaScript developers, such as npm, and new code can be flashed to a USB-connected Pico with the (Node.js-based) Kaluma command-line interface. Take a look at the GitHub repository for the project, or browse some of the projects made with Kaluma.

Much like with MicroPython, there’s value to be had in putting implementations of high-level languages on microcontrollers. Each new language opens embedded programming to a whole new group of coders. But it’s not just languages making their way to the RP2040. Wonderful projects such as emulating the ZX Spectrum on an RP2040 also happen.

Thanks to [Shri Hari Ram] for the tip!

Turing Complete Programming On ARM With Two Instructions

There are many questions that can be asked about software projects, most of them starting with ‘Why…?’. This is true for the challenge of proving that cascading stylesheets are Turing-complete, or that you don’t need all those fancy bits of an ARM processor’s ISA when you’ve already got the LDM and STM instructions in the 32-bit ISA. What originally started off as a bit of a running gag among a group of developers led to [Kellan Clark] implementing a Turing-complete computer and a functioning interpreter using nothing but these two opcodes.

Adding some Brainf**k to your ARM, inside your GBA.

These two opcodes essentially allow data to be stored to or loaded from memory using any combination of the 16 general-purpose registers (GPRs). This makes them both extremely versatile and extremely open to ‘abuse’, as in this example. For a straightforward implementation that could prove the concept, [Kellan] decided to pick one of everyone’s favorite esoteric programming languages: Brainf**k, creating the charmingly titled Armf**k that allows anyone to write BF programs for any suitable ARM processor, like the ARM7TDMI in the Game Boy Advance that [Kellan] targeted.
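To get a feel for what those two opcodes actually do, here is a rough C model of a store-multiple/load-multiple pair (purely illustrative, not [Kellan]’s code): a 16-bit register list selects any subset of the GPRs, and the selected registers are copied to or from consecutive words of memory.

#include <stdint.h>

/* Illustrative C model of ARM's STM/LDM (increment-after) semantics, not
 * emulator code: a 16-bit mask picks any subset of the 16 GPRs, and the
 * chosen registers are transferred to or from consecutive 32-bit words. */
static uint32_t r[16];                      /* r0..r15, r15 being the PC */

static void stm(uint32_t *base, uint16_t reglist) {
    for (int i = 0; i < 16; i++)            /* store selected registers */
        if (reglist & (1u << i))
            *base++ = r[i];
}

static void ldm(const uint32_t *base, uint16_t reglist) {
    for (int i = 0; i < 16; i++)            /* load selected registers */
        if (reglist & (1u << i))
            r[i] = *base++;
}

Because r15 is the program counter, a load-multiple that includes it also redirects execution, which goes a long way towards explaining why two instructions are enough to build an interpreter on.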

As a proof of concept it’s unquestionably intriguing, and a great example of how the most powerful parts of any ISA are those that move data around. After all, as anyone who writes ASM or C knows, computers are just machines that copy bytes around really fast to make stuff happen. Mind-blowing examples like these serve to illustrate that point quite well.

Tip kindly provided by [eeucalyptus].

Do Bounties Hurt FOSS?

As with many things in life, motivation is everything. This also applies to the development of software, a field that has become immensely important over the past decades. Within a commercial context, the motivation to write software is primarily financial: a company’s products are developed by individuals who are financially compensated for their time. This is often different with Free and Open Source Software (FOSS) projects, where the motivation to develop the software derives in many cases more from passion, and sometimes a wildly successful hobby, than from any financial incentive.

Yet what if financial incentives are added by those who have a vested interest in seeing certain features added or changed in a FOSS project? While with a commercial project it’s clear (or should be) that the paying customers are the ones whose needs are to be met, with a volunteer-based FOSS project the addition of financial incentives makes for a much fuzzier system. This is where FOSS projects like the Zig programming language have put their foot down, calling FOSS bounties ‘damaging’.

Continue reading “Do Bounties Hurt FOSS?”

Processes, Threads, And… Fibers?

You’ve probably heard of multithreaded programs, where a single process can have multiple threads of execution. But there is yet another layer for building multitasking programs, known as a fiber. [A Graphics Guy] lays it out in a lengthy but well-done post. There are examples for both x64 and arm64, although the post mainly focuses on x64 for Windows. However, the ideas will apply anywhere.

In the old days, there was one CPU, and when your program ran on it, it was in control. But that’s wasteful, so software quickly moved to a model where many programs could share the CPU simultaneously. Then, as that got overloaded, computers got more CPUs. Most operating systems have the idea of a process: a program that thinks it is in complete control, but is really sharing the CPU with other processes. The problem arises when you want to have multiple “little” programs that cooperate. Processes are not really supposed to know about one another, and if they do, there’s usually some heavyweight communication mechanism allowing them to talk.
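To make the fiber idea concrete, here is a minimal sketch using the Win32 fiber API, which the post’s x64 Windows examples build on (this is our own toy example, not code from the post): a fiber only runs when something explicitly switches to it, so scheduling is entirely cooperative and stays within a single thread.

#include <windows.h>
#include <stdio.h>

static LPVOID main_fiber;              /* the main thread's own fiber context */

/* A fiber runs only while something has explicitly switched to it. */
static VOID CALLBACK worker(LPVOID param) {
    (void)param;
    for (int i = 0; i < 3; i++) {
        printf("worker step %d\n", i);
        SwitchToFiber(main_fiber);     /* yield back to our "scheduler" */
    }
    SwitchToFiber(main_fiber);         /* never simply return from a fiber */
}

int main(void) {
    main_fiber = ConvertThreadToFiber(NULL);   /* turn this thread into a fiber */
    LPVOID w = CreateFiber(0, worker, NULL);   /* 0 = default stack size */
    for (int i = 0; i < 3; i++) {
        SwitchToFiber(w);              /* cooperatively run the worker a bit */
        printf("back in main\n");
    }
    DeleteFiber(w);
    return 0;
}

The appeal is that a switch is little more than swapping stacks and registers in user space, without the kernel bookkeeping that a thread context switch drags along.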

Continue reading “Processes, Threads, And… Fibers?”

Decker Is The Cozy Retro Creative Engine You Didn’t Know You Needed

[John Earnest]’s passion project Decker is creative software with a classic MacOS look (it’s not limited to running on Macs, however) for easily making and sharing interactive documents with sound, images, hypertext, scripted behavior, and more, allowing you to make just about anything in a WYSIWYG manner.

Decker creates decks, which can be thought of as stacks of digital cards that link to one another. Each card in a deck can contain cozy 1-bit art, sound, interactive elements, scripted behavior, and a surprisingly large number of other features.

Curious? Check out the Decker guided tour to get a peek at just what Decker is capable of. Then download it and prototype an idea, create a presentation, make a game, or just doodle some 1-bit art with nice tools.

Continue reading “Decker Is The Cozy Retro Creative Engine You Didn’t Know You Needed”

The Python documentation for str.strip().

Faster String Processing With Bloom Filters

At first, string processing might seem very hard to optimize. If you’re looking for a newline in some text, you have to check every character in the string against every type of newline, right? Apparently not, as [Abhinav Upadhyay] explains in a look at the tricks CPython pulls in its string processing.

The trick in question is based on Bloom filters, used here to quickly tell whether a character could possibly match any character in a predefined set. A Bloom filter works by condensing a set of more complex data down to a few bits in an array. When an element is added, a bit is set, the index of which is determined by a hash function. To test whether an element might be in the set, the same is done, but the bit is tested instead of set. This gives a fast check that can produce false positives but never false negatives.

CPython doesn’t stop optimizing there: instead of a complicated hash function, it simply uses the lowest six bits of each character. It also keeps the bit array at only 64 bits, small enough to fit in a single register, which avoids memory accesses altogether and in turn makes the comparisons much faster. [Abhinav] goes into far more detail in his article, definitely worth a read for any computer scientists among us.
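As a rough illustration of the idea (a simplified sketch in C, not CPython’s actual code), the whole filter can live in a single 64-bit word: each character of the set to strip sets the bit picked by its lowest six bits, and the per-character test collapses to a shift and an AND, with an exact lookup only needed on the rare hit.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Simplified sketch of the CPython-style trick, not the real implementation:
 * the whole "filter" is one 64-bit word, and a character's lowest six bits
 * pick which bit stands for it. The test can give false positives but never
 * false negatives, so it works as a fast pre-check before an exact lookup. */
typedef uint64_t bloom_t;

static inline bloom_t bloom_add(bloom_t b, unsigned char ch) {
    return b | (1ULL << (ch & 63));            /* set the bit for this char */
}

static inline int bloom_maybe_contains(bloom_t b, unsigned char ch) {
    return (b >> (ch & 63)) & 1;               /* one shift and one AND */
}

/* Count how many leading characters of s are in set, as an lstrip() might. */
static size_t lstrip_span(const char *s, const char *set) {
    bloom_t b = 0;
    for (const char *p = set; *p; p++)
        b = bloom_add(b, (unsigned char)*p);
    size_t i = 0;
    while (s[i]
           && bloom_maybe_contains(b, (unsigned char)s[i])   /* cheap reject */
           && strchr(set, s[i]))                             /* exact check  */
        i++;
    return i;
}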

Nowadays there’s an ever-increasing amount of talk about AI (specifically large language models), so why not apply an LLM to Python to fix the bugs for you?