OpenGL Machine Learning Runs On Low-End Hardware

If you’ve looked into GPU-accelerated machine learning projects, you’re certainly familiar with NVIDIA’s CUDA architecture. It also follows that you’ve checked the prices online, and know how expensive it can be to get a high-performance video card that supports this particular brand of parallel programming.

But what if you could run machine learning tasks on a GPU using nothing more exotic than OpenGL? That’s what [lnstadrum] has been working on for some time now, as it would allow devices as meager as the original Raspberry Pi Zero to run tasks like image classification far faster than they could using their CPU alone. The trick is to break down your computational task into something that can be performed using OpenGL shaders, which are generally meant to push video game graphics.

An example of X2’s neural net upscaling.

[lnstadrum] explains that OpenGL releases from the last decade or so actually include so-called compute shaders specifically for running arbitrary code. But unfortunately that’s not an option on boards like the Pi Zero, which only supports the OpenGL for Embedded Systems (GLES) 2.0 standard from 2007.
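
The core trick is easier to picture with a concrete example. Below is a minimal sketch of the idea rather than code lifted from Beatmup: input activations are packed into a texture, a GLES 2.0 fragment shader computes one dot-product-plus-ReLU per output pixel, and the result is rendered into another texture that feeds the next pass. The uniform and varying names are made up for illustration.

```cpp
// Sketch of the trick: a GLES 2.0 fragment shader used as a compute kernel.
// Illustration of the general approach, not code from Beatmup itself.
#include <cstdio>

// GLSL ES 1.00 (what the Pi Zero's GPU understands): one output pixel = one
// dot product of a small input patch with fixed weights, plus bias, through a ReLU.
static const char* kFragmentShader = R"(
precision mediump float;
uniform sampler2D inputTex;   // activations packed into RGBA texels
uniform vec2 texelSize;       // 1.0 / texture dimensions
varying vec2 uv;              // set up by a trivial pass-through vertex shader

void main() {
    // three taps of a tiny convolution, weights baked in as constants
    vec4 a = texture2D(inputTex, uv - vec2(texelSize.x, 0.0));
    vec4 b = texture2D(inputTex, uv);
    vec4 c = texture2D(inputTex, uv + vec2(texelSize.x, 0.0));
    float acc = dot(a, vec4(0.25)) + dot(b, vec4(0.5)) + dot(c, vec4(0.25)) - 0.1;
    gl_FragColor = vec4(max(acc, 0.0));  // ReLU, written to the output texture
}
)";

int main() {
    // In a real program this string goes through glShaderSource()/glCompileShader()
    // and is drawn over a full-screen quad into a texture attached to a framebuffer
    // object; chaining such passes gives you the layers of a network.
    std::printf("%s\n", kFragmentShader);
    return 0;
}
```

Chain a handful of passes like that together and you have, in effect, a small inference engine running on hardware that has never heard of CUDA.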

Constructing the neural net in such a way that it would be compatible with these more constrained platforms was considerably more difficult, but the payoff is code that runs on a far wider range of hardware. During tests, both the Raspberry Pi Zero and several older Android smartphones were able to run a pre-trained image classification model at a respectable rate.

This isn’t just some thought experiment, [lnstadrum] has released an image processing framework called Beatmup using these concepts that you can play around with right now. The C++ library has Java and Python bindings, and according to the documentation, should run on pretty much anything. Included in the framework is a simple tool called X2 which can perform AI image upscaling on everything from your laptop’s integrated video card to the Raspberry Pi; making it a great way to check out this fascinating application of machine learning.

Truth be told, we’re a bit late to the party on this one, as Beatmup made its first public release back in April of this year. It might have flown under the radar until now, but we think there’s a lot of potential for this project, and hope to see more of it once word gets out about the impressive results it can wring out of even the lowliest hardware.

[Thanks to Ishan for the tip.]

Using VHDL To Generate Discrete Logic PCB Designs

VHDL and Verilog are hardware description languages, used to describe and define logic circuits. They’re typically used to design ASICs and to program FPGAs, essentially using software to define hardware. However, [Tim] has done something altogether quite creative, creating tools to take VHDL and Verilog and spit out PCB designs for discrete logic. 

Yes, you read that correctly. The basic idea is to take VHDL source code, and then make a PCB layout that implements the desired logic using resistor-transistor logic. From there, the PCB design files can be shipped off to a manufacturer for pick-and-place assembly at a fraction of the cost of producing a bespoke ASIC.

The drawbacks are obvious; tons of individual discrete parts are required, the size penalty is hilariously bad, and power usage is almost certainly orders of magnitude higher than doing the same logic on an ASIC or even FPGA. Oh, and everything’s much slower, too.

However, as an academic exercise or simply for fun, it’s an awesome bit of work. The idea that one can define a complicated logic circuit and have a PCB implementing the logic whipped up by automated tools is amazing, and we absolutely want to see more of this type of thing.

We’ve seen similar work done with VHDL synthesis into 74-series logic design. If you’ve been developing your own fancy digital-logic-fu, be sure to drop us a line!

[Thanks to Yann Guidon for the tip!]

M5Paper Gets Open Source Weather Display Firmware

We know you like soldering irons; we’re quite fond of them ourselves. But the reality is, modular components and highly capable development boards allow the modern hardware hacker to get things done with far less solder smoke than ever before. In fact, sometimes all you need to finish your project is the right code.

Case in point, check out the slick electronic paper weather display that [Danko Bertović] shows off in the latest Volos Projects video. While it certainly fits the description of a DIY project, he didn’t have to put any of the hardware together himself. The M5Paper is an ESP32 development kit designed around a crisp 4.7″, 960 x 540 e-paper panel that includes everything from environmental sensors to an internal 1150 mAh battery. To make your handheld e-paper dreams come true, the only thing you need to provide is the software.

The weather display code provided by [Danko] should certainly get you going in the right direction. Now don’t get us wrong, there’s certainly no shame in just flashing his code to the device and plunking it on your desk. It’s a gorgeous looking interface, and we all know that a sprinkling of open source code is often all it takes to make a standard consumer device extraordinary. But by using the code he’s provided as a launching point, you can take this turn-key device and really make it your own.
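
If you want a feel for how little glue code the hardware actually needs, here’s a minimal sketch that reads the M5Paper’s onboard SHT30 sensor and puts the result on the panel. It assumes the stock M5EPD Arduino library; the call names are our best recollection of that library’s API and may differ between versions, and this is an illustration rather than anything from [Danko]’s firmware.

```cpp
// Minimal M5Paper sketch: read the onboard SHT30 sensor and show it on the e-paper.
// Assumes the M5EPD Arduino library from M5Stack; not code from the weather firmware.
#include <M5EPD.h>

M5EPD_Canvas canvas(&M5.EPD);   // off-screen canvas, pushed to the panel in one go

void setup() {
    M5.begin();                  // init panel, touch, SD, serial, battery ADC
    M5.EPD.SetRotation(90);      // portrait orientation
    M5.EPD.Clear(true);
    M5.SHT30.Begin();            // temperature/humidity sensor on the internal bus
    canvas.createCanvas(540, 960);
    canvas.setTextSize(4);
    canvas.setTextColor(15);     // 15 = black in the 4-bit grayscale canvas
}

void loop() {
    M5.SHT30.UpdateData();
    char line[48];
    snprintf(line, sizeof(line), "%.1f C   %.0f %%RH",
             M5.SHT30.GetTemperature(), M5.SHT30.GetRelHumidity());

    canvas.fillCanvas(0);                       // clear to white
    canvas.drawString(line, 40, 440);
    canvas.pushCanvas(0, 0, UPDATE_MODE_GC16);  // full-quality e-paper refresh

    delay(60000);                               // e-paper hates frequent refreshes
}
```

From there, swapping the sensor readout for a forecast pulled over WiFi is exactly the sort of thing [Danko]’s code demonstrates.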


Pumpkin OS running on x86

Palm OS: Reincarnate

[pmig96] loves PalmOS and has set about the arduous task of reimplementing PalmOS from scratch, dubbing it Pumpkin OS. Pumpkin OS runs on x86 and ARM at native speed, as it is not an emulator: system calls are trapped and intercepted by Pumpkin OS itself. Because it doesn’t emulate, Palm apps currently need to be recompiled for x86, though he hopes to support apps that use ARMlets soon. Since there are over 800 different system traps in PalmOS, he hasn’t implemented them all yet.

Generally speaking, his saving grace is that 80% of the apps only use 20% of the API. His starting point was a script that took the headers from the PalmOS SDK and converted each function into a stub that prints a debug message noting it isn’t implemented yet and hands back a default return value. Additionally, [pmig96] is taking away some of the restrictions of the old PalmOS, such as being limited to only one running app at a time.
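
To get a feel for what those auto-generated stubs might look like, here’s a hypothetical example in the spirit of that script. The API name mimics a PalmOS-style call, but the stub body and surrounding scaffolding are our illustration, not [pmig96]’s actual generated code.

```cpp
// Hypothetical example of an auto-generated Pumpkin OS-style stub -- an
// illustration of the approach described above, not [pmig96]'s actual output.
#include <cstdio>
#include <cstdint>

typedef uint16_t UInt16;   // PalmOS-style typedefs for familiarity

// Generated stub: logs that the trap isn't implemented yet and returns a
// harmless default, so apps that touch rarely-used corners of the API still run.
UInt16 DmNumDatabases(UInt16 cardNo) {
    std::fprintf(stderr, "STUB: DmNumDatabases(cardNo=%u) not implemented\n", cardNo);
    return 0;  // default return value until the real behaviour is filled in
}

int main() {
    // An app wandering into an unimplemented corner of the API just gets the default.
    UInt16 n = DmNumDatabases(0);
    std::printf("databases on card 0: %u\n", n);
    return 0;
}
```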

As if an x86 desktop version wasn’t enough, [pmig96] has also compiled Pumpkin OS for a Raspberry Pi 4 with a ubiquitous 3.5″ 320×480 SPI TFT touch screen. Linux maps the TFT screen to a frame buffer (/dev/fb0 or /dev/fb1), and he added a quick optimization to redraw only the areas that have changed, keeping the SPI writes small and the frame rate usable.
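
The idea is simple enough to sketch: map the framebuffer device into memory, keep track of the rectangle that actually changed, and copy only those rows. The rough illustration below is not Pumpkin OS’s code, and it assumes a 16 bpp fbtft-style framebuffer exposed as /dev/fb1.

```cpp
// Rough sketch of a dirty-rectangle framebuffer update -- an illustration of the
// optimization described above, not code from Pumpkin OS.
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>

int main() {
    int fd = open("/dev/fb1", O_RDWR);
    if (fd < 0) return 1;

    fb_var_screeninfo vinfo{};
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);              // width, height, bpp
    const size_t stride = vinfo.xres * vinfo.bits_per_pixel / 8;
    const size_t size   = stride * vinfo.yres;

    auto* fb = static_cast<uint8_t*>(
        mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (fb == MAP_FAILED) return 1;

    // Back buffer the UI draws into; only the changed band gets pushed out.
    uint8_t* back = new uint8_t[size]();

    // ... UI code draws into 'back' and records the dirty rows ...
    unsigned dirtyTop = 100, dirtyBottom = 140;          // example: a 40-row update

    // Copy just the dirty rows, so the driver has far less data to clock out
    // over SPI than a full 320x480 screen's worth.
    std::memcpy(fb + dirtyTop * stride,
                back + dirtyTop * stride,
                (dirtyBottom - dirtyTop) * stride);

    munmap(fb, size);
    close(fd);
    delete[] back;
    return 0;
}
```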

[pmig96] isn’t the only one trying to breathe some new life into PalmOS, and we hope to see more progress on Pumpkin OS in the future.

The Linux X86 Journey To Main()

Have you ever had a program crash before your main function executes? It is rare, but it can happen. When it does, you need to understand what happens behind the scenes between the time the operating system starts your program and your first line of code in main executes. Luckily [Patrick Horgan] has a tutorial about the subject that’s very detailed. It doesn’t cover statically linked libraries but, as he points out, if you understand what he does cover, that’s easy to figure out on your own.

The operating system, it turns out, knows nothing about main. It does, however, know about a symbol called _start, which your runtime library provides. That code does some stack manipulation and eventually calls __libc_start_main, which is also provided by the library.
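
You can see a bit of that early machinery for yourself with a tiny, compiler-specific sketch. Assuming GCC or Clang on Linux, constructor-attributed functions land in .init_array, which the startup code runs after _start hands control to __libc_start_main and before main is ever reached.

```cpp
// Tiny demonstration that code really does run before main().
// Assumes GCC or Clang on Linux.
#include <cstdio>

// Constructor-attributed functions are collected into .init_array, which the
// C runtime's startup code walks before it finally calls main().
__attribute__((constructor))
static void before_main() {
    std::puts("init_array: running before main()");
}

int main() {
    // Roughly the glibc entry chain that got us here (simplified):
    //   _start -> __libc_start_main(main, argc, argv, init, fini, rtld_fini, stack_end)
    std::puts("main: finally here");
    return 0;
}
```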

In Search Of The First Comment

Are you writing your code for humans or computers? I wasn’t there, but my guess is that at the dawn of computing, people thought that they were writing for the machines. After all, they were writing in machine language, and whatever bits they flipped into the electronic brain stayed in the electronic brain, unless punched out on paper tape. And the commands made the machine do things, not other people. Code was written strictly for computers.

Modern programming practice, on the other hand, is aimed firmly at people. Variable and function names are chosen to be long and to describe what they contain or do. “Readability” of code is a prized attribute. Indeed, sometimes the fact that it does the right thing at all almost seems to be an afterthought. (I kid!)

Somewhere along this path, there was an important evolutionary step, like the first fish using its flippers to walk on land. Comments were integrated into programming languages, formalizing the notes that coders of old surely wrote by hand in the margins of their paper first-drafts before keying them in. So I went looking for the missing link: the first computer language, and ideally the first program, with comments. I came up empty handed.

Or rather full handed. Every computer language that I could find had comments from the beginning. FORTRAN had comments, marked by a “C” as the first character in a line. APL had comments, marked by the bizarro rune ⍝. Even the custom language written for the Apollo 11 guidance computers had comments — the now-commonplace “#”. I couldn’t find an early programming language without comments.

My guess is that the first language with a comment must have been an assembly language, because I don’t know of any machines with a native comment instruction. (How cool and frivolous would that be?)

Assemblers simply translate mnemonic names to their machine instruction counterparts, but this gives them the important freedom to ignore anything starting with, traditionally, a semicolon. Even though you’re just transferring the contents of register X to the memory location pointed to in register Y, you can write that you’re “storing the height above ground (meters)” in the comments.

The crucial evolutionary step, though, is saving the comments along with the code. Simply ignoring everything that comes after the semicolon and throwing it away doesn’t count. Does anyone know? What was the first code to include comments as part of the code itself, and not simply as marginalia?

ua-parser-js compromised

Supply Chain Attack: NPM Library Used By Facebook And Others Was Compromised

Here at Hackaday we love the good kinds of hacks, but now and then we need to bring up a less good kind. Today it was learned that the NPM package ua-parser-js was compromised, and any software using it as a library may have fallen victim to a supply chain attack. What is ua-parser-js and why does any of this matter?

In the early days of computing, programmers would write every bit of code they used themselves. Larger teams would work together to develop larger code bases, but it was all done in-house. These days software developers don’t write every piece of code. Instead they use libraries of code supplied by others.

For better or worse, repositories of code are now available to do even the smallest of functions so that a developer doesn’t have to write the function from scratch. One such registry is npm (Node Package Manager), which organizes a collection of contributed libraries written in JavaScript. A developer need only use npm to include a library in their code, and all of the functions of that code are available to them. One such example is ua-parser-js, a User Agent parser written in JavaScript. This library makes it easy for developers to find out the type of device and software being used to access a web page.

On October 22, 2021, the developer of ua-parser-js found that attackers had uploaded a version of his software that contained malware for both Linux and Windows computers. The malicious versions were found to steal data (including passwords and Chrome cookies, perhaps much more) from computers or run a crypto-currency miner. This prompted GitHub to issue a Critical Severity Security Advisory.

What makes this compromise so dangerous is that ua-parser-js is considered to be part of a supply chain, and has been adopted even by Facebook for use in some of its customer facing software. The developer of ua-parser-js has already secured his GitHub account and uploaded new versions of the package that are clean. If you have any software that uses this library, make sure you’ve got the latest version!

Of course this is by no means a unique occurrence. Last month Maya Posch dug into growing issues that come from some flaws of trust in package management systems. The art for that article is a house of cards, an apt metaphor for a system that is only as stable as the security of each and every package being built upon.