I Installed Gentoo So You Don’t Havtoo

A popular expression on Linux forums nowadays is that someone “uses Arch btw”, signifying that they have the technical chops to install and use Arch Linux, a distribution designed to be cutting edge but with a reputation for being for advanced users only. Whether this meme was originally posted seriously or started as a joke at the expense of some of the more socially unaware Linux users is up for debate. Either way, while it is true that Arch can be harder to install and configure than something like Debian or Fedora, thanks to excellent documentation and modern (but optional) install tools it’s no longer that much harder to run than either of those popular distributions.

For my money, the true mark of a Linux power user is the ability to install and configure Gentoo Linux and use it as a daily driver or as a way to breathe life into aging hardware. Short of a roll-your-own project like Linux From Scratch, Gentoo requires far more configuration than any mainstream distribution, and it has been my own technical white whale for nearly two decades. I was finally able to harpoon this beast recently, and I hope my story inspires some to try Gentoo while saving others the hassle.

A Long Process, in More Ways Than One

My first experience with Gentoo was in college at Clemson University in the late ’00s. The computing department there offered an official dual-boot image for any university-supported laptop at the time thanks to major effort from the Clemson Linux User Group, although the image contained the much-more-user-friendly Ubuntu alongside Windows. CLUG was largely responsible for helping me realize that I had options outside of Windows, and eventually I moved completely away from it and began using my own Linux-only installation. Being involved in a Linux community for the first time had me excited to learn about Linux beyond the confines of Ubuntu, though, and I quickly became the type of person featured in this relevant XKCD. So I fired up an old Pentium 4 Dell desktop that I had and attempted my first Gentoo installation.

For the uninitiated, the main thing that separates Gentoo from most other distributions is that it is source-based, meaning that users generally must compile the source code for all the software they want to use on their own machines rather than installing pre-compiled binaries from a repository. So, for a Gentoo installation, everything from the bootloader to the kernel to the desktop to the browser needs to be compiled when it is installed. This can take an extraordinary amount of time, especially on underpowered machines, although the ability to customize compile options means that software can be optimized for the specific computer it runs on, letting users claim that time back whenever the software is actually used. At least, that’s the theory.
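
Most of that per-machine tuning lives in Portage’s /etc/portage/make.conf. Here is a minimal sketch of the idea; the specific flags, job count, and USE selections are illustrative assumptions, not a recommended configuration:

# /etc/portage/make.conf (sketch)
# Tune generated code for this exact CPU rather than a generic baseline
COMMON_FLAGS="-march=native -O2 -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
# Parallel compile jobs, typically matched to the number of CPU cores
MAKEOPTS="-j4"
# USE flags globally enable or disable optional features at compile time
USE="X wayland -gnome -kde"

With something like this in place, every package installed via emerge, along with all of its dependencies, gets built under those flags.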

Reusing An Old Android Phone For GPIO With External USB Devices

Each year millions of old smartphones are either tossed as e-waste or condemned to lie unloved in dusty drawers, despite the hardware in them usually still being perfectly fine. Reusing these little computers for another purpose once the phone’s manufacturer drops support is made hard by a range of hardware and software (driver) issues. One possible way to do so is suggested by [Doctor Volt] in a video where a Samsung Galaxy S4 is combined with a USB-connected FT232R board to add external GPIO.

The idea is pretty simple: the existing Android OS recognizes the serial adapter, and the module can then be used from within the standard Android development environment. In this demonstrator it merely blinks some LEDs and reacts to inputs, but it shows how to reuse one of these phones in a non-destructive manner. Even better, the phone’s existing sensors and cameras can still be used as normal this way, which opens up a whole range of (cheap) DIY projects that can be programmed either in Java/Kotlin or in C or C++ via the Native Development Kit.
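
The video’s demonstrator is an Android app, but the underlying trick works anywhere the FT232R enumerates as a serial port. As a rough desktop illustration in Python using pyserial (the port name and the LED-on-RTS wiring are assumptions here; on the phone, the equivalent calls would go through an Android serial library):

# Sketch: use an FT232R's modem-control lines as simple GPIO via pyserial.
# Assumes the adapter appears as /dev/ttyUSB0 and an LED hangs off the RTS pin.
import time
import serial

port = serial.Serial("/dev/ttyUSB0", 9600)

for _ in range(5):
    port.rts = True                # assert RTS (the FT232R pin itself is active-low)
    time.sleep(0.5)
    port.rts = False               # de-assert RTS: the LED toggles back
    time.sleep(0.5)
    print("CTS input:", port.cts)  # react to an external input wired to CTS

port.close()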

The only wrinkle is that charging is not possible while the phone is connected like this. That’s easy to solve on the S4, since it has a removable battery: an external power input was wired in with a dummy battery-sized piece of perfboard. On modern phones without removable batteries, simultaneous use of a USB/audio dongle and charging via the USB-C connector is claimed to be possible, but that is something to check beforehand.

Three 3D-printed, spring-loaded contraptions sit on a wooden shield, with arrow shafts connected to their ends and a piece of monofilament fishing line extending away from them through a small eyelet at the edge of the shield.

How To Shoot Actors With Arrows Sans CGI

Today, movie effects are mostly done in CGI, especially if they’re of the death-defying type. [Tyler Bell] shows us how they shot actors with arrows before CGI.

Almost every medieval movie has someone getting shot with an arrow, but how do you do that non-destructively? [Bell] shows us the two primary methods that were used: the pop-up rig and steel-pronged arrows. The pop-up rig is a spring-loaded device with one end of an arrow attached, which pops up when a mechanism is triggered. [Bell] 3D printed his own version of the mechanism and shows us how it can be used to great effect on shots from the side or rear of the victim.

But what about straight-on shots where the rig would be blatantly obvious? That’s when you get to actually shoot the actor (or their stunt double, anyway). To do this safely, actors would wear wooden body armor under their costumes, and arrows with two small prongs would be shot along a wire into the desired impact site. We appreciate [Bell] using a mannequin for testing before letting his brother shoot him with an arrow. That’s definitely the next level above a trust fall.

At the end we even get a look at using air cannons to launch arrow storms, which is particularly epic. Looking for more movie magic? How about the effects from King Kong or Flight of the Navigator?

Thanks to [Xerxes3rd] on Discord for the tip!

Building A Discrete 14-Bit String DAC

The discrete 14-bit DAC under test. (Credit: Sine Lab, YouTube)

How easy is it to build your own digital-to-analog converter (DAC)? Although you can readily purchase a wide variety of DACs these days, building your own can be very instructive, as [Sine Lab] on YouTube explores in a recent video with the construction of a discrete 14-bit DAC. First there are the different architectures to pick from, ranging from R-2R (resistor ladder) to delta-sigma designs, each with its own level of complexity and its own trade-offs in response time, accuracy, and other characteristics.

The architecture that [Sine Lab] picked was a string DAC with an interpolator. The string-type DAC has the advantage of an inherently monotonic output voltage and better switching-induced glitch performance than the R-2R DAC. At its core it still uses resistors and switches (transistors), with the switches selecting the tap on the resistor string that corresponds to the digital input value. This makes adding more bits to the DAC as easy as adding more of these same resistors and switches; the only question is how many. In the case of a string DAC that would be 2^N of them, which implies that you want to use multiple strings, as in the above graphic.

Scaling this up to 16 bits would thus entail 65,536 resistors and switches in the naive approach, while splitting it into two 8-bit strings requires just 513 switches, 512 resistors, and two buffers. In the actual design in the video, both MOSFETs and 74HCT4051 multiplexers were used, which also necessitated creating two buses per string to help with the input decoding. This is the part where things get serious in the video, but the reasoning for each change and addition is explained clearly as the full design, a 6-bit string DAC plus an interpolator that provides the remaining bits of the 14-bit total, is designed and built.
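
To make the segmentation arithmetic concrete, here is a minimal idealized model of a segmented string DAC in Python. The 6-bit coarse / 8-bit fine split and the 5 V reference are illustrative assumptions, and only the ideal tap voltages are modeled, ignoring switch resistance and buffer offsets:

# Ideal model of a segmented string DAC: a coarse resistor string picks a
# voltage segment, then an interpolator subdivides that segment linearly.
VREF = 5.0          # assumed reference voltage
COARSE_BITS = 6     # taps on the main resistor string
FINE_BITS = 8       # interpolation steps within one segment

def string_dac(code: int) -> float:
    """Ideal output voltage for a (COARSE_BITS + FINE_BITS)-bit input code."""
    coarse = code >> FINE_BITS             # which string tap is selected
    fine = code & ((1 << FINE_BITS) - 1)   # position inside that segment
    segment = VREF / (1 << COARSE_BITS)    # voltage span of one segment
    return coarse * segment + fine * segment / (1 << FINE_BITS)

print(string_dac(0))               # 0.0 V at code zero
print(string_dac(1 << 13))         # 2.5 V at mid-scale
print(string_dac((1 << 14) - 1))   # one LSB (~0.3 mV) below full scale

Note that the output is a monotonically increasing function of the input code by construction, which is exactly the property that makes string DACs attractive compared to a mismatched R-2R ladder.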

One big issue with discrete DACs is sourcing well-matched MOSFETs and similar components, which is where LSI DACs are generally significantly more precise. Even so, this discrete design came remarkably close to a commercial offering, which is pretty impressive.

GNSS Reception With Clone SDR Board

We love seeing the incredible work many RF enthusiasts manage to pull off — they make it look so easy! Though RF can be tricky, it’s not quite the voodoo black art that it’s often made out to be. Many radio protocols are relatively simple and with tools like gnuradio and PocketSDR you can quickly put together a small system to receive and decode just about anything.

[Jean-Michel] wanted to learn more about GNSS and USB communication. Whenever you start a project like this, it’s a good idea to look around at existing projects for designs or code you can reuse; in this case, the main RF front-end board is taken from the PocketSDR project. This is paired with a Cypress FX2 development board, and he re-wrote almost all of the PocketSDR code so that it compiles with sdcc instead of the proprietary Keil compiler. Testing involved slowly porting the code while learning how to receive data over USB with Python 3, and using other equipment to simulate antenna diversity (using multiple antennas to increase the signal-to-noise ratio).
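
On the USB side, a few lines of pyusb are enough to pull raw data from an FX2 doing bulk transfers. A minimal sketch, assuming the FX2’s stock VID:PID of 0x04B4:0x8613 and the commonly used EP6-IN bulk endpoint (0x86); the actual firmware may enumerate differently:

# Sketch: read bulk data from a Cypress FX2 board with pyusb.
# VID/PID and endpoint address are assumptions; adjust for the real firmware.
import usb.core

dev = usb.core.find(idVendor=0x04B4, idProduct=0x8613)
if dev is None:
    raise RuntimeError("FX2 board not found on the bus")

dev.set_configuration()  # select the first (and usually only) configuration

total = 0
for _ in range(100):
    chunk = dev.read(0x86, 512, timeout=1000)  # one 512-byte bulk packet
    total += len(chunk)
print(f"received {total} bytes of raw sample data")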

Hackaday Links: November 3, 2024

“It was the best of times, it was the blurst of times?” Perhaps not anymore, if this Ig Nobel-worthy analysis of the infinite monkey theorem is to be believed. For the uninitiated, the idea is that if you had an infinite number of monkeys randomly typing on an infinite number of keyboards, eventually the complete works of Shakespeare or some other famous writer would appear. It’s always been meant to be taken figuratively as a demonstration of the power of time and randomness, but some people just can’t leave well enough alone. The research, which we hope was undertaken with tongue firmly planted in cheek, reveals that it would take longer than the amount of time left before the heat death of the universe for either a single monkey or even all 200,000 chimpanzees in the world today to type the 884,647 words of Shakespeare’s complete works in the proper order.
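
For a feel of the scale involved, a back-of-envelope sketch in Python; the character count, keyboard size, and typing rate are all rough assumptions:

# Back-of-envelope: expected time for random typing to produce the complete works.
import math

chars = 5_000_000   # assumed character count of Shakespeare's complete works
keys = 30           # assumed simplified keyboard
monkeys = 200_000   # the world's chimpanzee population, per the paper
rate = 1.0          # assumed keystrokes per second per monkey

# Expected attempts scale as keys**chars, so work in log10 to avoid overflow.
log10_seconds = chars * math.log10(keys) - math.log10(monkeys * rate)
print(f"on the order of 10^{log10_seconds:,.0f} seconds")
# Prints roughly 10^7,385,601 seconds, dwarfing heat-death estimates of ~10^100 years.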

All You Need For Artificial Intelligence Is A Commodore 64

Artificial intelligence has been around us for a long time, with [Timothy J. O’Malley]’s 1985 book on AI projects for the Commodore 64 being one example of this. With AI defined as the theory and development of systems that can perform tasks normally requiring human intelligence (e.g. visual perception, speech recognition, decision-making), this book is a good introduction to the many ways that computer systems have for decades been able to learn, make decisions, and in general become more human-like, even if there’s no electronic personality behind the actions.

In the book’s first chapter, [Timothy] isn’t afraid to toss in some opinions about the true nature of intelligence and thinking. Starting from the concept that intelligence is based on storing information and deriving meaning from the connections between stored pieces of information, the idea arises of a basic AI like one would use for a computer opponent in a game. A number of ways of implementing such an AI are explored in the first and subsequent chapters, using Towers of Hanoi, chess, Nim, and other games.
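
The book’s implementations are presumably in Commodore BASIC; as a modern-language illustration of this kind of game opponent, here is the well-known perfect-play strategy for Nim, one of the games covered:

# Sketch: a perfect-play Nim opponent. Strategy: always move so that the XOR
# (nim-sum) of all heap sizes becomes zero; if it already is zero, stall.
from functools import reduce
from operator import xor

def nim_move(heaps: list[int]) -> tuple[int, int]:
    """Return (heap index, new heap size) for the best available move."""
    nim_sum = reduce(xor, heaps)
    if nim_sum == 0:
        # Losing position against perfect play: take one from the largest heap.
        i = max(range(len(heaps)), key=lambda k: heaps[k])
        return i, heaps[i] - 1
    for i, h in enumerate(heaps):
        target = h ^ nim_sum
        if target < h:
            return i, target  # this move zeroes the nim-sum
    raise AssertionError("unreachable for non-empty heaps")

print(nim_move([3, 4, 5]))  # (0, 1): 3^4^5 = 2, and 3^2 = 1, so shrink heap 0 to 1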

After this we look at natural language processing – referencing ELIZA as an example – followed by heuristics, pattern recognition, and AI for robotics. Although much of this may seem outdated in the modern age of LLMs and neural networks, it’s important to realize that much of what we consider ‘bleeding edge’ today has its roots in AI research performed in the 1950s and 1960s. As [Timothy] rightfully states in the final chapter, there is no real limit to how far you can push this type of AI as long as you have more hardware and storage to throw at the problem. This is how we ended up with datacenters full of GPU-equipped systems churning through vector-space calculations for the sake of today’s LLM and diffusion-model take on ‘AI’.
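
ELIZA’s core trick is small enough to show in a few lines: match a keyword pattern, reflect the pronouns, and drop the result into a canned template. A toy sketch, with rules invented for illustration rather than taken from Weizenbaum’s original script:

# Sketch: ELIZA-style pattern matching with pronoun reflection.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
]
REFLECT = {"my": "your", "me": "you", "i": "you", "am": "are"}

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Swap first-person words for second-person before echoing back.
            words = [REFLECT.get(w.lower(), w) for w in match.group(1).split()]
            return template.format(" ".join(words))
    return "Tell me more."

print(respond("I am worried about my exams"))
# -> How long have you been worried about your exams?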

Using a Commodore 64 to demonstrate the (lack of) validity of claims is not a new idea; recently a group of researchers used one of these breadbin marvels to run an Ising model with a tensor network, outperforming IBM’s quantum processor. As they say, just because something is new and shiny doesn’t necessarily mean that it is actually better.