Open Source Needs A New Mission: Protecting Users

[Bruce Perens] isn’t very happy with the current state of Free and Open Source Software (FOSS), and an article by [Rupert Goodwins] expounds on this to explain why Open Source needs a new mission in 2024 and beyond. He suggests shifting the focus from software to data.

The internet as we know it and all the services it runs are built on FOSS architecture and infrastructure. None of the big tech companies would be where they are without FOSS, and certainly none could do without it. But FOSS has its share of what can be thought of as loopholes, and in the years during which the internet has exploded in growth and use, large tech companies have found and exploited all of them. A product doesn’t need to disclose a single line of source code if it’s never actually distributed. And Red Hat (which [Perens] asserts is really just IBM) has simply stopped releasing public distributions of CentOS.

In addition, the inherent weak points of FOSS remain largely the same. These include the funding of distributions, a lack of user-focused design, and the fact that users frankly don’t understand what FOSS offers them, why it’s important, or even that it exists at all.

A change is needed, and it’s suggested that the time has come to shift the focus from software to data: expand the inherent transparency of FOSS to ensure that people have control over, and visibility of, their own data.

While the ideals of FOSS remain relevant, this isn’t the first time the changing tech landscape has raised questions about how things are done, like the intersection of bug bounties and FOSS.

What do you think? Let us know in the comments.

Adding AI To NPCs Is Easy, Doing It Well Is Hard

Adding natural language interfaces to software is easier than ever, and that led [creikey] to prototype a game that hinges on communicating with NPCs. The prototype went through multiple iterations, during which he mainly discovered things that did not work well. Ultimately, he settled on a western-themed game called Dante’s Cowboy, which he hopes to release as an experiment. He begins talking about the game around the 4:43 mark in the video, which directly precedes a recording of a presentation he gave as an indie developer.

Games typically revolve around the player manipulating entities in an environment in order to make things happen. This interaction drives engagement and interesting decisions. But while adding natural language AI to NPCs makes them easy to talk with, talking by itself is a shallow interaction. Convincing NPCs to do things? That’s complex and far more difficult to implement. [creikey] realized the limitations large language models (LLMs) had and worked to overcome them to make a unique game experience.
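
One common pattern for bridging that gap (sketched below in Python; this is not [creikey]’s actual code) is to ask the model for structured output, then validate any proposed action against the game’s own list before it takes effect. The action names and the chat() helper here are purely illustrative:

import json

ALLOWED_ACTIONS = {"follow_player", "give_item", "refuse", "say_only"}

def npc_respond(chat, npc_state, player_line):
    # Ask for structured output so dialogue can drive real game actions
    prompt = (
        "You are " + npc_state["name"] + ", a cowboy NPC. Reply as JSON "
        '{"say": "...", "action": "..."} where action is one of '
        + ", ".join(sorted(ALLOWED_ACTIONS)) + ".\n"
        "Player says: " + player_line
    )
    try:
        reply = json.loads(chat(prompt))  # chat() wraps whatever LLM API is in use
    except (ValueError, TypeError):       # models drift; never trust raw output
        return "...", "say_only"
    action = reply.get("action", "say_only")
    if action not in ALLOWED_ACTIONS:     # reject anything the game can't actually do
        action = "say_only"
    return reply.get("say", "..."), action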

The challenges boil down to figuring out how to drive meaningful interaction, aligning AI behavior with the gameplay context, and managing API costs. In his words, “it’s been a learning experience to figure out where [natural language AI] even belongs in a game, if it belongs at all.”

We’ve previously seen ChatGPT used to grant NPCs the ability to communicate naturally which is a fascinating tech demo, but gameplay-wise can boil down to being a complicated alternative to pressing a button. As [creikey] discovered, adding this technology into games in a way that feels meaningful takes a new kind of work.

Continue reading “Adding AI To NPCs Is Easy, Doing It Well Is Hard”

Gyro-Controlled Labyrinth Game Outputs To VGA

This gesture-controlled labyrinth game, built around two Raspberry Pi Pico units, does a great job of demonstrating how it can sometimes take a lot of work to make something look simple.

To play, one tilts an MPU6050 inertial measurement unit (IMU) attached to one Pico to guide a square through a 2D maze, with the player working through multiple levels of difficulty. A second Pico takes care of displaying the game state on a VGA monitor, and together they work wirelessly to deliver a coherent experience with the right “feel”. This includes low latency, simulating friction appropriately, and more.

Taking a stream of raw sensor readings and turning them into control instructions over UDP in a way that feels intuitive while at the same time generating a VGA display signal has a lot of moving parts, software-wise. The project write-up has a considerable amount of detail on the architecture of the system, and the source code is available on GitHub for those who want a closer look.
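
To make the sensor-reading half concrete, here’s a minimal MicroPython sketch, assuming a Pico W that has already joined WiFi, an MPU6050 on I2C, and a made-up receiver address; the project’s actual code is on GitHub and differs in the details:

import socket
import struct
from machine import I2C, Pin

MPU_ADDR = 0x68                          # default MPU6050 I2C address
RECEIVER_ADDR = ("192.168.1.50", 5005)   # hypothetical address of the display Pico

i2c = I2C(0, sda=Pin(0), scl=Pin(1))
i2c.writeto_mem(MPU_ADDR, 0x6B, b"\x00")  # wake the IMU (PWR_MGMT_1 register)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def read_accel():
    # Accelerometer X/Y/Z are big-endian int16s starting at register 0x3B
    raw = i2c.readfrom_mem(MPU_ADDR, 0x3B, 6)
    return struct.unpack(">hhh", raw)

while True:
    ax, ay, az = read_accel()
    # Send raw tilt; the display Pico smooths it and applies the "friction"
    sock.sendto(struct.pack(">hh", ax, ay), RECEIVER_ADDR)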

We’ve seen gesture controls interfaced to physical marble mazes before, but two Raspberry Pi Picos doing it wirelessly with a VGA monitor for feedback is pretty neat. Watch it in action in the video, embedded just under the page break.

Continue reading “Gyro-Controlled Labyrinth Game Outputs To VGA”

Air Hockey Table Embraces DOOM, Retro Gaming

[Chris Downing] recently finished up a major project that spanned some two years and used nearly every skill he possessed. The result? A smart air hockey table with retro-gaming roots. Does it play DOOM? It sure (kind of) does!

Two of the most striking features are the scoreboard (with LCD screen and sound) and the play surface, which is densely populated with RGB LED lighting and capable of some pretty neat tricks. Together they deliver a few different modes of play, including a DOOM mode.

The first play mode is straight air hockey with automated score tracking and the usual horns and buzzers celebrating goals. The LED array within the table lights up to create the appearance and patterns of a typical hockey rink.

DOOM hockey mode casts one player as Demons and the other as the Doom Slayer, and the LED array comes to life to create a play surface of flickering flames. Screams indicate goals (either Demon screams or Slayer screams, depending on who scores!).

Since the whole thing is driven by a Raspberry Pi, the table also gets a bit of gaming flexibility with an emulation mode, which allows playing emulated retro games on the scoreboard screen. As a super neat feature, the screen display is mirrored on the tabletop’s LED array. [Chris] admits the effect is imperfect, but to us it looks at least as legible as DOOM on 7-segment displays.
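
Conceptually, that mirroring boils down to downsampling each frame to the LED grid’s resolution and remapping pixels to the strip’s wiring order. Here’s a hypothetical Python sketch; the grid size and serpentine layout are assumptions, not details from the build:

from PIL import Image

GRID_W, GRID_H = 32, 16   # assumed LED matrix dimensions

def frame_to_leds(frame):
    # Nearest-neighbor downsampling keeps blocky pixels legible on the LEDs
    small = frame.convert("RGB").resize((GRID_W, GRID_H), Image.NEAREST)
    pixels = []
    for y in range(GRID_H):
        row = [small.getpixel((x, y)) for x in range(GRID_W)]
        if y % 2:             # serpentine wiring: odd rows run right-to-left
            row.reverse()
        pixels.extend(row)
    return pixels             # feed this list to the WS2812 driver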

This project is a great example of how complex things can get when one combines so many different types of materials and fabrication methods into a single whole. The blog post has a lot of great photos and details, but check out the video (embedded below) for a demonstration of everything in action.

Continue reading “Air Hockey Table Embraces DOOM, Retro Gaming”

Quivering Facehugger Is All Geared Up

[Jason Winfield] shared with us a video describing a project with a lot of personality: a mounted, lit, and quivering Alien facehugger triggered by motion. The end result is a delightful jump scare, and the Raspberry Pi that controls everything also captures people’s reactions.

It starts with a little twitch when motion is sensed, then launches into a perfectly unsettling quiver combined with light and sound. We particularly like the wave-like effect from the LED lighting, which calls to mind illumination from rotating hazard beacons.

The unit looks like a mounted and tastefully-lit static model, but is actually primed to sense motion.

One challenge was how to efficiently move the legs. Rather than use a motor for each limb, [Jason] settled on a single motor driving a rotating cam arrangement. You can see the results for yourself in the video below, but getting there was not simple.

The surplus motor [Jason] chose is thin and high-torque, but runs extremely fast. Since he wanted the legs to quiver creepily rather than vibrate, something needed to be done to mitigate this.

The solution is a planetary gear assembly that drives a rotating ring and cam arrangement coupled to the facehugger’s legs. There’s only one motor, but the effect is that each leg’s motion is independent of the others. The whole assembly is quite slim, and everything is contained within the frame.
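
For a back-of-envelope feel for why a planetary stage tames a fast motor: with the sun gear driven, the ring fixed, and the carrier as output, the reduction is 1 + ring teeth / sun teeth. The numbers below are assumptions for illustration, not figures from the build:

SUN_TEETH = 12       # assumed; no tooth counts are given
RING_TEETH = 48      # assumed
MOTOR_RPM = 10000    # assumed figure for the fast surplus motor

reduction = 1 + RING_TEETH / SUN_TEETH   # sun in, ring fixed, carrier out: 5:1
print(MOTOR_RPM / reduction)             # carrier spins at ~2000 RPM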

Facehuggers and gear assemblies are not exactly an everyday combination, but believe it or not this isn’t the first time the two have joined forces. Check out the Aliens-themed cuckoo clock, complete with crew member torso and emerging chestburster!

Continue reading “Quivering Facehugger Is All Geared Up”

Sound-Reactive Light Saber Flips Allegiance Via Vowel Sounds

Students [Berk Gokmen] and [Justin Green] developed an RP2040-based LED-illuminated lightsaber as a final project with a bit of a twist. It has two unusual sound-reactive modes: disco mode, and vowel detection mode.

Switching allegiances (or saber color, at least) is only a sound away.

Disco mode alters the saber’s color and brightness in response to the frequencies picked up by the on-board microphone, making for a dynamic light show that responds particularly well to music.

The second mode, vowel detection, changes the lightsaber’s color depending on spoken sounds: the “ee” sound makes the saber red, and the “ah” sound turns it blue. This requires a lot of processing and filtering, and in the end it works, though calibration is quite speaker-dependent.

The sound functionality centers on the FFT (Fast Fourier Transform), which is fundamental to processing signals like audio in a meaningful way, and is a method accessible even to embedded devices like microcontrollers with ADCs.
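
As a rough illustration of the approach (not the students’ code), one can compare FFT energy in two formant bands; the band edges below are textbook-ish values that would need the per-speaker calibration mentioned above:

import numpy as np

SAMPLE_RATE = 8000   # Hz, an assumed ADC sample rate

def band_energy(spectrum, freqs, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(spectrum[mask] ** 2))

def classify_vowel(samples):
    windowed = samples * np.hanning(len(samples))       # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / SAMPLE_RATE)
    # "ee" has a high second formant (~2000-3000 Hz); "ah" concentrates
    # its energy lower (~700-1100 Hz). Rough illustrative bands only.
    ee = band_energy(spectrum, freqs, 2000, 3000)
    ah = band_energy(spectrum, freqs, 700, 1100)
    return "red (ee)" if ee > ah else "blue (ah)"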

The lightsaber is battery-powered and wireless. There are loads of details about the finer points of the design (including challenges and tradeoffs) on the project page, and the source code is available on GitHub. A video demonstration and walkthrough is embedded below.

Continue reading “Sound-Reactive Light Saber Flips Allegiance Via Vowel Sounds”

Using Local AI On The Command Line To Rename Images (And More)

We all have a folder full of images whose filenames resemble line noise. How about renaming those images with the help of a local LLM (large language model) executable on the command line? All that and more is showcased on [Justine Tunney]’s bash one-liners for LLMs, a showcase aimed at giving folks ideas and guidance on using a local (and private) LLM to do actual, useful work.

This builds on the recent llamafile project, which turns LLMs into single-file executables. Not only does that make them more portable and easier to distribute, but the executables are perfectly capable of being called from the command line and writing to standard output like any other UNIX tool. It’s also simpler to version control the embedded LLM weights (and therefore their behavior) when everything is part of the same file.

One such tool (the multi-modal LLaVA) is capable of interpreting image content. As an example, we can point it to a local image of the Jolly Wrencher logo using the following command:

llava-v1.5-7b-q4-main.llamafile --image logo.jpg --temp 0 -e -p '### User: The image has...\n### Assistant:'

That command produces the following response:

The image has a black background with a white skull and crossbones symbol.

With a different prompt (“What do you see?” instead of “The image has…”) the LLM even picks out the wrenches. Either way, one can already see that the right pieces exist to do some useful work.

Check out [Justine]’s rename-pictures.sh script, which cleverly evaluates image filenames. If an image’s given filename already looks like readable English (also a job for a local LLM) the image is left alone. Otherwise, the picture is fed to an LLM whose output guides the generation of a new short and descriptive English filename in lowercase, with underscores for spaces.
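
Here’s a rough Python rendition of that idea; the real rename-pictures.sh is a shell script, and the prompt wording and paths below are made up, though the llamafile flags match the example above:

import re
import subprocess
from pathlib import Path

LLAVA = "./llava-v1.5-7b-q4-main.llamafile"   # the executable from the example above

def describe(image):
    # Same flags as the one-liner: deterministic output, escaped prompt
    prompt = "### User: Describe the image in a few words.\n### Assistant:"
    result = subprocess.run(
        [LLAVA, "--image", str(image), "--temp", "0", "-e", "-p", prompt],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def rename(image):
    # Keep a handful of lowercase words, joined with underscores
    words = re.findall(r"[a-z]+", describe(image).lower())[:6] or ["unnamed"]
    image.rename(image.with_name("_".join(words) + image.suffix.lower()))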

What about the fact that LLM output isn’t entirely predictable? That’s easy to deal with. [Justine] suggests always calling these tools with the --temp 0 parameter. Setting the temperature to zero makes the model deterministic, ensuring that the same input always yields the same output.

There are more neat examples on the Bash One-Liners for LLMs page that demonstrate different ways to use a local LLM that lives in a single-file executable, so be sure to give it a look and see if you get any new ideas. After all, we have previously shown how automating tasks is almost always worth the time invested.