Tips For 3D Printing Watertight Test Tubes

[DaveMakesStuff] uses 3D printed test tubes for plants and similar purposes, and he’s shared how to make them on a 3D printer, complete with different models, each optimized for a different nozzle size.

The slots in the model are a means of manipulating how the slicer creates a toolpath when printing in spiral vase mode. These areas end up denser and stronger than they otherwise would be.

It’s not too hard to get clear-looking prints in spiral vase mode by using a transparent filament, but the real value in his design is that it comes out reliably watertight, with an extra-strong base and rim.

How is this accomplished when using spiral vase mode, which extrudes only a single wall perimeter? By using fancy geometry on the part, which makes the nozzle follow a high-density path that turns back onto itself multiple times, in concept a little like a switchback trail. The result is extra-dense areas on both the rim and the bottom of the tubes. This helps make them not only watertight, but far stronger than a single wall.

This technique is reminiscent of an earlier method we saw of enhancing the strength of vase mode prints by modeling thin slots into an object. After slicing, the model still consists of a single unbroken spiral extrusion, but in practice the extruded plastic forms what resemble structural ribs. Why? Because those adjacent extruded lines are so close to one another that they end up sticking together. [DaveMakesStuff] does something similar here to ensure that the bottom and top of the tubes are extra strong.
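To make the idea concrete, here’s a quick Python sketch of that kind of slotted cross-section. It’s our own illustration rather than anything from [DaveMakesStuff]’s model files, and every dimension in it is a made-up placeholder; the only point it demonstrates is keeping each slot narrow enough that the doubled-back wall fuses into a rib.

```python
import math

# Illustrative sketch only: trace a tube cross-section with thin radial slots,
# the kind of geometry that forces a vase-mode toolpath to double back on
# itself. All dimensions are hypothetical; tune for your own slicer and nozzle.

NOZZLE = 0.4                  # mm, assumed nozzle diameter
SLOT_WIDTH = 0.5 * NOZZLE     # a fraction of the extrusion width, so the two
                              # wall passes end up close enough to fuse
SLOT_DEPTH = 2.0              # mm, how far each slot cuts inward
OUTER_R = 10.0                # mm, tube radius
N_SLOTS = 12
ARC_STEPS = 24                # points along each arc between slots

points = []
for i in range(N_SLOTS):
    a0 = 2 * math.pi * i / N_SLOTS
    a1 = 2 * math.pi * (i + 1) / N_SLOTS
    half = (SLOT_WIDTH / 2) / OUTER_R        # slot half-width as an angle
    # Arc along the outer radius between this slot and the next.
    for j in range(ARC_STEPS + 1):
        a = (a0 + half) + ((a1 - half) - (a0 + half)) * j / ARC_STEPS
        points.append((OUTER_R * math.cos(a), OUTER_R * math.sin(a)))
    # The slot itself: step inward, across the narrow gap, and back out.
    for a, r in [(a1 - half, OUTER_R - SLOT_DEPTH),
                 (a1 + half, OUTER_R - SLOT_DEPTH),
                 (a1 + half, OUTER_R)]:
        points.append((r * math.cos(a), r * math.sin(a)))

print(f"{len(points)} outline points, slot width {SLOT_WIDTH:.2f} mm")
```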

You can see a short video (embedded below) that showcases the tubes, as well as some modular 3D-printable racks that [DaveMakesStuff] also makes. And should you want some tips on getting better transparency from your 3D prints, the essentials boil down to printing with transparent filament, slightly hotter, and with a slightly higher extrusion rate.

Continue reading “Tips For 3D Printing Watertight Test Tubes”

Open Source Needs A New Mission: Protecting Users

[Bruce Perens] isn’t very happy with the current state of Free and Open Source Software (FOSS), and an article by [Rupert Goodwins] expounds on this to explain Open Source’s need for a new mission in 2024 and beyond. He suggests a focus shift from software to data.

The internet as we know it and all the services it runs are built on FOSS architecture and infrastructure. None of the big tech companies would be where they are without FOSS, and certainly none could do without it. But FOSS has its share of what can be thought of as loopholes, and in the years during which the internet has exploded in growth and use, large tech companies have found and exploited all of them. A product doesn’t need to disclose a single line of source code if it’s never actually distributed. And Red Hat (which [Perens] asserts is really just IBM) has simply stopped releasing public distributions of CentOS.

In addition, the inherent weak points of FOSS remain largely the same. These include the funding of distributions, a lack of user-focused design, and the fact that users frankly don’t understand what FOSS offers them, why it’s important, or even that it exists at all.

A change is needed, and it’s suggested that the time has come to move away from a focus on software, and shift that focus instead to data. Expand the inherent transparency of FOSS to ensure that people have control and visibility of their own data.

While the ideals of FOSS remain relevant, this isn’t the first time the changing tech landscape has raised questions about how things are done, like the intersection of bug bounties and FOSS.

What do you think? Let us know in the comments.

Adding AI To NPCs Is Easy, Doing It Well Is Hard

Adding natural language interfaces to software is easier than ever, and that led [creikey] to prototype a game that hinges on communicating with NPCs. The prototype went through multiple iterations during which he mainly discovered things that did not work well. Ultimately, it led to [creikey] settling on a western-themed game called Dante’s Cowboy which he hopes to release as an experiment. He begins talking about the game around the 4:43 mark in the video, which directly precedes a recording of a presentation he gives as an indie developer.

Games typically revolve around the player manipulating entities in an environment in order to make things happen. This interaction drives engagement and interesting decisions. But while adding natural language AI to NPCs makes them easy to talk with, talking by itself is a shallow interaction. Convincing NPCs to do things? That’s complex and far more difficult to implement. [creikey] realized the limitations large language models (LLMs) had and worked to overcome them to make a unique game experience.

The challenges boil down to figuring out how to drive meaningful interaction, aligning AI behavior with the gameplay context, and managing API costs. In his words, “it’s been a learning experience to figure out where [natural language AI] even belongs in a game, if it belongs at all.”
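As a rough illustration of what aligning AI behavior with gameplay context can look like, here’s a small Python sketch, our own and not [creikey]’s code: the model is asked to reply as JSON containing dialogue plus exactly one action from a whitelist, and anything else is discarded so the game never has to act on an invented verb.

```python
import json

# Hypothetical NPC output handling: the LLM is prompted to answer with JSON
# holding "dialogue" and one "action" from a whitelist. Bad output degrades
# gracefully instead of breaking gameplay.

ALLOWED_ACTIONS = {"none", "follow_player", "give_item", "draw_weapon"}

def parse_npc_reply(raw_reply: str) -> tuple[str, str]:
    """Return (dialogue, action); fall back to a safe default on bad output."""
    try:
        reply = json.loads(raw_reply)
        dialogue = str(reply["dialogue"])
        action = str(reply["action"])
    except (json.JSONDecodeError, KeyError, TypeError):
        return "...", "none"          # model rambled or broke the format
    if action not in ALLOWED_ACTIONS:
        action = "none"               # never let the model invent game verbs
    return dialogue, action

# A well-formed reply passes straight through...
print(parse_npc_reply('{"dialogue": "Howdy, stranger.", "action": "follow_player"}'))
# ...while a malformed one is defanged.
print(parse_npc_reply("Sure, I will follow you!"))
```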

We’ve previously seen ChatGPT used to grant NPCs the ability to communicate naturally which is a fascinating tech demo, but gameplay-wise can boil down to being a complicated alternative to pressing a button. As [creikey] discovered, adding this technology into games in a way that feels meaningful takes a new kind of work.

Continue reading “Adding AI To NPCs Is Easy, Doing It Well Is Hard”

Gyro-Controlled Labyrinth Game Outputs To VGA

This gesture-controlled labyrinth game using two Raspberry Pi Pico units does a great job of demonstrating how it can sometimes take a lot of work to make something look simple.

To play, one tilts an MPU6050 inertial measurement unit (IMU) attached to one Pico to guide a square through a 2D maze, with the player working through multiple levels of difficulty. A second Pico takes care of displaying the game state on a VGA monitor, and together they work wirelessly to deliver a coherent experience with the right “feel”. This includes low latency, simulating friction appropriately, and more.
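Here’s a tiny Python sketch of the sort of tilt-to-motion update loop that tuning involves, with a friction term that bleeds off velocity; the constants are invented for illustration and aren’t taken from the project’s firmware.

```python
# Hypothetical "feel" tuning: tilt sets acceleration, and a friction term
# decays velocity so the square doesn't skate forever. Constants are made up.

GRAVITY = 500.0      # pixels/s^2 at full tilt (assumed scaling)
FRICTION = 3.0       # 1/s, exponential velocity decay
DT = 1 / 60          # update rate

def step(pos, vel, tilt_x, tilt_y):
    """Advance the square one frame given tilt values in the range -1..1."""
    ax, ay = GRAVITY * tilt_x, GRAVITY * tilt_y
    vx = (vel[0] + ax * DT) * (1 - FRICTION * DT)
    vy = (vel[1] + ay * DT) * (1 - FRICTION * DT)
    return (pos[0] + vx * DT, pos[1] + vy * DT), (vx, vy)

pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(120):                  # two simulated seconds at a fixed tilt
    pos, vel = step(pos, vel, tilt_x=0.3, tilt_y=0.0)
print(pos, vel)
```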

Taking a stream of raw sensor readings and turning them into control instructions over UDP in a way that feels intuitive while at the same time generating a VGA display signal has a lot of moving parts, software-wise. The project write-up has a considerable amount of detail on the architecture of the system, and the source code is available on GitHub for those who want a closer look.
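For a sense of one half of that pipeline, here’s a rough MicroPython-flavored sketch of a sender: join the network, wake the IMU, read raw accelerometer values, and fire them off as small UDP datagrams. The wiring, addresses, credentials, and packet layout are all assumptions, not the project’s actual code.

```python
import socket
import struct
import time

import network
from machine import I2C, Pin

# Hypothetical sender-side sketch for a Pico W running MicroPython; not the
# project's firmware. Pins, addresses, and packet format are assumptions.

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("your-ssid", "your-password")      # placeholder credentials
while not wlan.isconnected():
    time.sleep_ms(100)

MPU_ADDR = 0x68
i2c = I2C(0, sda=Pin(0), scl=Pin(1))            # assumed wiring
i2c.writeto_mem(MPU_ADDR, 0x6B, b"\x00")        # clear the MPU6050 sleep bit

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
DEST = ("192.168.1.50", 5005)                   # the display Pico (hypothetical)

while True:
    raw = i2c.readfrom_mem(MPU_ADDR, 0x3B, 6)   # accel X/Y/Z, big-endian int16
    ax, ay, az = struct.unpack(">hhh", raw)
    sock.sendto(struct.pack(">hhh", ax, ay, az), DEST)
    time.sleep_ms(10)                           # ~100 Hz keeps latency low
```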

We’ve seen gesture controls interfaced to physical marble mazes before, but two Raspberry Pi Picos doing it wirelessly with a VGA monitor for feedback is pretty neat. Watch it in action in the video, embedded just under the page break.

Continue reading “Gyro-Controlled Labyrinth Game Outputs To VGA”

Air Hockey Table Embraces DOOM, Retro Gaming

[Chris Downing] recently finished up a major project that spanned some two years and used nearly every skill he possessed. The result? A smart air hockey table with retro-gaming roots. Does it play DOOM? It sure (kind of) does!

Two of the most striking features are the scoreboard (with LCD screen and sound) and the play surface, which is densely populated with RGB LED lighting and capable of some pretty neat tricks. Together, they deliver a few different modes of play, including a DOOM mode.

The first play mode is straight air hockey with automated score tracking and the usual horns and buzzers celebrating goals. The LED array within the table lights up to create the appearance and patterns of a typical hockey rink.

DOOM hockey mode casts one player as Demons and the other as the Doom Slayer, and the LED array comes to life to create a play surface of flickering flames. Screams indicate goals (either Demon screams or Slayer screams, depending on who scores!)

In retrogaming emulation mode, the tabletop mirrors the screen.

Since the whole thing is driven by a Raspberry Pi, the table is given a bit of gaming flexibility with Emulation Mode. This mode allows playing emulated retro games on the scoreboard screen, and as a super neat feature, the screen display is mirrored on the tabletop’s LED array. [Chris] asserts that the effect is imperfect, but to us it looks at least as legible as DOOM on 7-segment displays.
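For a sense of what that mirroring involves, here’s a minimal Python sketch, not [Chris]’s code, that downsamples a frame to a hypothetical 32×16 LED matrix and emits one RGB value per LED.

```python
from PIL import Image  # Pillow

# Hypothetical mirroring step: shrink a frame to the LED matrix resolution and
# produce one (r, g, b) triple per LED. The 32x16 size and the stand-in frame
# are assumptions; the real pipeline may differ.

LED_W, LED_H = 32, 16

def frame_to_led_colors(frame: Image.Image) -> list[tuple[int, int, int]]:
    """Return a flat, row-major list of (r, g, b) values, one per LED."""
    small = frame.convert("RGB").resize((LED_W, LED_H))
    return list(small.getdata())

if __name__ == "__main__":
    # Stand-in frame; in practice this would come from the emulator's output.
    test = Image.new("RGB", (640, 480), (200, 30, 30))
    colors = frame_to_led_colors(test)
    print(len(colors), colors[0])
```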

This project is a great example of how complex things can get when one combines so many different types of materials and fabrication methods into a single whole. The blog post has a lot of great photos and details, but check out the video (embedded below) for a demonstration of everything in action.

Continue reading “Air Hockey Table Embraces DOOM, Retro Gaming”

Quivering Facehugger Is All Geared Up

[Jason Winfield] shared with us a video describing a project with a lot of personality: a mounted, lit, and quivering Alien facehugger triggered by motion. The end result is a delightful jump scare, and the Raspberry Pi that controls everything also captures people’s reactions.

It starts with a little twitch when motion is sensed, then launches into a perfectly unsettling quiver combined with light and sound. We particularly like the wave-like effect from the LED lighting, which calls to mind illumination from rotating hazard beacons.

The unit looks like a mounted and tastefully-lit static model, but is actually primed to sense motion.

One challenge was how to efficiently move the legs. Rather than use a motor for each limb, [Jason] settled on a single motor driving a rotating cam arrangement. You can see the results for yourself in the video below, but getting there was not simple.

The surplus motor [Jason] chose is thin and high-torque, but runs extremely fast. Since he wanted the legs to quiver creepily rather than vibrate, something needed to be done to mitigate this.

The solution is a planetary gear assembly that drives a rotating ring and cam arrangement coupled to the facehugger’s legs. There’s only one motor, but the effect is that each leg’s motion is independent of the others. The whole assembly is quite slim, and everything is contained within the frame.
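As a back-of-the-envelope example of why a planetary stage helps, here are some hypothetical numbers run through the standard fixed-carrier formula; the tooth counts, motor speed, and configuration are guesses, not measurements from [Jason]’s assembly.

```python
# Hypothetical figures showing how a planetary stage tames a fast motor.
SUN_TEETH = 12       # driven by the motor (assumed)
RING_TEETH = 60      # output ring carrying the cams (assumed)
MOTOR_RPM = 12000    # "extremely fast" surplus motor, guessed figure

# With the carrier held fixed and the ring as the output, the speed reduction
# from sun to ring is simply the tooth ratio (the direction also reverses).
# Further slowing, electrical or mechanical, is left out of this sketch.
reduction = RING_TEETH / SUN_TEETH
ring_rpm = MOTOR_RPM / reduction
print(f"{reduction:.0f}:1 reduction -> ring turns at about {ring_rpm:.0f} RPM")
```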

Facehuggers and gear assemblies are not exactly an everyday combination, but believe it or not this isn’t the first time the two have joined forces. Check out the Aliens-themed cuckoo clock, complete with crew member torso and emerging chestburster!

Continue reading “Quivering Facehugger Is All Geared Up”

Sound-Reactive Light Saber Flips Allegiance Via Vowel Sounds

Students [Berk Gokmen] and [Justin Green] developed an RP2040-based LED-illuminated lightsaber as a final project with a bit of a twist. It has two unusual sound-reactive modes: disco mode, and vowel detection mode.

Switching allegiances (or saber color, at least) is only a sound away.

Disco mode alters the saber’s color and brightness dynamically in response to the frequencies picked up by the on-board microphone, making a light show that responds particularly well to music.

The second mode, vowel detection, changes the lightsaber’s color depending on the vowel being spoken. The “ee” sound makes the saber red, and the “ah” sound turns it blue. This method requires a lot of processing and filtering; in the end it works, but depends heavily on calibration for the individual speaker.

The sound functionality centers around FFTs (Fast Fourier Transforms), which are fundamental to processing signals like audio in a meaningful way, and are a method well within reach of embedded devices like microcontrollers with ADCs.
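As a rough illustration of the idea, and emphatically not the students’ RP2040 code, here’s a Python sketch that splits an FFT’s output into a low band and a high band and uses the energy ratio as a crude “ee” versus “ah” detector; the sample rate, band split, and threshold are all assumptions.

```python
import numpy as np

# Crude vowel discrimination via band energy: "ee" carries much more energy
# above ~1.5 kHz than "ah" does. Sample rate, band edges, and threshold are
# illustrative assumptions only.

SAMPLE_RATE = 8000   # Hz
SPLIT_HZ = 1500      # rough boundary between the two vowels' energy

def classify_vowel(samples: np.ndarray) -> str:
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1 / SAMPLE_RATE)
    high = spectrum[freqs >= SPLIT_HZ].sum()
    low = spectrum[(freqs > 100) & (freqs < SPLIT_HZ)].sum()  # skip DC/rumble
    return "ee (red)" if high / (low + 1e-9) > 0.3 else "ah (blue)"

# Quick self-test with synthetic formants: "ah" ~ 700 + 1100 Hz, "ee" ~ 300 + 2300 Hz.
t = np.arange(1024) / SAMPLE_RATE
ah = np.sin(2 * np.pi * 700 * t) + np.sin(2 * np.pi * 1100 * t)
ee = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 2300 * t)
print(classify_vowel(ah), classify_vowel(ee))
```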

The lightsaber is battery-powered and wireless, and there are loads of details about the finer points of the design (including challenges and tradeoffs) on the project page, and the source code is available on GitHub. A video demonstration and walkthrough is embedded below.

Continue reading “Sound-Reactive Light Saber Flips Allegiance Via Vowel Sounds”