Make Your ESP32 Talk Like It’s The 80s Again

80s-era electronic speech has a certain retro appeal, but it can also be a genuinely useful data output method, since it can be implemented on very little hardware. [luc] demonstrates this with a talking thermometer project that requires no display and no special hardware to communicate temperatures to a user.

Back in the day, there were chips like the Votrax SC-01A that could play phonemes (distinct sounds that make up a language) on demand. These would be mixed and matched to create identifiable words, in that distinctly synthesized Speak & Spell manner that is so charming-slash-uncanny.

Software-only speech synthesis isn’t new, but it’s better now than it was in Atari’s day.

Nowadays, even hobbyist microcontrollers have more than enough processing power and memory to do a similar job entirely in software, which is exactly what [luc]’s talking thermometer project does. All this is done with the Talkie library, originally written for the Arduino and updated for the ESP32 and other microcontrollers. With it, one only needs headphones or a simple audio amplifier and speaker to output canned voice data from a project.
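To give a feel for the library: the sketch below uses Talkie's say() call to play words from its bundled LPC vocabulary, which is essentially how a talking thermometer strings together numbers and units. The vocabulary header and sp2_* word identifiers here come from the library's examples, but exact names vary between forks, so treat them as placeholders:

```cpp
// Minimal Talkie sketch: speak canned LPC-encoded words over PWM audio.
// Feed the PWM output pin through a simple RC filter into headphones
// or a small amplifier.
#include "Talkie.h"
#include "Vocab_US_Large.h"  // canned word data bundled with the library

Talkie voice;

void setup() {
  // Nothing needed here; Talkie claims its default PWM output pin.
}

void loop() {
  voice.say(sp2_DANGER);  // each say() plays one pre-encoded word
  voice.say(sp2_RED);
  voice.say(sp2_ALERT);
  delay(5000);            // repeat every five seconds
}
```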

[luc] uses it to demonstrate how to communicate with a user hands-free, no display needed, and we've seen this output method before in an electric unicycle with a talking speedometer (judged to better allow the rider to keep their eyes on the road, as well as to minimize the parts count.)

Would you like to hear an authentic, somewhat-understandable 80s-era text-to-speech synthesizer? You’re in luck, because we can show you a vintage MicroVox unit in action. Give it a listen, and compare it to a demo of the Talkie library in the video below.

Continue reading “Make Your ESP32 Talk Like It’s The 80s Again”

Testing Part Stiffness? No Need To Re-invent The Bending Rig

If one is serious about testing the stiffness of materials or parts, there’s nothing quite like doing your own measurements. And thanks to [JanTec]’s 3-Point Bending Test rig, there’s no need to reinvent the wheel should one wish to do so.

The dial indicator can be mounted at a fixed height, thanks to a section of 3030 T-slot extrusion.

Some simple hardware, a couple spare pieces of 3030 T-slot extrusion, a few 3D-printed parts, and a dial indicator all come together to create a handy rig that will let one get straight to measuring.

Here is how it works: the sample rests across two supports, and a known force is applied to its midpoint, causing it to bend. Measuring how far a standardized sample deflects under that force (normally the dial indicator’s job) is how one quantifies a material’s stiffness.

When a datasheet quotes a material’s Young’s modulus (E), it’s quoting stiffness. A low Young’s modulus means a material is more elastic; a high value means the material is stiffer. (This shouldn’t be confused with strength or toughness, which are more about resistance to non-recoverable deformation and resistance to fracture, respectively.)
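To put numbers on it: classic beam theory says a centrally loaded sample resting on two supports deflects at its midpoint by d = FL³/48EI, which rearranges to solve for E. Here’s a back-of-the-envelope calculation showing the idea; the sample dimensions and readings are made up for illustration, not taken from [JanTec]’s tests:

```cpp
#include <cmath>
#include <cstdio>

// Estimate Young's modulus from a 3-point bend test.
// Beam theory: midspan deflection d = F*L^3 / (48*E*I), so
// E = F*L^3 / (48*d*I), where I = b*h^3 / 12 for a rectangular bar.
int main() {
    const double F = 10.0;    // applied force, N (a known weight)
    const double L = 0.100;   // span between the two supports, m
    const double b = 0.010;   // sample width, m
    const double h = 0.004;   // sample thickness, m
    const double d = 0.0005;  // measured midspan deflection, m

    const double I = b * std::pow(h, 3) / 12.0;           // second moment of area, m^4
    const double E = F * std::pow(L, 3) / (48.0 * d * I); // flexural modulus, Pa

    std::printf("E = %.2f GPa\n", E / 1e9);  // ~7.81 GPa for these numbers
    return 0;
}
```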

Interested in results, but don’t want to get busy doing your own testing? Someone’s already been there and done that: here’s a great roundup of measurements of 3D-printed parts, using different filaments.

Messing With A Cassette Player Never Sounded So Good

Cassette players and tapes are fertile hacking ground. One reason is that their electromechanical and analog nature provides easy ways to fiddle with their operation. For example, slow down the motor and the playback speed changes accordingly. As long as the head is moving across the tape, sound will be produced. The hacking opportunities are nicely demonstrated by [Lara Grant]’s cassette player mod project.

The device piggybacks onto a battery-powered audio cassette player and provides a variety of ways to fiddle with the output, including adjustable echo and delay, and speed control. At the heart of the delay and echo functionality is the PT2399, a part from the late 90s capable of some pretty impressive audio effects (as long as a supporting network of resistors and capacitors is in place, anyway.)

[Lara] provides a schematic for the PT2399’s interface to the cassette player’s output, which is handy should anyone want to try a similar modification. Playback speed is controlled by driving the cassette player’s motor with PWM. For volume control, a photocell takes the place of the rotary volume potentiometer, and additional audio jacks provide flexibility for mixing and matching input and output with other equipment.
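PWM motor control is worth a quick sketch of its own: switch the motor’s supply rapidly, and the duty cycle sets the average voltage, and thus the speed. Something in the spirit of the Arduino-flavored snippet below would do it; the pin assignments and the pot-based speed input are illustrative, not details from [Lara]’s schematic:

```cpp
// Vary a small DC motor's speed with PWM. The PWM pin should drive a
// transistor (with a flyback diode across the motor), never the motor
// directly: a motor draws far more current than an MCU pin can supply.
const int MOTOR_PIN = 9;   // PWM-capable output -> transistor gate/base
const int SPEED_POT = A0;  // speed-setting potentiometer (illustrative)

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  int pot = analogRead(SPEED_POT);        // 0..1023 from the pot
  int duty = map(pot, 0, 1023, 60, 255);  // keep a floor so the motor won't stall
  analogWrite(MOTOR_PIN, duty);           // duty cycle sets the average voltage
}
```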

You can see it in action in the video embedded below. Intrigued, and want a few more examples of modified tape players? How about a strange sort of cassette synth, or this unique take on a mellotron that uses a whopping 14 modified tape players under the hood? And really out there is the Magnetotron, which consists of a large rotating cylinder with tape loops stuck to it — the magnetic read head is mounted on a wand which the user manually moves across the tapes to create sounds.

Tape players are accessible, hackable things, so remember to drop us a line if you make something neat!

Continue reading “Messing With A Cassette Player Never Sounded So Good”

Chatting With Local AI Moves Directly In-Browser, Thanks To Web LLM

Large Language Models (LLMs) are at the heart of natural-language AI tools like ChatGPT, and Web LLM shows it is now possible to run an LLM directly in a browser. Just to be clear, this is not a browser front end talking via API to some server-side application. This is a client-side LLM running entirely in the browser.

The ability to run an LLM (natural language AI) directly in-browser means more ways to implement local AI while enjoying GPU acceleration via WebGPU.

Running an AI system like an LLM locally usually leverages the computational abilities of a graphics card (GPU) to accelerate performance. This is true when running an image-generating AI system like Stable Diffusion, and it’s also true when implementing a local copy of an LLM like Vicuna (which happens to be the model implemented by Web LLM.) The thing that made Web LLM possible is WebGPU, whose release we covered just last month.

WebGPU provides a way for an in-browser application to talk to a local GPU directly, and it sure didn’t take long for someone to get the idea of using that to get a local LLM to run entirely within the browser, complete with GPU acceleration. This approach isn’t just limited to language models, either. The same method has been applied to successfully create Web Stable Diffusion as well.

It’s a fascinating (and fast) development that opens up new possibilities and, hopefully, gives people some new ideas. Check out Web LLM’s GitHub repository for a closer look, as well as access to an online demo.

3D Scanning A Room With A Steam Deck And A Kinect

It may not be obvious, but Valve’s Steam Deck is capable of being more than just a games console. Demonstrating this is [Parker Reed]’s experiment in 3D scanning his kitchen with a Kinect and Steam Deck combo, and viewing the resulting mesh on the Steam Deck.

The two pieces of hardware end up needing a lot of adapters and cables.

[Parker] runs the RTAB-Map software package on his Steam Deck, which captures a point cloud and color images while he pans the Kinect around. After that, the Kinect’s job is done and he can convert the data to a mesh textured with the color images. RTAB-Map is typically used in robotic applications, but we’ve seen it power completely self-contained DIY 3D scanners.

While logically straightforward, the process does require some finessing and fiddling to get it up and running. Reliability is a bit iffy thanks to the mess of cables and adapters required to get everything hooked up, but it does work. [Parker] shows off the whole touchy process, but you can skip a little past the five minute mark if you just want to see the scanning in action.

The Steam Deck has actual computer chops beneath its games console presentation; we’ve seen one appear as a USB printer that saves received print jobs as PDFs, and another has even made an appearance in radio signal direction finding.

Continue reading “3D Scanning A Room With A Steam Deck And A Kinect”

Wolfram Alpha With ChatGPT Looks Like A Killer Combo

Ever looked at Wolfram Alpha and the development of Wolfram Language and thought that perhaps Stephen Wolfram was a bit ahead of his time? Well, maybe the times have finally caught up because Wolfram plus ChatGPT looks like an amazing combo. That link goes to a long blog post from Stephen Wolfram that showcases exactly how and why the two make such a wonderful match, with loads of examples. (If you’d prefer a video discussion, one is embedded below the page break.)

OpenAI’s ChatGPT is a large language model (LLM) neural network, or, put more plainly, an AI system capable of conversing in natural language. Thanks to a recently announced plugin system, ChatGPT can now interact with remote APIs and therefore use external resources.

ChatGPT’s natural language processing ability enables some pretty impressive interactions with Wolfram, making possible the kind of exchange you see here.

This is meaningful because LLMs are very good at processing natural language and generating plausible-sounding output, but whether or not the output is factually correct can be another matter. It’s not so much that ChatGPT is especially prone to confabulation; it’s more that the nature of an LLM neural network makes it difficult to ask “why exactly did you come up with your answer, and not something else?” In addition, asking ChatGPT to do things like perform nontrivial calculations is a bit of a square-peg-in-a-round-hole situation.

So how does the Wolfram plugin change that? When asked to produce data or perform computations, ChatGPT can now hand the job off to Wolfram Alpha instead of attempting to generate the answer by itself. Both sides play to their strengths in this arrangement: ChatGPT interprets the user’s question and formulates it as a query, sends that query to Wolfram Alpha for computation, and then structures its response based on what comes back. In short, ChatGPT can now ask for help to get data or perform a computation, and it can show the receipts when it does.

Continue reading “Wolfram Alpha With ChatGPT Looks Like A Killer Combo”

3D-Printable Foaming Nozzle Shows How They Work

[Jack]’s design for a 3D-printable foaming nozzle works by mixing air with a fluid like liquid soap or hand sanitizer. Squeezing the bottle forces this mixture through what look like layers of fine-mesh sieves and eventually out the end. The nozzle has no moving parts, but it does have an interesting structure to make this possible.

The fine meshes are formed by multiple layers of bridged filament.

Creating a foam with liquid soap requires roughly one part soap to nine parts air. The idea is that the resulting foam makes more efficient use of the liquid soap compared to dispensing un-lathered goop directly onto one’s hands.

The really neat part is that the fine mesh structure inside the nozzle is created by having the printer stretch multiple layers of filament across the open span on the inside of the model. This is a technique similar to that used for creating bristles on 3D-printed brushes.

While this sort of thing may require a bit of expert tweaking to get the best results, it really showcases the fundamentals of how filament printers work. Once one knows the process, it can be exploited to get results that would be impossible otherwise. Here are a few more examples of that: printing only a wall’s infill to allow airflow, manipulating “vase mode” to create volumes with structural ribs, and embedding a fine fabric mesh (like tulle) as either a fan filter or wearable and flexible armor. Everything’s got edge cases, and clever people can do some pretty neat things with them (when access isn’t restricted, that is.)