An Improved Spectrometer, No Lasers Required

Here at Hackaday, we love it when someone picks up the ball from a previous project and runs with it. That’s what we’re all about, really — putting out cool projects that just might stimulate someone else to extend and enhance them, or even head off in an entirely new direction. That’s how the state of the art keeps moving.

This DIY spectrometer project is a fantastic example of that ethos. It comes to us from [Michael Prasthofer], who was inspired by [Les Wright]’s PySpectrometer, a simple device cobbled together from a pocket spectroscope and a PiCam. As we noted at the time, [Les] put a lot of the complexity of his instrument in the software, but that doesn’t mean there wasn’t room for improvement.

[Michael]’s goals were to make his spectrometer a little easier to build, and to improve the calibration process and overall accuracy. To help with the former, he went with software correction of the color filter array on his Fuji X-T2. This has the advantage of not requiring a high-power laser and precision micropositioner to ablate the CFA, and avoids potentially destroying an expensive camera. For the latter, [Michael] delved deep into the theory behind spectroscopy and camera optics to develop a process for correlating the intensity of light along the spectrum with the specific wavelength at that location. He also worked a little machine learning into the process, training a network to optimize the response functions.
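The details are best seen in the video, but the core idea of laser-free wavelength calibration can be sketched in a few lines: find the pixel positions of a handful of known emission lines (a fluorescent lamp conveniently provides several) and fit a smooth pixel-to-wavelength mapping through them. This is only a toy illustration with made-up pixel positions; [Michael]'s actual process, including the learned response functions, goes considerably further.

```python
# Toy sketch of laser-free wavelength calibration: fit a polynomial mapping
# pixel column to wavelength, using known fluorescent-lamp emission lines.
# The pixel positions below are made-up placeholders.
import numpy as np

# (pixel column of detected peak, known line wavelength in nm)
reference = [(212, 436.6), (485, 546.1), (733, 611.6)]
px = np.array([p for p, _ in reference], dtype=float)
nm = np.array([w for _, w in reference], dtype=float)

coeffs = np.polyfit(px, nm, deg=2)  # quadratic pixel-to-wavelength map

def pixel_to_wavelength(column: float) -> float:
    """Convert a sensor column index to an estimated wavelength in nm."""
    return float(np.polyval(coeffs, column))

print(pixel_to_wavelength(500.0))  # estimated wavelength at column 500
```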

The result is pretty accurate spectra with no lasers required for calibration. The video below goes into a lot of detail and ends up being a good introduction to some of the basics of spectroscopy, along with the not-so-basics.

Continue reading “An Improved Spectrometer, No Lasers Required”

Tabletop Handybot Is Handy, And Powered By AI

Decently useful AI has been around for a little while now, and robotic arms have been around much longer. Somehow, though, we still don’t have little robot helpers on our desks! Thankfully, [Yifei] is working towards that reality with Tabletop Handybot.

What [Yifei] has developed is a robotic arm that accepts voice commands. The robot relies on a RealSense D435 RGB-D camera, which provides color vision along with depth information. Grounding DINO is used for object detection on the RGB images, while Segment Anything and Open3D further process the visual and depth data to help the robot understand what it’s looking at. Meanwhile, voice commands are interpreted via OpenAI Whisper, which can feed prompts to ChatGPT for further processing.
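The glue between those pieces isn't spelled out above, but a minimal sketch of the voice front end might look something like this, assuming the open-source whisper package and the OpenAI Python client. The model names and the one-line action format are placeholders, not [Yifei]'s actual prompt design.

```python
# Minimal sketch of a voice-command front end: transcribe speech with
# Whisper, then ask an LLM to map the request to a robot action.
# Model names and the action format are assumptions, not the project's own.
import whisper
from openai import OpenAI

stt = whisper.load_model("base")  # local speech-to-text model
llm = OpenAI()                    # reads OPENAI_API_KEY from the environment

def command_from_audio(wav_path: str) -> str:
    """Turn a recorded spoken request into a one-line robot action."""
    text = stt.transcribe(wav_path)["text"]
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Map the user's request to one action, e.g. 'pick <object>'."},
            {"role": "user", "content": text},
        ],
    )
    return reply.choices[0].message.content

print(command_from_audio("command.wav"))  # e.g. "pick marker"
```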

[Yifei] demonstrates his robot picking up markers on command, which is a pretty cool demo. With so many modern AI tools available, we’re getting closer to the ideal of robots that can understand and execute on general spoken instructions. This is a great example. We may not be all the way there yet, but perhaps soon. Video after the break.

Continue reading “Tabletop Handybot Is Handy, And Powered By AI”

Generative AI Hits The Commodore 64

Image-generating AIs are typically trained on huge arrays of GPUs and require great wads of processing power to run. Meanwhile, [Nick Bild] has managed to get something similar running on a Commodore 64 (via Tom’s Hardware).

A figure generated by [Nick]’s C64. We shall name him… “Sword Guy”!

As you might imagine, [Nick]’s AI image generator isn’t churning out 4K cyberpunk stills dripping in neon. Instead, he aimed at a smaller target more befitting the Commodore 64 itself: 8×8 game sprites.

[Nick]’s model was trained on 100 retro-inspired sprites that he created himself. He did the training phase on a modern computer, so the Commodore 64 didn’t have to sweat that difficult task on its feeble 6510 CPU. The C64 is more than capable of generating sprites using the model, though, thanks to some BASIC code that works from the training data. Right now, it takes the machine about 20 minutes to run through the 94 iterations needed to generate a decent sprite.
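The write-up doesn't spell out the model in detail here, so the following is only a toy illustration of sprite-scale generation, not [Nick]'s actual approach: estimate per-pixel "on" probabilities from a set of 8×8 training sprites, then sample new sprites from those probabilities.

```python
# Toy illustration of sprite-scale generation (not [Nick]'s actual model):
# learn per-pixel probabilities from 8x8 training sprites, then sample.
import numpy as np

rng = np.random.default_rng(64)
# Stand-in training set: 100 random binary sprites instead of hand-drawn ones
training = rng.integers(0, 2, size=(100, 8, 8))

p_on = training.mean(axis=0)  # probability that each pixel is set

def generate_sprite() -> np.ndarray:
    """Sample one 8x8 sprite from the per-pixel probabilities."""
    return (rng.random((8, 8)) < p_on).astype(np.uint8)

for row in generate_sprite():
    print("".join("#" if px else "." for px in row))
```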

8×8 sprites are generally simple enough that you don’t need to be an artist to create them. Nonetheless, [Nick] has shown that modern machine learning techniques can run on slow, archaic hardware, even if there’s limited utility in doing so. Video after the break.

Continue reading “Generative AI Hits The Commodore 64”

The Perfect Desktop Kit For Experimenting With Self Driving Cars

When we think about self-driving cars, we normally think about big projects measured in billions of dollars, all funded by major automakers. But you can still dive into this world on a smaller scale, as [jmoreno555] demonstrates.

The build consists of a small RC car—an HSP 94123, in fact. It’s got a simple brushed motor inside, driven by a conventional speed controller, and servo-driven steering. A Raspberry Pi 4 is charged with driving the car, but it’s not alone. It’s outfitted with a Google Coral USB stick, a machine learning accelerator capable of 4 trillion operations per second. The car also carries a Wemos D1, tasked with interfacing the distance sensors that give the car a sense of its environment. Vision is courtesy of a 1.2-megapixel camera with a 160-degree lens, and a stereoscopic camera with twin 75-degree lenses. Software-wise, it’s early days yet. [jmoreno555] is exploring the use of Python and OpenCV to implement basic lane detection and other self-driving routines, while using Blender as a simulator.
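For a taste of what basic lane detection looks like, here's a minimal OpenCV sketch along the lines of what [jmoreno555] is exploring: edge detection plus a probabilistic Hough transform to pick out lane segments. The camera index and thresholds are placeholders and would need tuning for a real track.

```python
# Minimal lane-detection sketch: Canny edges plus a probabilistic Hough
# transform. Camera index and thresholds are placeholders to tune.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # front camera (device index is an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    # Pick out straight-ish segments that could be lane markings
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("lanes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```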

The real stroke of genius, though, is the treadmill. [jmoreno555] realized that one of the frustrations of working in this space is having to chase a car around a test track. Instead, a desktop treadmill lets the car be programmed and debugged with less fuss in the early stages of development.

If you’re looking for a platform to experiment with AI and self-driving, this could be a project to dive into. We’ve covered some other great builds in this space, too. Meanwhile, if you’ve cracked autonomous driving and want to let us know, our tips line is always standing by!


$1 TinyML Board For Your “AI” Sensor Swarm

You might be under the impression that machine learning costs thousands of dollars to work with. That might be true in many cases, but there’s more to machine learning than you might think. For instance, what if you could blanket anything with a network of cheap machine-learning-enabled sensors? The $1 TinyML project by [Jon Nordby] lets you do just that. These tiny boards host an STM32-like MCU, a BLE module, lithium-ion power circuitry, and some nice sensor options — an accelerometer, a pair of microphones, and a light sensor.

What could you do with these sensors? [Jon] has talked a bit about a few commercial and non-commercial applications he’s worked on in his ML career, and tells us that the accelerometer alone lets you do human presence detection, sleep tracking, personal activity monitoring, or vibration pattern sensing, for a start. As for the sound input, there are tasks ranging from gunshot or clapping detection, to coffee roasting process tracking, to voice and speech detection, and surely much more. Just a few years ago, we saw machine learning used to comfort a barking dog while its owner was away.

Bottom line is, you ought to get a few of these into your hands and start playing with ML. You might still need somewhat beefier hardware to train your models, but things get that much easier once you have a network of sensors waiting for your command. Plus, since it’s an open source project, you’ll have a much easier time adding any extra capabilities your particular application might need.

These boards are pretty cost-optimized, which makes it possible to order a couple dozen without breaking the bank. The $1 target is the BOM cost, especially if you opt not to include one of the pricier sensors. You can assemble these boards yourself, or have them assembled at a fab of your choice for barely any extra cost. As for software, they work with the emlearn framework.
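The usual emlearn workflow is to train a model with scikit-learn on a PC, then convert it to plain C for the microcontroller. A minimal sketch, with random stand-in data in place of real sensor features:

```python
# Sketch of the emlearn workflow: train on a PC, emit C for the MCU.
# The data here is a random stand-in for real accelerometer features.
import numpy as np
import emlearn
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 3)       # e.g. mean/std/peak of an accel window
y = (X[:, 2] > 0.5).astype(int)  # toy "vibration present" label

model = RandomForestClassifier(n_estimators=10, max_depth=5).fit(X, y)

cmodel = emlearn.convert(model)  # convert the trained model to C
cmodel.save(file="vibration_model.h", name="vibration_model")
# vibration_model.h then gets compiled into the board's firmware
```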

Everything is on GitHub — from KiCad sources to Jupyter notebooks. Over on Hackaday.io, there are five worklogs packed with insight — the microphone worklog alone will teach you about microphone amplification under low-power constraints while keeping the cost down. Not as price-constrained and want to try your hand at some image processing tasks? Here’s a beautiful Pi Pico ArduCam board with a camera and a TFT screen.


Hackaday Links: April 21, 2024

Do humanoid robots dream of electric retirement? Who knows, but maybe we can ask Boston Dynamics’ Atlas HD, which was officially retired this week. The humanoid robot, notable for its warehouse parkour and sweet dance moves, never went into production, at least not as far as we know. Atlas always seemed like it was intended to be an R&D platform, a way to see what was possible for a humanoid robot, and in that role it had a heck of a career. But it’s probably a good thing that fleets of Atlas robots aren’t wandering around shop floors or serving drinks, especially given the number of hydraulic blowouts the robot suffered. That also seems to be one of the lessons Boston Dynamics learned, since Atlas’ younger, nimbler replacement is said to be all-electric. From the thumbnail, the new kid already looks pretty scarred and battered, so here’s hoping we get to see some all-electric robot fails soon.

Continue reading “Hackaday Links: April 21, 2024”

Reggaeton-Be-Gone Disconnects Obnoxious Bluetooth Speakers

If you’re currently living outside a Spanish-speaking country, you may have only heard of the music genre reggaeton in passing, if at all. In places with large Spanish-speaking populations, though, it would be more surprising if you hadn’t heard it. It’s so popular, especially in the Caribbean and Latin America, that it’s gotten on the nerves of some, most notably [Roni], whose neighbor seemingly does little else but listen to this style of music, loudly enough to be heard through the walls. To solve the problem, [Roni] is introducing the Reggaeton-Be-Gone. (Google Translate from Spanish)

Inspired by the TV-B-Gone devices that turn off annoying TVs in bars, restaurants, and other public places, this device listens to the music being played around it and identifies whether or not it is hearing reggaeton. It does this using machine learning, taking samples of the audio it hears and making decisions based on a trained model. When the software, running on a Raspberry Pi, makes a positive identification of one of these songs, it looks for Bluetooth devices in the area and attempts to communicate with them in a number of ways, hopefully rapidly enough to disrupt their intended connections.
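As a flavor of the detection half (and only the detection half), here's a toy sketch that classifies short audio clips using MFCC features, assuming librosa and scikit-learn. Random noise stands in for labeled training clips; the actual project's features and model may well differ.

```python
# Toy sketch of genre detection via MFCC features and a linear classifier.
# Random noise stands in for labeled training clips and the mic capture.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 22050  # sample rate in Hz

def features(clip: np.ndarray) -> np.ndarray:
    """Summarize a clip as the mean of its 13 MFCC coefficients."""
    return librosa.feature.mfcc(y=clip, sr=SR, n_mfcc=13).mean(axis=1)

rng = np.random.default_rng(0)
clips = [rng.standard_normal(SR * 2).astype(np.float32) for _ in range(20)]
labels = rng.integers(0, 2, size=20)  # 1 = reggaeton, 0 = anything else

clf = LogisticRegression(max_iter=1000).fit(
    np.array([features(c) for c in clips]), labels)

mic = rng.standard_normal(SR * 2).astype(np.float32)  # stand-in mic capture
if clf.predict([features(mic)])[0] == 1:
    print("Reggaeton detected, time to scan for Bluetooth speakers...")
```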

In testing with [Roni]’s neighbor, the device shows promise, although it doesn’t completely disconnect the speaker from its host; instead, it interferes just enough that the neighbor changes locations. Clearly it merits further testing, and possibly other models trained for people who use Bluetooth speakers while skiing, hiking, or working out. Eventually the code will be posted to this GitHub page, but until then, it’s not the only way to interfere with your neighbor’s annoying stereo.

Thanks to [BaldPower] and [Alfredo] for the tips!