Building Penny’s Computer Watch From Inspector Gadget

When you help your bumbling Uncle Gadget with all kinds of missions, you definitely need a watch that can do it all. Penny’s video watch from Inspector Gadget has a ton of features including video communication with Brain and Chief Quimby, a laser, a magnet, a flashlight, a sonar signal, and much more.

To round out her Penny costume, [Becky Stern] has created a 3D printed version of Penny’s incredibly smart watch. It listens for Penny’s iconic phrase — come in, Brain! — and then loads a new picture of Brain on the rounded rectangle TFT display. Inside the watch is an Arduino Nicla Voice, which has to be one of the tinier machine learning-capable boards out there.

[Becky] created the watch case in Tinkercad and modified a watch band from Printables to fit her wrist. With such a small enclosure to work with, [Becky] ended up using that really flexible 30 AWG silicone-jacketed wire for all the fiddly connections between the Arduino and the screen.

After getting it all wired up to test, she found that the screen was broken, either from being pressed into the enclosure or from a too-close encounter with a pair of helping hands. Let that be a lesson to you, and check out the build video after the break.

More interested in Uncle Gadget’s goodies? Check out these go-go-Gadget shoes and this propeller backpack for skiers.

Continue reading “Building Penny’s Computer Watch From Inspector Gadget”

AI In A Box Envisions AI As A Private, Offline, Hackable Module

With AI in a Box, [Useful Sensors] aims to embed a variety of complementary AI tools into a small, private, self-contained module with no internet connection. It can do live voice recognition and captioning, live translation, and natural language conversational interaction with a local large language model (LLM). Intriguingly, it’s specifically designed with features to make it hack-friendly, such as the ability to act as a voice keyboard by sending live transcribed audio as keystrokes over USB.
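We obviously can’t reproduce the box’s firmware here, but the voice-keyboard trick is easy to approximate on a desktop. The sketch below is a rough host-side analogue, not [Useful Sensors]’s code: it records short chunks from the microphone, transcribes them offline with Whisper, and types the result into whatever window has focus. The openai-whisper, sounddevice, and pynput packages (and the five-second chunking) are all our own assumptions.

```python
import sounddevice as sd
import whisper                         # openai-whisper; runs fully offline
from pynput.keyboard import Controller

SAMPLE_RATE = 16000   # Whisper expects 16 kHz mono audio
CHUNK_SECONDS = 5

model = whisper.load_model("base")
keyboard = Controller()

print("Listening; each 5-second chunk gets transcribed and typed out.")
while True:
    # Record a short chunk of mono audio from the default microphone.
    audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()
    result = model.transcribe(audio.flatten(), fp16=False)
    text = result["text"].strip()
    if text:
        keyboard.type(text + " ")   # "send" the transcription as keystrokes
```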

Based on the RockChip 3588S SoC, the unit aims to have an integrated speaker, display, and microphone.

Right now it’s wrapping up a pre-order phase, and aims to ship units around the end of January 2024. The project is based around the RockChip 3588S SoC and is open source (GitHub repository), but since it’s still in development, there’s not a whole lot visible in the repository yet. However, a key part of getting good performance is [Useful Sensors]’s own transformers library for the RockChip NPU (neural processing unit).

Things like high-quality local voice recognition and locally-hosted LLMs like LLaMa have gotten a massive boost thanks to recent advances in machine learning, and it looks like this project aims to tie them together in a self-contained package.

Perhaps private digital assistants can become more useful when users have the freedom to modify and integrate them as they see fit. Digital assistants hosted by the big tech companies are often frustrating, and others have observed that this is ultimately because they primarily exist to serve their makers more than they help users.

Continue reading “AI In A Box Envisions AI As A Private, Offline, Hackable Module”

Full Self-Driving, On A Budget

Self-driving is currently the Holy Grail in the automotive world, with a number of companies racing to build general-purpose autonomous vehicles that can get from point A to point B with no user input. While no one has brought one to market yet, at least one has promised this feature and had customers pay for it, but continually moved the goalposts for delivery due to how challenging this problem turns out to be. But it doesn’t need to be that hard or expensive to solve, at least in some situations.

The situation in question is driving on a single stretch of highway, and the system only handles steering, leaving the accelerator and brake pedals to the driver. The highway is driven normally, using a webcam to take images of the route and an Arduino to capture data about the steering angle. The idea here is that with enough training, the Arduino could eventually steer the car. But first some math needs to happen on the training data: the steering wheel spends most of its time not turning the car, so the data has to be balanced out or genuine steering events would look like statistical anomalies. After the training, the system does a surprisingly good job at “driving” based on this data, and does it on a budget not much larger than a laptop, a microcontroller, and a webcam.
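The write-up doesn’t spell that balancing step out, but in Python it could look something like the sketch below. The file names, threshold, and keep fraction are made-up stand-ins for whatever the project actually uses; the idea is just to throw away most of the straight-ahead samples so turns stop looking like noise.

```python
import numpy as np

# Hypothetical logged training data: one steering angle per captured webcam frame.
angles = np.load("steering_angles.npy")   # shape: (n_samples,), degrees
frames = np.load("frames.npy")            # shape: (n_samples, height, width, 3)

STRAIGHT_THRESHOLD = 2.0      # below this we call the wheel "centered"
KEEP_STRAIGHT_FRACTION = 0.1  # keep only 10% of the centered samples

rng = np.random.default_rng(seed=0)
is_straight = np.abs(angles) < STRAIGHT_THRESHOLD

# Keep every genuine steering event, plus a random subset of straight driving,
# so the model isn't taught that "do nothing" is almost always the answer.
keep = ~is_straight | (rng.random(len(angles)) < KEEP_STRAIGHT_FRACTION)

balanced_angles = angles[keep]
balanced_frames = frames[keep]
print(f"kept {keep.sum()} of {len(angles)} samples")
```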

Admittedly, this project was a proof-of-concept to investigate machine learning, neural networks, and other statistical algorithms used in these sorts of systems, and it doesn’t actually drive any cars on any roadways. Even the creator says he wouldn’t trust it himself, but that he was pleasantly surprised by the results of such a simple system. It could also be expanded to handle the brake and accelerator pedals with separate neural networks. It’s not our first budget-friendly self-driving system, either; that one made it happen with the enormous computing resources of a single Android smartphone.

Continue reading “Full Self-Driving, On A Budget”

Compact, Gesture-Based Remote Control Over Bluetooth

[AlexMiller11] shared a project for a DIY gesture-sensing remote control that acts like a Bluetooth keyboard, capable of controlling media and presentations on a computer with a high degree of accuracy.

The device recognizes eight different gestures and controls a host PC over Bluetooth.

The hardware is a Silicon Labs xG24 dev kit, a small IoT-focused board able to be powered by a CR2032 cell. Part of what makes it all work is the six-axis IMU sensor, but the rest is the software to interpret that data and figure out what motions the user is trying to do. That happens with a Neuton.AI model and SDK, a tiny but effective machine learning framework for small devices.
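Neuton.AI’s tooling is proprietary and does its own model building, so the snippet below is only a generic sketch of the same idea: take fixed-length windows of six-axis IMU samples, boil each one down to a handful of features, and train a small classifier to map them to gesture labels. The data files and the scikit-learn model are our own illustrative assumptions, not what the project actually ships.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical data set: fixed-length windows of accelerometer + gyroscope
# samples (6 axes), each labelled with one of the eight gestures.
windows = np.load("imu_windows.npy")     # shape: (n, window_len, 6)
labels = np.load("gesture_labels.npy")   # shape: (n,)

def featurize(window):
    # Simple per-axis statistics; real embedded pipelines often use similar features.
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

X = np.array([featurize(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```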

How does it actually work? The device acts as a Bluetooth HID and gets connected to a PC in the same way as a regular Bluetooth keyboard. Once that’s done, recognized gestures are printed out over the serial port as well as sent via Bluetooth to the host machine. Media can then be played, paused, volume adjusted, presentations controlled, and more. More details are on the project’s GitHub repository. There’s also a demo video that explains exactly what’s going on, embedded below the page break.

Machine learning is a way of using software to solve the kinds of problems humans are not very good at writing programs to solve, and accurate gesture recognition is a good example. Not all such applications require heaps of overheating GPUs, either. We’ve seen the concept of a neural network stripped down to its bare essentials running on an Arduino Uno, for those who would like to better appreciate the fundamentals.

Continue reading “Compact, Gesture-Based Remote Control Over Bluetooth”

Helping Robots Learn By Letting Them Fail

The [MIT Technology Review] has just released its annual list of the top innovators under the age of 35, and there are some interesting people on this list of the annoyingly accomplished at a young age. Like [Lerrel Pinto], an associate professor of computer science at New York University. His work focuses on teaching robots how to do things in the home by letting them fail.

Continue reading “Helping Robots Learn By Letting Them Fail”

Machine Learning Robot Runs Arduino Uno

When we think about machine learning, our minds often jump to datacenters full of sweating, overheating GPUs. However, lighter-weight hardware can also be used to these ends, as demonstrated by [Nikodem Bartnik] and his latest robot.

The robot is charged with autonomously navigating a simple racetrack delineated by cardboard barriers. The robot is based on a two-wheeled design with tank-style steering. Controlled by an Arduino Uno, the robot uses a Slamtec RPLIDAR sensor to help map out its surroundings. The microcontroller is also armed with a Bluetooth link and an SD card for storage.

The robot was first driven around the racetrack multiple times under manual control, all the while collecting LIDAR data. This data was combined with the control inputs to create a data set for training a machine learning model. Feature selection techniques were then used to pare the collected data down to the points most relevant to the driving task. [Nikodem] explains how the model was created and then refined until it could drive the robot by itself around a variety of race track designs.
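The real pipeline is covered in [Nikodem]’s video and code; the sketch below only shows the general shape of it in Python, with made-up file names and a deliberately simple classifier. Each LIDAR scan is treated as a feature vector, feature selection trims it to the most informative angular bins, and a small model learns to map a scan to a steering command.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical logged data: each row is one LIDAR scan binned into 360
# distance readings, paired with the steering command given at that moment.
scans = np.load("lidar_scans.npy")       # shape: (n, 360)
commands = np.load("commands.npy")       # 0 = straight, 1 = left, 2 = right

# Feature selection keeps only the angular bins that best predict the driver's
# commands, which shrinks the model that eventually has to run on the Uno.
model = make_pipeline(
    SelectKBest(f_classif, k=20),
    LogisticRegression(max_iter=1000),
)
model.fit(scans, commands)
print("training accuracy:", model.score(scans, commands))
```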

It’s a great primer on machine learning techniques applied to a small embedded platform.

Continue reading “Machine Learning Robot Runs Arduino Uno”

Here’s Why GPUs Are Deep Learning’s Best Friend

If you have a curiosity about how fancy graphics cards actually work, and why they are so well-suited to AI-type applications, then take a few minutes to read [Tim Dettmers] explain why this is so. It’s not a terribly long read, but while it does get technical there are also car analogies, so there’s something for everyone!

He starts off by saying that most people know that GPUs are scarily efficient at matrix multiplication and convolution, but what really makes them most useful is their ability to work with large amounts of memory very efficiently.

Essentially, a CPU is a latency-optimized device, while GPUs are bandwidth-optimized devices. If a CPU is a race car, a GPU is a cargo truck. The main job in deep learning is to fetch and move cargo (memory, actually) around. Both devices can do this job, but in different ways. A race car moves quickly, but can’t carry much. A truck is slower, but far better at moving a lot at once.

Continue reading “Here’s Why GPUs Are Deep Learning’s Best Friend”
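If you want to feel the difference between the race car and the cargo truck yourself, a quick and dirty experiment is to time the same big matrix multiplication on both devices. This assumes you have PyTorch installed and, for the second half, a CUDA-capable GPU; exact numbers will vary wildly with hardware, and only the rough gap matters.

```python
import time
import torch

def time_matmul(device, n=4096, repeats=10):
    # Build two large matrices on the target device and time repeated products.
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()   # make sure setup has finished before timing
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the queued GPU work to complete
    return (time.perf_counter() - start) / repeats

print(f"CPU: {time_matmul('cpu'):.3f} s per 4096x4096 matmul")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s per 4096x4096 matmul")
else:
    print("No CUDA device available; only the CPU half of the comparison ran.")
```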