
Turning GLaDOS Into Ted: A Tale Of A Talking Toy

What if your old, neglected toys could come to life, with a bit of sass? That’s exactly what [Binh] achieved when he transformed his sister’s worn-out teddy bear into ‘Ted’, an interactive talking plush with a personality of its own. The project, which pairs the GLaDOS Personality Core project (a recreation of the sardonic AI from the Portal series) with clever microcontroller tinkering, brings a whole new personality to a childhood favorite.

[Binh] started with the basics: a teddy bear already equipped with buttons and speakers, which he overhauled with an ESP32 microcontroller. The bear’s personality originated with GLaDOS, but [Binh] rewrote it to fit a cheeky, teddy-bear tone. With a few tweaks to the Python-based fork, [Binh] created threads to handle touch-based interaction: the ESP32 detects where the bear is touched and sends that input to a modified neural network, which then generates a response. The bear can, for instance, call you out for holding his paw for too long, or sarcastically plead for mercy. You might object that a bear like Ted could do a lot more, but perhaps this is all an innocent bear with a personality should be capable of.
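The write-up keeps the implementation details light, but the interaction loop is easy to picture: the ESP32 reports a touch event, the host folds it into a prompt, and the model’s reply is spoken back through the bear. Below is a minimal sketch of that loop in Python. The serial port, the event format, and both helper functions are assumptions made for illustration, not [Binh]’s actual code.

```python
# Hypothetical sketch of the touch-to-sass loop: an ESP32 reports which part
# of the bear was touched (and for how long) over serial, and a local model
# turns that event into a spoken reply. Names and formats are assumptions.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # hypothetical serial port for the ESP32
PERSONA = ("You are Ted, a cheeky teddy bear. Respond to touch events "
           "with short, sassy remarks.")

def generate_reply(prompt: str) -> str:
    # Stand-in for the GLaDOS-fork's text generation call; a real build
    # would query the local model here.
    return "Twelve seconds on the paw? I'm flattered. Also, slightly crushed."

def speak(text: str) -> None:
    # Stand-in for the text-to-speech step that plays audio through the bear.
    print(f"Ted says: {text}")

def main() -> None:
    esp = serial.Serial(PORT, 115200, timeout=1)
    while True:
        line = esp.readline().decode(errors="ignore").strip()
        if not line:
            continue
        # Assumed event format: "paw:12.5" -> touch location and seconds held
        location, _, seconds = line.partition(":")
        prompt = (f"{PERSONA}\nEvent: someone touched your {location} "
                  f"for {seconds or 'a few'} seconds.\nTed:")
        speak(generate_reply(prompt))
        time.sleep(0.1)

if __name__ == "__main__":
    main()
```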

Still, it is easy to imagine future iterations featuring capacitive touch sensors or accelerometers to detect movement. The project is simple, but it showcases the potential of intelligent plush toys. It might raise some questions, too.

Continue reading “Turning GLaDOS Into Ted: A Tale Of A Talking Toy”

Modern AI On Vintage Hardware: Llama 2 Runs On Windows 98

[EXO Labs] demonstrated something pretty striking: a modified version of Llama 2 (a large language model) that runs on Windows 98. Why? Because when it comes to personal computing, if something can run on Windows 98, it can run on anything. More to the point: if something can run on Windows 98, then no tech company can control how you use it, no matter how large or influential they may be. More on that in a minute.

Ever wanted to run a local LLM on 25-year-old hardware? No? Well, now you can, and at a respectable speed, too!

What’s it like to run an LLM on Windows 98? Aside from the struggles of finding compatible peripherals (back to PS/2 hardware!), transferring the required files (FTP over Ethernet to the rescue), and even compilation (some porting required), it works, and perhaps better than one might expect.

A Windows 98 machine with a Pentium II processor and 128 MB of RAM generates a speedy 39.31 tokens per second with a 260K-parameter Llama 2 model. A much larger 15M-parameter model generates 1.03 tokens per second. Slow, but it works. Going even larger will also work, just ever more slowly. There’s a video on X that shows it all in action.

It’s true that modern LLMs have billions of parameters, so these models are tiny in comparison. But that doesn’t mean they can’t be useful. Models can be shockingly small and still be perfectly coherent and deliver surprisingly strong performance if their training and “job” are narrow enough, and the tools to do that for oneself are all on GitHub.
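For a rough sense of where those tiny parameter counts come from, the size of a Llama-style transformer falls out of a handful of configuration numbers. The sketch below tallies them for a made-up toy configuration; the dimensions are illustrative and are not the actual 260K or 15M checkpoints from the demo.

```python
# Back-of-the-envelope parameter count for a Llama-style transformer.
# The configuration below is invented for illustration; it is not the exact
# 260K or 15M model from the demo.

def llama_param_count(vocab: int, dim: int, hidden: int, n_layers: int,
                      tied_embeddings: bool = True) -> int:
    embed = vocab * dim                   # token embedding table
    per_layer = (
        4 * dim * dim                     # attention: wq, wk, wv, wo (equal q/kv heads assumed)
        + 3 * dim * hidden                # SwiGLU feed-forward: w1, w2, w3
        + 2 * dim                         # the two RMSNorm weight vectors
    )
    final_norm = dim
    head = 0 if tied_embeddings else vocab * dim  # output projection (often tied to embeddings)
    return embed + n_layers * per_layer + final_norm + head

# A toy configuration in the same ballpark as the smallest demo model:
print(llama_param_count(vocab=512, dim=64, hidden=172, n_layers=5))  # ~280K parameters
```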

This is a good time to mention that this particular project (and its ongoing efforts) is part of a set of twelve projects by EXO Labs focused on ensuring that things like AI models can be run anywhere, by anyone, independent of tech giants aiming to hold all the strings.

And hey, if local AI and the command line are up your alley, did you know LLMs already exist as single-file, multi-platform, command-line executables?


Hackaday Links: January 5, 2025

Good news this week from the Sun’s far side as the Parker Solar Probe checked in after its speedrun through our star’s corona. Parker became the fastest human-made object ever — aside from the manhole cover, of course — as it fell into the Sun’s gravity well on Christmas Eve to pass within 6.1 million kilometers of the surface, in an attempt to study the extremely dynamic environment of the solar atmosphere. Similar to how manned spacecraft returning to Earth are blacked out from radio communications, the plasma soup Parker flew through meant everything it would do during the pass had to be autonomous, and we wouldn’t know how it went until the probe cleared the high-energy zone. The probe pinged Earth with a quick “I’m OK” message on December 26, and checked in with the Deep Space Network as scheduled on January 1, dumping telemetry data that indicated the spacecraft not only survived its brush with the corona but that every instrument performed as expected during the pass. The scientific data from the instruments won’t be downloaded until the probe is in a little better position, and then Parker will get to do the whole thing again twice more in 2025.

Continue reading “Hackaday Links: January 5, 2025”


Hackaday Links: December 15, 2024

It looks like we won’t have Cruise to kick around in this space anymore with the news that General Motors is pulling the plug on its woe-beset robotaxi project. Cruise, which GM acquired in 2016, fielded autonomous vehicles in various test markets, but the fleet racked up enough high-profile mishaps (first item) for California regulators to shut down test programs in the state last year. The inevitable layoffs ensued, and GM is now killing off its efforts to build robotaxis to concentrate on incorporating the Cruise technology into its “Super Cruise” suite of driver-assistance features for its full line of cars and trucks. We feel like this might be a tacit admission that surmounting the problems of fully autonomous driving is just too hard a nut to crack profitably with current technology, since Super Cruise uses eye-tracking cameras to make sure the driver is paying attention to the road ahead when automation features are engaged. Basically, GM is admitting there still needs to be meat in the seat, at least for now.

Continue reading “Hackaday Links: December 15, 2024”


Robot Rodents: How AI Learned To Squeak And Play

In an astonishing blend of robotics and nature, SMEO, a robot rat designed by researchers in China and Germany, is fooling real rats into treating it like one of their own.

What sets SMEO apart is its rat-like adaptability. Equipped with a flexible spine, realistic forelimbs, and AI-driven behavior patterns, it doesn’t just mimic a rat — it learns and evolves through interaction. Researchers used video data to train SMEO to “think” like a rat, convincing its living counterparts to play, cower, or even engage in social nuzzling. This degree of mimicry could make SMEO a valuable tool for studying animal behavior ethically, minimizing stress on live animals by replacing some real-world interactions.
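No training code accompanies the article, but the interaction loop it describes (observe the live rat, then decide whether to play, cower, or nuzzle) maps naturally onto a classify-then-act policy. Here is a purely illustrative sketch; the pose features, behavior labels, and responses are invented and not taken from the actual research.

```python
# Purely illustrative: a classify-then-act loop in the spirit of SMEO's
# interaction behavior. Features, labels, and actions are made up.
import random

BEHAVIORS = ["approach", "rearing", "retreat", "still"]
RESPONSES = {
    "approach": "initiate_play",
    "rearing":  "cower",
    "retreat":  "pause",
    "still":    "nuzzle",
}

def classify_partner(pose_features: list[float]) -> str:
    # Stand-in for the video-trained behavior classifier.
    return random.choice(BEHAVIORS)

def step(pose_features: list[float]) -> str:
    # Observe the live rat's behavior, then pick the robot's response.
    partner = classify_partner(pose_features)
    return RESPONSES[partner]

for _ in range(3):
    print(step([0.0] * 8))
```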

For builders and robotics enthusiasts, SMEO is a reminder that robotics can push boundaries while fostering a more compassionate future. Many have reservations about keeping intelligent creatures in confined cages or using them in experiments, so imagine applying this tech to non-invasive studies or even wildlife conservation. In a world where robotic dogs, bees, and even schools of fish have come to life, this animatronic rat sounds like an addition worth exploring further. SMEO’s development could, ironically, pave the way for reducing reliance on animal testing.

Continue reading “Robot Rodents: How AI Learned To Squeak And Play”

The Junk Machine Prints Corrupted Advertising On Demand

[ClownVamp]’s art project The Junk Machine is an interactive and eye-catching machine that, on demand, prints out an equally eye-catching and unique yet completely meaningless (one may even say corrupted) AI-generated advertisement for nothing in particular.

The machine is an artistic statement on how powerful software tools with genuine promise and usefulness to creative types are finding their way into marketers’ hands, resulting in a deluge of, well, junk. This machine simplifies and magnifies that in a physical way.

We can’t help but think that The Junk Machine is in a way highlighting Sturgeon’s Law (paraphrased as ‘ninety percent of everything is crud’) which happens to be particularly applicable to the current AI landscape. In short, the ease of use of these tools means that crud is also being effortlessly generated at an unprecedented scale, swamping any positive elements.

As for the hardware and software, we’re very interested in what’s inside. Unfortunately there are no deep technical details, but the broad strokes are that The Junk Machine uses an embedded NVIDIA Jetson loaded up with SDXL Turbo, Stability AI’s open source image generator that can be installed and run locally. When and if a user mashes a large red button, the machine generates a piece of AI junk mail in real time, without any need for a network connection of any kind, and prints it from an embedded printer.
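There’s no source posted, but the described stack is straightforward to approximate with off-the-shelf parts. Here’s a rough sketch of the button-to-print loop using Hugging Face’s diffusers library to run SDXL Turbo locally; the GPIO pin, the prompt, and the lp-based print step are assumptions rather than details from the build.

```python
# Rough approximation of the described pipeline: wait for a button press,
# generate an "advertisement" with SDXL Turbo locally, and send it to a printer.
# Pin number, prompt, and the lp print step are assumptions, not build details.
import subprocess
import torch
import Jetson.GPIO as GPIO  # mirrors the RPi.GPIO API
from diffusers import AutoPipelineForText2Image

BUTTON_PIN = 18  # hypothetical GPIO pin; assumes an external pull-up, button pulls low

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

def generate_junk(path: str = "/tmp/junk.png") -> str:
    prompt = "a glossy print advertisement for nothing in particular, garbled slogans"
    # SDXL Turbo is distilled for single-step generation with no guidance.
    image = pipe(prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
    image.save(path)
    return path

def main() -> None:
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(BUTTON_PIN, GPIO.IN)
    while True:
        GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)  # block until the button is mashed
        subprocess.run(["lp", generate_junk()], check=True)  # hand the image to CUPS

if __name__ == "__main__":
    main()
```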

Watch it in action in the video embedded below, just under the page break. There are a few more different photos on [ClownVamp]’s X account.

Continue reading “The Junk Machine Prints Corrupted Advertising On Demand”

An Animated Walkthrough Of How Large Language Models Work

If you wonder how Large Language Models (LLMs) work and aren’t afraid of getting a bit technical, don’t miss [Brendan Bycroft]’s LLM Visualization. It is a step-by-step walkthrough of a GPT large language model, complete with an animated, interactive 3D block diagram of everything going on under the hood. Check it out!

nano-gpt has only around 85,000 parameters, but the operating principles are all the same as for larger models.

The demonstration walks through a simple task and shows every step. The task is this: using the nano-gpt model, take a sequence of six letters and put them into alphabetical order.

A GPT model is a highly complex prediction engine, so the whole process begins with tokenizing the input (breaking up words and assigning numerical values to the chunks) and ends with choosing an appropriate output from a list of probabilities. There are of course many more steps in between, and different ways to adjust the model’s behavior. All of these are made quite clear by [Brendan]’s process breakdown.
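To make the shape of that pipeline concrete, here’s a toy rendition of the loop the visualization steps through: tokenize the letters, run a forward pass, softmax the logits, and take the most probable token. The “model” below is a stand-in that simply favors the smallest letter not yet emitted, so the plumbing is visible without any real weights.

```python
# Toy version of the tokenize -> predict -> pick-from-probabilities loop that
# the visualization animates. The "model" is a stand-in, not real nano-gpt weights.
import math

VOCAB = ["A", "B", "C"]                      # the demo works with a tiny alphabet
TOK = {ch: i for i, ch in enumerate(VOCAB)}  # tokenizer: letter -> integer id

def fake_model(token_ids: list[int]) -> list[float]:
    # Stand-in for the transformer: emit logits favoring the smallest letter
    # from the prompt that hasn't been output yet.
    seen = token_ids[:6]       # the unsorted six-letter prompt
    emitted = token_ids[6:]    # what has been produced so far
    remaining = list(seen)
    for t in emitted:
        remaining.remove(t)
    target = min(remaining)
    return [5.0 if i == target else -5.0 for i in range(len(VOCAB))]

def softmax(logits: list[float]) -> list[float]:
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

prompt = "CBABCA"
ids = [TOK[c] for c in prompt]               # tokenize the input
out = []
for _ in range(len(prompt)):                 # autoregressive generation loop
    probs = softmax(fake_model(ids))         # forward pass + softmax
    nxt = probs.index(max(probs))            # greedy pick (argmax of probabilities)
    out.append(VOCAB[nxt])
    ids.append(nxt)

print("".join(out))                          # -> "AABBCC"
```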

We’ve previously covered how LLMs work, explained without math, which eschews gritty technical details in favor of focusing on functionality. But it’s also nice to see an approach like this one, which embraces the technical elements of exactly what is going on.

We’ve also seen a much higher-level peek at how a modern AI model like Anthropic’s Claude works when it processes requests, extracting human-understandable concepts that illustrate what’s going on under the hood.