An Awful 1990s PDA Delivers AI Wisdom

There was a period in the 1990s when it seemed like the personal digital assistant (PDA) was going to be the device of the future. If you were lucky you could afford a Psion, a PalmPilot, or even the famous Apple Newton — but to trap the unwary there was a slew of far less capable machines competing for market share.

[Nick Bild] has one of these, branded Rolodex, and in a bid to make using a generative AI less alluring, he’s set it up as the interface to an LLM hosted on a Raspberry Pi 400. This hack is thus mostly a tale of reverse engineering the device’s serial protocol to free it from its Windows application.

Finding the baud rate was simple enough, but the encoding scheme was unexpectedly fiddly. Sadly the device doesn’t come with a terminal because these machines were very much single-purpose, but it does have a memo app that allows transfer of text files. This is the wildly inefficient medium through which the communication with the LLM happens, and it satisfies the requirement of making the process painful.
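
The transfer itself boils down to squirting framed text over a serial link. Below is a minimal Python sketch of the idea using pyserial; the port, baud rate, and framing are placeholders for illustration, not [Nick Bild]’s actual protocol.

```python
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # USB-to-serial adapter wired to the PDA's link port
BAUD = 9600            # hypothetical; the real rate was found by probing

def send_memo(text: str) -> None:
    """Push a prompt to the PDA as a memo 'file'."""
    with serial.Serial(PORT, BAUD, timeout=2) as link:
        # Devices like this usually wrap the payload in a proprietary
        # header and checksum; that's the fiddly encoding mentioned above.
        link.write(text.encode("ascii", errors="replace") + b"\r\n")

def read_memo() -> str:
    """Pull the LLM's reply back off the device the same way."""
    with serial.Serial(PORT, BAUD, timeout=2) as link:
        return link.read_until(b"\x04").decode("ascii", errors="replace")
```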

We see this type of PDA quite regularly in second-hand shops; indeed, you’ll find nearly identical devices from multiple manufacturers sporting software such as dictionaries or a thesaurus. Back in the day they always seemed to be advertised in Sunday newspapers and aimed at older people. We’ve never got to the bottom of which OEM actually manufactured them, or indeed cracked one apart to find the inevitable black epoxy blob processor. If we had to place a bet though, we’d guess there’s an 8051 core in there somewhere.

Continue reading “An Awful 1990s PDA Delivers AI Wisdom”

Christmas Comes Early With AI Santa Demo

With only two hundred odd days ’til Christmas, you just know we’re already feeling the season’s magic. Well, maybe not, but [Sean Dubois] has decided to give us a head start with this WebRTC demo built into a Santa stuffie.

The details are a little bit sparse (hopefully he finishes the documentation on GitHub by the time this goes out) but the project is really neat. Hardware-wise, it’s an audio-enabled ESP32-S3 dev board living inside Santa, running OpenAI’s Realtime Embedded SDK (as implemented by Espressif), with some customization by [Sean]. It looks like the audio is going through the newest version of LibPeer, and the heavy lifting is all happening in the cloud, as you’d expect with this SDK. (A key is required, but hey! It’s all open source; if you have an AI that can do the job locally-hosted, you can probably figure out how to connect to it instead.)

This speech-to-speech AI doesn’t need to emulate Santa Claus, of course; you can prime the AI with any instructions you’d like. If you want to delight children, though, it’s hard to beat the Jolly Old Elf, and you certainly have time to get it ready for Christmas. Thanks to [Sean] for sending in the tip.

If you like this project but want to avoid paying OpenAI API fees, here’s a speech-to-text model to get you started, and we covered this AI speech generator last year, which could handle the talky bit. If you put them together and make your own Santa Claus (or perhaps something more seasonal to this time of year), don’t forget to drop us a tip!

MCP Blender Addon Lets AI Take The Wheel And Wield The Tools

Want to give an AI the ability to do stuff in Blender? The BlenderMCP addon does exactly that, connecting open-source 3D modeling software Blender to Anthropic’s Claude AI via MCP (Model Context Protocol), which means Claude can directly use Blender and its tools in a meaningful way.

MCP is a framework that allows AI systems like LLMs (Large Language Models) to exchange information in a way that makes it easier to interface with other systems. We’ve seen LLMs tied experimentally into other software (such as enabling more natural conversations with NPCs) but without a framework like MCP, such exchanges are bespoke and effectively stateless. MCP becomes very useful for letting LLMs use software tools and perform work that involves an iterative approach, better preserving the history and context of the task at hand.
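
To make the concept concrete, here’s a minimal MCP tool server written with the official Python SDK. It’s purely illustrative (the real BlenderMCP relays commands over a socket to a companion addon running inside Blender), but it shows how little code it takes to hand an LLM a tool.

```python
# Not BlenderMCP itself: a minimal sketch of an MCP tool server using the
# official Python SDK (pip install mcp). The tool below is illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("blender-sketch")

@mcp.tool()
def run_blender_python(code: str) -> str:
    """Execute a snippet of Python inside Blender and return the result.

    In a real bridge this would be forwarded over a socket to an addon
    running in Blender; executing arbitrary code is exactly the risk
    noted at the end of this article.
    """
    return f"(would forward to Blender) {code!r}"

if __name__ == "__main__":
    mcp.run()  # serve over stdio so Claude (or another client) can connect
```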

Unlike the beach scene in the first video below, which used 3D assets, this scene was created from scratch with the help of a reference image.

Using MCP also provides some standardization, which means that while the BlenderMCP project integrates with Claude (or alternately the Cursor AI editor) it could — with the right configuration — be pointed at a suitable locally-hosted LLM instead. It wouldn’t be as capable as the commercial offerings, but it would be entirely private.

Embedded below are three videos that really show what this tool can do. In the first, watch it create a beach scene using assets from a public 3D asset library. In the second, it creates a scene from scratch using a reference image (a ‘low-poly cabin in the woods’). The third follows on by turning that same scene into a 3D environment on a web page, navigable in any web browser.

Back in 2022 we saw Blender connected to an image generator to texture objects, but this is considerably more capable. It’s a fascinating combination, and if you’re thinking of trying it out just make sure you’re aware it relies on allowing arbitrary Python code to be run in Blender, which is powerful but should be deployed with caution.

Continue reading “MCP Blender Addon Lets AI Take The Wheel And Wield The Tools”

Welcome Your New AI (LEGO) Overlord

You’d think a paper from a Carnegie Mellon science team would be short on fun, but the team behind LegoGPT would prove you wrong. The system allows you to enter prompt text and produce physically stable LEGO models. They’ve done more than just a paper; you can find a GitHub repo and a running demo, too.

The authors note that automated generation of 3D shapes has been done before. However, incorporating real physics constraints and planning the resulting shape in LEGO-sized chunks is the real topic of interest. The project’s core contribution is a training dataset for transforming text into shapes; the real work is done by one of the LLaMA models fine-tuned on it. The training involved converting LEGO designs into tokens, just like a chatbot converts words into tokens.
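
To get a feel for what that tokenization might look like, here’s a hedged sketch; the exact format lives in the paper and repo, this just shows the flavor of turning brick placements into predictable text.

```python
# A hedged illustration: serialize each brick placement as text so an LLM
# can treat a build like a sentence. The actual LegoGPT format may differ.
bricks = [
    ("2x4", 0, 0, 0),   # (size, x, y, z) in stud/plate units
    ("2x4", 4, 0, 0),
    ("1x2", 2, 0, 1),
]

def to_tokens(bricks):
    # One brick per line, e.g. "2x4 (0,0,0)", words the model can predict
    return "\n".join(f"{size} ({x},{y},{z})" for size, x, y, z in bricks)

print(to_tokens(bricks))
```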

There are a lot of steps involved in creating the designs. They convert meshes to LEGO in one step using 1×1, 1×2, 1×4, 1×6, 1×8, 2×2, 2×4, and 2×6 bricks. Then they evaluate the stability of the design. Finally, they render an image and ask GPT-4o to produce captions to go with it.
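
For a taste of what brick planning involves, here’s a toy greedy pass that covers a filled row of studs with the allowed 1×N bricks. The real pipeline, with its physics-based stability check, is far more sophisticated than this sketch.

```python
# A sketch under assumptions: greedily cover a filled row of studs using the
# 1xN brick lengths from the parts list. Real mesh-to-brick conversion and
# stability analysis in the paper are much more involved.
ALLOWED = [8, 6, 4, 2, 1]  # 1xN brick lengths, longest first

def cover_row(filled: list[bool]) -> list[tuple[int, int]]:
    """Return (start, length) bricks covering every filled stud in a row."""
    bricks, x = [], 0
    while x < len(filled):
        if not filled[x]:
            x += 1
            continue
        # Take the longest allowed brick that fits entirely on filled studs
        for n in ALLOWED:
            if x + n <= len(filled) and all(filled[x:x + n]):
                bricks.append((x, n))
                x += n
                break
    return bricks

print(cover_row([True] * 5 + [False] + [True] * 2))  # [(0, 4), (4, 1), (6, 2)]
```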

The most interesting example is when they feed the designs to robot arms and let them build the model. From text to LEGO with no human intervention! Sounds like something from a bad movie.

We wonder: if they added the more advanced LEGO sets, could we ask for our own Turing machine?

An LLM For The Raspberry Pi

Microsoft’s latest Phi4 LLM has 14 billion parameters that require about 11 GB of storage. Can you run it on a Raspberry Pi? Get serious. However, the Phi4-mini-reasoning model is a cut-down version with “only” 3.8 billion parameters that requires 3.2 GB. That’s more realistic and, in a recent video, [Gary Explains] tells you how to add this LLM to your Raspberry Pi arsenal.

The version [Gary] uses has four-bit quantization and, as you might expect, the performance isn’t going to be stellar. If you aren’t versed in all the LLM lingo, quantization refers to how the weights are stored; fewer bits per weight means a smaller model. In general, the more parameters a model uses, the more things it can figure out.
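
The back-of-envelope math is easy to check yourself. Note that the raw four-bit weights come in under the quoted 3.2 GB; we’d assume the difference is file-format overhead and some tensors kept at higher precision, rather than anything from the video.

```python
# Back-of-envelope memory math. We assume 4 bits per weight; real quantized
# model files add format overhead and keep some tensors at higher precision.
params = 3.8e9          # Phi4-mini-reasoning parameter count
bits_per_weight = 4     # four-bit quantization
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.1f} GB of raw weights")  # ~1.9 GB
```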

Continue reading “An LLM For The Raspberry Pi”

AI Brings Play-by-Play Commentary To Pong

While most of us won’t ever play Wimbledon, we can play Pong. But it isn’t the same without the thrill of the sportscaster’s commentary during the game. Thanks to [Parth Parikh] and an LLM, you can now watch Pong matches with commentary during the game. You can see the very cool result in the video below — the game itself starts around the 2:50 mark. Sadly, you don’t get to play. It seems like it wouldn’t be that hard to wire yourself in with a little programming.

The game features multiple AI players and two announcers. There are 15 years of tournaments, including four majors each year, for a total of 60 events. In the 16th year, the two top players face off in the World Championship Final.

There are several interesting techniques here. For one, each action is logged as an event that generates metrics and is prioritized. If an important game event occurs, commentary pauses to announce that event and then picks back up where it left off.
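
That pattern is essentially a priority queue of commentary events. Here’s a hedged sketch of the idea in Python, not [Parth Parikh]’s actual code, just the shape of it.

```python
# Game actions become prioritized events; the commentary loop always speaks
# the most important pending event first, then resumes routine chatter.
import heapq
from itertools import count

_queue, _order = [], count()  # (priority, tiebreak, text); lower = sooner

def log_event(priority: int, text: str) -> None:
    heapq.heappush(_queue, (priority, next(_order), text))

def next_line() -> str | None:
    """Pop the highest-priority line; big plays jump the queue."""
    return heapq.heappop(_queue)[2] if _queue else None

log_event(5, "Rally continues, ball speed rising...")
log_event(1, "MATCH POINT saved off the top edge!")  # interrupts the chatter
print(next_line())  # the match-point call comes out first
```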

We really want to see a one- or two-player human version of this. Please tell us if you take on that challenge. Even if you don’t write it, maybe the AI can write it for you.

Continue reading “AI Brings Play-by-Play Commentary To Pong”

Comparing ‘AI’ For Basic Plant Care With Human Brown Thumbs

The future of healthy indoor plants, courtesy of AI. (Credit: [Liam])
Like so many of us, [Liam] has a big problem. Whether it’s the curse of Brown Thumbs or something else, those darn houseplants just keep dying, despite guides always telling you how incredibly easy it is to keep them from wilting with a modicum of care each day, even without opting for succulents or cactuses. In a fit of despair, [Liam] decided to pin his hopes on what we have come to accept as the Savior of Humankind, namely ‘AI’, which can stand for a lot of things, but it’s definitely really smart and can even generate pretty pictures, which is something that the average human cannot. Hence it’s time to let an LLM do all the smart plant-caring stuff with ‘PlantMom’.

Since LLMs don’t come with physical appendages by default, some hardware had to be plugged together to measure parameters like light, temperature, and soil moisture. Add to this a grow light and a water pump, and all that remained was to tell the LLM, using an extensive prompt containing Python code, what it should do (keep the plant alive) and what Python methods are available, then let Google’s Gemma 3 handle the rest.
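
We’d guess the interface looked something like the sketch below, with hypothetical method names standing in for [Liam]’s actual prompt: the sensor readings plus the menu of callable methods get stuffed into the Gemma 3 prompt each cycle.

```python
# Hypothetical method names, not [Liam]'s actual code or prompt.
def read_soil_moisture() -> int:
    """Raw ADC value; depending on the sensor and wiring, a LOWER number
    can mean WETTER soil, exactly the inversion the LLM tripped over."""
    return 412  # placeholder for a real ADC read

def set_grow_light(on: bool) -> None: ...
def run_pump(seconds: float) -> None: ...

status = (f"moisture={read_soil_moisture()} light=dark hour=14\n"
          "Decide: water? light on?")
# 'status' plus the list of methods above goes into the LLM prompt each
# cycle; the model answers with which methods to call.
```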

To say that this resulted in a dramatic failure, along with what reads like an emotional breakdown on the part of the LLM, would be an understatement. The LLM insisted on turning the grow light on when it should be off, and produced the most erratic watering responses imaginable based on completely incorrect interpretations of the ADC data, flipping dry and wet. After this episode, the poor chili plant’s soil was absolutely saturated and is still trying to dry out, while the ongoing LLM experiment, now with an empty water tank, has the grow light blasting more often than a weed farm.

So far it seems that the humble state machine’s job is still safe from being taken over by ‘AI’, and that not even brown-thumbed folk can kill plants this efficiently.