MCP Blender Addon Lets AI Take The Wheel And Wield The Tools

Want to give an AI the ability to do stuff in Blender? The BlenderMCP addon does exactly that, connecting the open-source 3D modeling software Blender to Anthropic’s Claude AI via MCP (Model Context Protocol), which means Claude can directly use Blender and its tools in a meaningful way.

MCP is a framework that lets AI systems like LLMs (Large Language Models) exchange information in a way that makes it easier to interface with other systems. We’ve seen LLMs tied experimentally into other software before (for example, to enable more natural conversations with game NPCs), but without a framework like MCP such exchanges are bespoke and effectively stateless. MCP becomes very useful for letting LLMs use software tools and perform work that involves an iterative approach, better preserving the history and context of the task at hand.
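To make that concrete, here is a minimal sketch of exposing a tool over MCP with the official Python SDK. This is our own illustration, not BlenderMCP’s actual code: the tool name, the JSON message format, and port 9876 are assumptions.

```python
# Minimal MCP server sketch (illustrative, not BlenderMCP's actual code).
# Requires the official "mcp" Python SDK.
import json
import socket

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("blender-sketch")

@mcp.tool()
def run_blender_code(code: str) -> str:
    """Forward a Python snippet to a listener running inside Blender."""
    # The port and message format here are assumptions for illustration.
    with socket.create_connection(("localhost", 9876)) as sock:
        sock.sendall(json.dumps({"type": "execute_code", "code": code}).encode())
        return sock.recv(65536).decode()

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP client can call it
```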

Unlike the beach scene above, which used 3D assets, this scene was created from scratch with the help of a reference image.

Using MCP also provides some standardization, which means that while the BlenderMCP project integrates with Claude (or, alternatively, the Cursor AI editor), it could — with the right configuration — be pointed at a suitable locally-hosted LLM instead. It wouldn’t be as capable as the commercial offerings, but it would be entirely private.

Embedded below are three videos that really show what this tool can do. In the first, watch it create a beach scene using assets from a public 3D asset library. In the second, it creates a scene from scratch using a reference image (a ‘low-poly cabin in the woods’). The third shows that same scene turned into a 3D environment on a web page, navigable in any web browser.

Back in 2022 we saw Blender connected to an image generator to texture objects, but this is considerably more capable. It’s a fascinating combination, and if you’re thinking of trying it out, just be aware that it relies on allowing arbitrary Python code to run in Blender, which is powerful but should be deployed with caution.
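The Blender side of such a bridge boils down to something like the sketch below (again our illustration, not the addon’s code), which is exactly why caution is warranted: whatever Python arrives gets executed with the same reach as the Blender console, including full access to `bpy` and your machine.

```python
# Simplified sketch of a Blender-side handler; only meaningful inside
# Blender, where the bpy module exists. Not the addon's actual code.
import bpy

def handle_request(payload: dict) -> dict:
    """Run whatever Python the MCP server forwarded along."""
    if payload.get("type") == "execute_code":
        namespace = {"bpy": bpy}
        exec(payload["code"], namespace)  # arbitrary code execution, by design
        return {"status": "ok"}
    return {"status": "unknown command"}
```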

Continue reading “MCP Blender Addon Lets AI Take The Wheel And Wield The Tools”

Welcome Your New AI (LEGO) Overlord

You’d think a paper from a research team at Carnegie Mellon would be short on fun. But the team behind LegoGPT would prove you wrong. The system lets you enter a text prompt and produces physically stable LEGO models. They’ve done more than just write a paper: there’s a GitHub repo and a running demo, too.

The authors note that automated generation of 3D shapes has been done before. The novelty here is incorporating real physics constraints and planning the resulting shape in LEGO-sized chunks. The heart of the project is a training dataset mapping text to brick layouts; the generation itself is done by fine-tuning one of the LLaMA models. Training involved converting LEGO designs into tokens, just like a chatbot converts words into tokens.
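As a toy illustration of what “designs into tokens” might look like (the paper’s exact serialization may differ), each brick can be flattened into plain text that a standard tokenizer chews on like any other string:

```python
# Toy brick serialization; our assumption, not LegoGPT's exact format.
bricks = [
    {"size": (2, 4), "pos": (0, 0, 0)},
    {"size": (1, 2), "pos": (2, 0, 1)},
]

def serialize(bricks: list[dict]) -> str:
    # One brick per line: "WxL (x,y,z)", read by the model as ordinary text.
    return "\n".join(
        f"{w}x{l} ({x},{y},{z})"
        for (w, l), (x, y, z) in ((b["size"], b["pos"]) for b in bricks)
    )

print(serialize(bricks))
# 2x4 (0,0,0)
# 1x2 (2,0,1)
```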

There are a lot of parts involved in the creation of the designs. They convert meshes to LEGO in one step using 1×1, 1×2, 1×4, 1×6, 1×8, 2×2, 2×4, and 2×6 bricks. Then they evaluate the stability of the design. Finally, they render an image and ask GPT-4o to produce captions to go with it.
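For a feel of the mesh-to-brick step, here’s a naive greedy sketch that covers one voxel layer with the brick sizes listed above. It ignores brick rotation and stability entirely, which the real pipeline does not:

```python
# Greedy single-layer "legolization" sketch; purely illustrative,
# the authors' actual method is more involved.
BRICKS = [(2, 6), (2, 4), (2, 2), (1, 8), (1, 6), (1, 4), (1, 2), (1, 1)]

def fits(layer, x, y, w, l):
    rows, cols = len(layer), len(layer[0])
    if x + w > rows or y + l > cols:
        return False
    return all(layer[i][j] for i in range(x, x + w) for j in range(y, y + l))

def cover_layer(layer):
    """Place the largest brick that fits on each still-uncovered cell."""
    placed = []
    for x in range(len(layer)):
        for y in range(len(layer[0])):
            if not layer[x][y]:
                continue
            for w, l in BRICKS:
                if fits(layer, x, y, w, l):
                    placed.append(((w, l), (x, y)))
                    for i in range(x, x + w):      # mark cells as covered
                        for j in range(y, y + l):
                            layer[i][j] = False
                    break
    return placed

print(cover_layer([[True] * 4 for _ in range(2)]))  # -> [((2, 4), (0, 0))]
```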

The most interesting example is when they feed the designs to robot arms and let them assemble the model. From text to LEGO with no human intervention! Sounds like something from a bad movie.

We wonder: if they added more advanced LEGO sets, could we ask for our own Turing machine?

An LLM For The Raspberry Pi

Microsoft’s latest Phi-4 LLM has 14 billion parameters that require about 11 GB of storage. Can you run it on a Raspberry Pi? Get serious. However, the Phi-4-mini-reasoning model is a cut-down version with “only” 3.8 billion parameters that requires 3.2 GB. That’s more realistic and, in a recent video, [Gary Explains] tells you how to add this LLM to your Raspberry Pi arsenal.

The version [Gary] uses has four-bit quantization and, as you might expect, the performance isn’t going to be stellar. For those versed in LLM lingo, quantization refers to how the weights are stored: fewer bits per weight means a smaller model at the cost of some precision. In general, the more parameters a model has, the more things it can figure out.
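Here’s a bare-bones sketch of the idea behind four-bit quantization (real schemes are blockwise and considerably cleverer): each weight is stored as an integer from 0 to 15, plus a shared scale and offset for reconstructing approximate values.

```python
# Toy 4-bit quantization: 16 levels per weight plus one scale/offset.
import numpy as np

def quantize_4bit(w: np.ndarray):
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 15.0 or 1.0          # guard against constant weights
    q = np.round((w - lo) / scale).astype(np.uint8)  # values in [0, 15]
    return q, scale, lo

def dequantize(q, scale, lo):
    return q.astype(np.float32) * scale + lo

w = np.random.randn(8).astype(np.float32)
q, s, lo = quantize_4bit(w)
print(np.abs(w - dequantize(q, s, lo)).max())  # small, but nonzero, error
```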

Continue reading “An LLM For The Raspberry Pi”

AI Brings Play-by-Play Commentary To Pong

While most of us won’t ever play Wimbledon, we can play Pong. But it isn’t the same without the thrill of a sportscaster’s commentary during the game. Thanks to [Parth Parikh] and an LLM, you can now watch Pong matches with play-by-play commentary. You can see the very cool result in the video below — the game itself starts around the 2:50 mark. Sadly, you don’t get to play, though it seems like it wouldn’t be that hard to wire yourself in with a little programming.

The game features multiple AI players and two announcers. There are 15 years of tournaments, including four majors, for a total of 60 events. In the 16th year, the two top players face off in the World Championship Final.

There are several interesting techniques at work here. For one, each game action is logged as an event that generates metrics and is assigned a priority. If an important event occurs, the commentary pauses to announce it and then picks back up where it left off.
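In code, that prioritization might look something like the following sketch; this is our guess at the shape of the system, not [Parth]’s actual implementation.

```python
# Priority-ordered commentary queue sketch (illustrative only).
import heapq

class CommentaryQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps same-priority events in order

    def log(self, priority: int, text: str):
        # heapq is a min-heap, so negate priority to pop biggest first
        heapq.heappush(self._heap, (-priority, self._seq, text))
        self._seq += 1

    def next_line(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = CommentaryQueue()
q.log(1, "Rally continues, five hits and counting...")
q.log(9, "Match point!")
print(q.next_line())  # "Match point!" jumps the queue
```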

We really want to see a one- or two-player human version of this. Please tell us if you take on that challenge. Even if you don’t write it, maybe the AI can write it for you.

Continue reading “AI Brings Play-by-Play Commentary To Pong”

Comparing ‘AI’ For Basic Plant Care With Human Brown Thumbs

The future of healthy indoor plants, courtesy of AI. (Credit: [Liam])
Like so many of us, [Liam] has a big problem. Whether it’s the curse of Brown Thumbs or something else, those darn houseplants just keep dying, despite guides insisting that a modicum of daily care is all it takes to keep them from wilting, even without opting for succulents or cacti. In a fit of despair, [Liam] decided to pin his hopes on what we have come to accept as the Savior of Humankind, namely ‘AI’, which can stand for a lot of things, but is definitely really smart and can even generate pretty pictures, something the average human cannot. Hence it was time to let an LLM do all the smart plant-caring stuff with ‘PlantMom’.

Since LLMs so far don’t come with physical appendages by default, some hardware had to be strung together to measure parameters like light, temperature, and soil moisture. Add to this a grow light and a water pump, and all that remained was to tell the LLM, via an extensive prompt containing Python code, what it should do (keep the plant alive) and which Python methods were available. Then it was just a matter of letting Google’s Gemma 3 handle it.
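The resulting control loop presumably looks something like this sketch; the sensor values, prompt wording, and method names are all illustrative stand-ins rather than [Liam]’s actual code.

```python
# Illustrative PlantMom-style loop; names and readings are assumptions.
def read_sensors() -> dict:
    # Stand-ins for real ADC reads of light, temperature, soil moisture.
    return {"light": 412, "temp_c": 23.5, "soil_adc": 789}

def water_plant(seconds: int) -> None:
    print(f"pump on for {seconds}s")

def set_grow_light(on: bool) -> None:
    print(f"grow light {'on' if on else 'off'}")

PROMPT = """You care for a chili plant. Sensor readings: {readings}.
Reply with exactly one call: water_plant(seconds) or set_grow_light(on)."""

def control_step(llm) -> str:
    """llm is any callable mapping a prompt string to a reply string."""
    reply = llm(PROMPT.format(readings=read_sensors())).strip()
    # Parsing free-form model output is precisely where things go sideways.
    if reply.startswith("water_plant"):
        water_plant(5)
    elif reply.startswith("set_grow_light"):
        set_grow_light("True" in reply)
    return reply
```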

To say that this resulted in a dramatic failure, along with what reads like an emotional breakdown on the part of the LLM, would be an understatement. The LLM insisted on turning the grow light on when it should be off, and produced the most erratic watering responses imaginable, based on wholly incorrect interpretations of the ADC data that swapped dry and wet. After this episode the poor chili plant’s soil was thoroughly saturated and is still trying to dry out, while the ongoing LLM experiment, now with an empty water tank, has the grow light blasting more often than a weed farm.

So far it seems that the humble state machine’s job is still safe from being taken over by ‘AI’, and that not even brown-thumbed folk can kill plants this efficiently.
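For contrast, here’s roughly what that humble state machine amounts to. The thresholds are made-up numbers, and lower ADC reading meaning drier soil is an assumption on our part:

```python
# The boring-but-reliable alternative: a threshold-based controller.
def state_machine_step(light: int, soil_adc: int, hour: int) -> list[str]:
    actions = []
    if 6 <= hour < 22 and light < 300:   # daytime and too dark: light on
        actions.append("grow_light_on")
    else:
        actions.append("grow_light_off")
    if soil_adc < 500:                   # assume lower reading = drier soil
        actions.append("water_5_seconds")
    return actions

print(state_machine_step(light=120, soil_adc=430, hour=14))
# ['grow_light_on', 'water_5_seconds']
```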

Illustration of hands knitting a DNA double helix from yarn. The image text reads: “Evo 2 doesn’t just copy existing DNA -- it creates truly new sequences not found in nature that scientists can test for useful properties.”

LLMs Coming For A DNA Sequence Near You

While tools like CRISPR have blown the field of genome hacking wide open, being able to predict what will happen when you tinker with the code underlying the living things on our planet is still tricky. Researchers at Stanford hope their new Evo 2 DNA generative AI tool can help.

Trained on a dataset of over 100,000 organisms, from bacteria to humans, the system can quickly determine which mutations contribute to certain diseases and which are mostly harmless. According to the researchers, one “area we are hopeful about is using Evo 2 for designing new genetic sequences with specific functions of interest.”

To that end, the system can also generate gene sequences from a starting prompt, like any other LLM, and can cross-reference the results to see whether a sequence already occurs in nature, which helps predict what it might do in real life. These synthetic sequences can then be built using CRISPR or similar techniques in the lab for testing. While the prospect of building our own Moya is exciting, we do wonder what negative consequences could come from this technology, despite the hand-wavy mention of not training the model on viruses “to prevent Evo 2 from being used to create new or more dangerous diseases.”
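Conceptually, the generate-then-check loop described above reduces to something like this sketch. Evo 2’s real interface is different, and the model method shown is hypothetical; real pipelines would use BLAST or similar rather than substring search.

```python
# Conceptual generate-then-check sketch; Evo 2's actual API differs.
def generate_sequence(model, prompt_dna: str, n_bases: int = 200) -> str:
    """Autoregressive sampling, one base at a time, like any other LLM."""
    seq = prompt_dna
    for _ in range(n_bases):
        seq += model.sample_next_base(seq)  # hypothetical method
    return seq

def is_novel(seq: str, reference_genomes: list[str], k: int = 31) -> bool:
    """Naive novelty check: does any k-base window appear verbatim in the
    reference set? A crude stand-in for a real sequence-similarity search."""
    return not any(
        seq[i:i + k] in genome
        for genome in reference_genomes
        for i in range(len(seq) - k + 1)
    )
```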

We’ve got you covered if you need to get your own biohacking space set up for DNA gels, or if you want to find out more about powering living computers with electricity. If you’re curious about other interesting uses for machine learning, how about a dolphin translator or discovering better battery materials?


DolphinGemma Seeks To Speak To Dolphins

Most people have wished at some point for the ability to talk to other animals, at least until realizing their cat would mostly insult them and demand better service. Either way, researchers are getting closer to a dolphin translator.

DolphinGemma is an upcoming LLM built on recordings from the Wild Dolphin Project. Trained on the hours and hours of dolphin sounds gathered by researchers over the decades, the model will, it’s hoped, allow us to communicate more effectively with the second most intelligent species on the planet.

The LLM is designed to run in the field on Google Pixel phones, since it’s based on Google’s lightweight Gemma models, which is a bit less cumbersome than hauling a mainframe along on a dive. The Wild Dolphin Project currently uses the Georgia Tech-developed CHAT (Cetacean Hearing Augmentation Telemetry) device, which has a Pixel 6 at its heart, but the newer system will be bumped up to a Pixel 9 to take advantage of all those shiny new AI processing advances. Hopefully, we’ll have a better chance of catching it when they say, “So long, and thanks for all the fish.”

If you’re curious about other mysterious languages being deciphered by LLMs, we have you covered.

Continue reading “DolphinGemma Seeks To Speak To Dolphins”