Welcome Your New AI (LEGO) Overlord

You’d think a paper from a research team at Carnegie Mellon would be short on fun. But the team behind LegoGPT would prove you wrong. The system takes a text prompt and produces a physically stable LEGO model. They’ve done more than just a paper. You can find a GitHub repo and a running demo, too.

The authors note that automated generation of 3D shapes has been done before. The real topic of interest is incorporating physics constraints and planning the resulting shape in LEGO-sized chunks. The project’s central artifact is a training dataset that maps text to LEGO shapes; the heavy lifting is done by one of the LLaMA models. Training involved converting LEGO designs into tokens, just like a chatbot converts words into tokens.
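
To get a feel for the idea, here’s a sketch of our own (the serialization format is hypothetical, not necessarily LegoGPT’s): flatten a brick layout into plain text that a standard tokenizer can chew on, just like a sentence.

```python
# Our own illustrative sketch of the tokenization idea; the exact
# text format here is a guess, not LegoGPT's actual encoding.
bricks = [
    {"size": (2, 4), "pos": (0, 0, 0)},  # a 2x4 brick at the origin
    {"size": (1, 2), "pos": (2, 0, 1)},  # a 1x2 brick one layer up
]

def to_token_string(design):
    """Flatten a brick list into text an LLM can train on."""
    return " ".join(
        f"{w}x{l} ({x},{y},{z})"
        for (w, l), (x, y, z) in
        ((b["size"], b["pos"]) for b in design)
    )

print(to_token_string(bricks))  # 2x4 (0,0,0) 1x2 (2,0,1)
```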

There are a lot of parts involved in creating the designs. They convert meshes to LEGO in one step using 1×1, 1×2, 1×4, 1×6, 1×8, 2×2, 2×4, and 2×6 bricks. Then they evaluate the stability of the design. Finally, they render an image and ask GPT-4o to produce captions to go with it.
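
As a toy model of that mesh-to-brick step (a minimal sketch of ours, using a greedy strategy the paper doesn’t necessarily use, and skipping the stability check entirely), here’s one layer of a voxel grid being covered by the allowed brick sizes:

```python
# Toy legoization of a single voxel layer: greedily place the largest
# allowed brick that fits. This is our simplification; the real
# pipeline also verifies physical stability of the whole assembly.
BRICKS = [(2, 6), (2, 4), (1, 8), (1, 6), (2, 2), (1, 4), (1, 2), (1, 1)]

def fits(grid, r, c, h, w):
    if r + h > len(grid) or c + w > len(grid[0]):
        return False
    return all(grid[i][j] == 1
               for i in range(r, r + h) for j in range(c, c + w))

def cover_layer(grid):
    """Return (brick, row, col) placements covering every filled cell."""
    placements = []
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] != 1:
                continue
            for h, w in BRICKS:
                for bh, bw in ((h, w), (w, h)):  # both orientations
                    if fits(grid, r, c, bh, bw):
                        for i in range(r, r + bh):
                            for j in range(c, c + bw):
                                grid[i][j] = 2  # mark as covered
                        placements.append(((bh, bw), r, c))
                        break
                else:
                    continue
                break
    return placements

layer = [[1, 1, 1, 1],
         [1, 1, 1, 1]]
print(cover_layer(layer))  # [((2, 4), 0, 0)] -- one 2x4 brick
```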

The most interesting example is when they feed the designs to robot arms and let them build the result. From text to LEGO with no human intervention! Sounds like something from a bad movie.

We wonder: if they added the more advanced LEGO sets, could we ask for our own Turing machine?

Hackaday Links: May 11, 2025

Did artificial intelligence just jump the shark? Maybe so, and it came from the legal world of all places, with this report of an AI-generated victim impact statement. In an apparent first, the family of an Arizona man killed in a road rage incident in 2021 used AI to bring the victim back to life to testify during the sentencing phase of his killer’s trial. The video was created by the sister and brother-in-law of the 37-year-old victim using old photos and videos, and was quite well done, despite the normal uncanny valley stuff around lip-syncing that seems to be the fatal flaw for every deep-fake video we’ve seen so far. The victim’s beard is also strangely immobile, which we found off-putting.

Continue reading “Hackaday Links: May 11, 2025”

This Week In Security: Encrypted Messaging, NSO’s Judgement, And AI CVE DDoS

Cryptographic messaging has been in the news a lot recently, like the formal audit of WhatsApp (the actual PDF). The results are good. There are some minor potential problems that the audit highlights, but they are of questionable real-world impact. The most consequential is how easy it is to add new members to a group chat. Or to put it another way, there are no cryptographic guarantees associated with adding a new user to a group.

The good news is that WhatsApp groups don’t allow new members to read previous messages, so a user getting added to a group doesn’t reveal historic messages. But a user added without being noticed can snoop on future messages. There’s an obvious question as to how this is a weakness. Isn’t it redundant, since anyone with permission to add someone to a group can already read the messages from that group?

That’s where the lack of cryptography comes in. To put it simply, the WhatsApp servers could add users to groups, even if none of the existing users actually requested the addition. It’s not a vulnerability per se, but definitely a design choice to keep in mind. Keep an eye on the members in your groups, just in case; the sketch below shows the flavor of guarantee that’s missing.
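
To make that concrete, here’s a hedged sketch (emphatically not WhatsApp’s actual protocol): if membership changes carried an authentication tag under a key that only group members hold, the server couldn’t forge an “add user” operation on its own.

```python
# Hypothetical sketch, not WhatsApp's real design: membership changes
# are authenticated with a secret only group members know, so the
# server alone can't quietly add someone.
import hmac, hashlib, json

GROUP_KEY = b"secret-shared-by-group-members"  # made-up key

def sign_change(change: dict, key: bytes) -> str:
    payload = json.dumps(change, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_change(change: dict, tag: str, key: bytes) -> bool:
    return hmac.compare_digest(tag, sign_change(change, key))

add = {"op": "add_member", "group": "retro", "user": "mallory"}
tag = sign_change(add, GROUP_KEY)          # a real member signs the add
print(verify_change(add, tag, GROUP_KEY))  # True: clients accept it
print(verify_change(add, sign_change(add, b"server-key"), GROUP_KEY))
# False: a tag forged by the server fails verification
```

Continue reading “This Week In Security: Encrypted Messaging, NSO’s Judgement, And AI CVE DDoS”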

AI Brings Play-by-Play Commentary To Pong

While most of us won’t ever play Wimbledon, we can play Pong. But it isn’t the same without the thrill of the sportscaster’s commentary during the game. Thanks to [Parth Parikh] and an LLM, you can now watch Pong matches with commentary during the game. You can see the very cool result in the video below — the game itself starts around the 2:50 mark. Sadly, you don’t get to play. It seems like it wouldn’t be that hard to wire yourself in with a little programming.

The game features multiple AI players and two announcers. There are 15 years of tournaments, including four majors, for a total of 60 events. In the 16th year, the two top players face off in the World Championship Final.

There are several interesting techniques here. For one, each action is logged as an event that generates metrics and is prioritized. If an important game event occurs, commentary pauses to announce that event and then picks back up where it left off.
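
Our guess at what that event plumbing could look like (a minimal sketch, not [Parth Parikh]’s actual code): score each game action, queue it, and let the announcer always pull the highest-priority line first.

```python
# Minimal sketch of priority-driven commentary; the scores and the
# event lines themselves are invented for illustration.
import heapq

class CommentaryQueue:
    def __init__(self):
        self._heap = []
        self._order = 0  # tie-breaker preserves insertion order

    def log(self, priority: int, line: str):
        # heapq is a min-heap, so negate priority for highest-first
        heapq.heappush(self._heap, (-priority, self._order, line))
        self._order += 1

    def next_line(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = CommentaryQueue()
q.log(2, "Nice save off the left paddle.")
q.log(9, "Match point in the World Championship Final!")
q.log(1, "The rally is now six hits long.")
print(q.next_line())  # the match-point call interrupts everything else
```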

We really want to see a one- or two-player human version of this. Please tell us if you take on that challenge. Even if you don’t write it, maybe the AI can write it for you.

Continue reading “AI Brings Play-by-Play Commentary To Pong”

Hackaday Links: April 27, 2025

Looks like the Simpsons had it right again, now that an Australian radio station has been caught using an AI-generated DJ for their midday slot. Station CADA, a Sydney-based broadcaster that’s part of the Australian Radio Network, revealed that “Workdays with Thy” isn’t actually hosted by a person; rather, “Thy” is a generative AI text-to-speech system that has been on the air since November. An actual employee of the ARN finance department was used for Thy’s voice model and her headshot, which adds a bit to the creepy factor.

Continue reading “Hackaday Links: April 27, 2025”

DolphinGemma Seeks To Speak To Dolphins

Most people have wished for the ability to talk to other animals at some point, at least until they realized their cat would mostly insult them and ask for better service. Either way, researchers are getting closer to a dolphin translator.

DolphinGemma is an upcoming LLM built on recordings from the Wild Dolphin Project. With the hours and hours of dolphin sounds recorded by researchers over the decades, the hope is that the LLM will let us communicate more effectively with the second most intelligent species on the planet.

The LLM is designed to run in the field on Google Pixel phones, since it’s based on Google’s in-house Gemma models, which is a bit less cumbersome than hauling a mainframe on a dive. The Wild Dolphin Project currently uses the Georgia Tech-developed CHAT (Cetacean Hearing Augmentation Telemetry) device, which has a Pixel 6 at its heart, but the newer system will be bumped up to a Pixel 9 to take advantage of all those shiny new AI processing advances. Hopefully, we’ll have a better chance of catching when they say, “So long and thanks for all the fish.”

If you’re curious about other mysterious languages being deciphered by LLMs, we have you covered.

Continue reading “DolphinGemma Seeks To Speak To Dolphins”

Vibing, AI Style

This week, the hackerverse was full of “vibe coding”. If you’re not caught up on your AI buzzwords, this is the catchy name coined by [Andrej Karpathy] that refers to basically just YOLOing it with AI coding assistants. It’s the AI-fueled version of typing what you want into StackOverflow and picking from the top answers. Only, with the current state of LLMs, it’ll probably work after a while of iterating back and forth with the machine.

It’s a tempting vision, and it probably works for a lot of simple applications, in popular languages, or generally where the ground is already well trodden. And where the stakes are low, as [Al Williams] pointed out while we were talking about vibing on the podcast. Can you imagine vibe-coded ATM software that probably gives you the right amount of money? Vibe-coding automotive ECU software?

While vibe coding seems very liberating and hands-off, it really just changes the burden of doing the coding yourself into making sure that the LLM is giving you what you want, and when it doesn’t, refining your prompts until it does. It’s more like editing and auditing code than authoring it. And while we have no doubt that a stellar programmer like [Karpathy] can verify that he’s getting what he wants, write the correct unit tests, and so on, we’re not sure it’s the panacea that is being proclaimed for folks who don’t already know how to code.

Vibe coding should probably be reserved for people who already are expert coders, and for trivial projects. Just the way you wouldn’t let grade-school kids use calculators until they’ve mastered the basics of math by themselves, you shouldn’t let junior programmers vibe code: It simultaneously demands too much knowledge to corral the LLM, while side-stepping any of the learning that would come from doing it yourself.

And then there’s the security side of vibe coding, which opens up a whole attack surface. If the LLM isn’t up to industry standards on simple things like input sanitization, your vibed code probably shouldn’t be anywhere near the Internet.
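
For a concrete taste, here’s the classic sanitization miss (a made-up minimal example, with Python’s built-in sqlite3 standing in for whatever backend the LLM spat out):

```python
# Made-up minimal example: the same query vibe-coded (string pasting)
# versus done right (parameterized), using Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "nobody' OR '1'='1"  # a classic injection payload

# Vibe-coded: the input is spliced straight into the SQL
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print(rows)  # [('alice',)] -- the OR clause matches every row

# Sanitized: a parameterized query treats the input as data, not SQL
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- no user is literally named that
```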

So should you be vibing? Sure! If you feel competent overseeing what [Dan] described as “the worst summer intern ever”, and the stakes are low, then it’s absolutely a fun way to kick the tires and see what the tools are capable of. Just go into it all with reasonable expectations.