
Hackaday Links: April 27, 2025

Looks like The Simpsons had it right again, now that an Australian radio station has been caught using an AI-generated DJ for its midday slot. Station CADA, a Sydney-based broadcaster that’s part of the Australian Radio Network, revealed that “Workdays with Thy” isn’t actually hosted by a person; rather, “Thy” is a generative AI text-to-speech system that has been on the air since November. Thy’s voice and headshot were modeled on an actual employee of the ARN finance department, which adds a bit to the creepy factor.


An illustration of two translucent blue hands knitting a DNA double helix of yellow, green, and red base pairs from three colors of yarn. Text in white to the left of the hands reads: "Evo 2 doesn't just copy existing DNA -- it creates truly new sequences not found in nature that scientists can test for useful properties."

LLMs Coming For A DNA Sequence Near You

While tools like CRISPR have blown the field of genome hacking wide open, being able to predict what will happen when you tinker with the code underlying the living things on our planet is still tricky. Researchers at Stanford hope their new Evo 2 DNA generative AI tool can help.

Trained on a dataset of over 100,000 organisms, from bacteria to humans, the system can quickly determine which mutations contribute to certain diseases and which are mostly harmless. According to the researchers, an “area we are hopeful about is using Evo 2 for designing new genetic sequences with specific functions of interest.”

To that end, the system can generate gene sequences from a starting prompt like any other LLM, and can cross-reference the results to see whether a sequence already occurs in nature, which aids in predicting what it might do in real life. These synthetic sequences can then be made using CRISPR or similar techniques in the lab for testing. While the prospect of building our own Moya is exciting, we do wonder what possible negative consequences could come from this technology, despite the hand-wavy mention of not training the model on viruses “to prevent Evo 2 from being used to create new or more dangerous diseases.”
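We obviously can’t show Evo 2’s internals here, but the “does this already occur in nature?” check is a fun one to think about. Here’s a minimal, purely illustrative Python sketch of the idea using exact k-mer matching; the k-mer size, function names, and toy sequences are all our own invention, not the researchers’ pipeline:

```python
# Illustrative sketch only, not the Evo 2 API: score how "novel" a
# generated DNA sequence is by checking its 20-mers against a set of
# known reference sequences.

KMER = 20

def kmers(seq: str, k: int = KMER):
    """Yield every k-length window of a DNA sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i : i + k]

def novelty(candidate: str, references: list[str]) -> float:
    """Fraction of the candidate's k-mers never seen in any reference."""
    known = {km for ref in references for km in kmers(ref)}
    windows = list(kmers(candidate))
    return sum(km not in known for km in windows) / len(windows)

# Toy data: a "generated" candidate and two tiny "reference genomes".
generated = "ATGCGTACGTTAGCATCGATCGGATCCATGGAAATTTCCC"
references = [
    "ATGCGTACGTTAGCATCGATCGAA" * 3,
    "GGATCCATGGAAATTTCCCTTTGG" * 3,
]
print(f"{novelty(generated, references):.0%} of k-mers are novel")
```

A real pipeline would use something like BLAST against full genome databases rather than exact matching, but the principle is the same: the less a generated sequence resembles anything already catalogued, the less confidently anyone can predict its behavior.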

We’ve got you covered if you need to get your own biohacking space set up for DNA gels, or if you want to find out more about powering living computers using electricity. If you’re more curious about other interesting uses for machine learning, how about a dolphin translator or discovering better battery materials?

A black and blue swirl background with a blue dolphin logo over the word “DolphinGemma”, with “Dolphin” in white and “Gemma” in blue.

DolphinGemma Seeks To Speak To Dolphins

Most people have wished at some point for the ability to talk to other animals, at least until they realize their cat would mostly insult them and ask for better service. Still, researchers are getting closer to a dolphin translator.

DolphinGemma is an upcoming LLM based on recordings from the Wild Dolphin Project. The hope is that, trained on the hours and hours of dolphin sounds researchers have recorded over the decades, the LLM will allow us to communicate more effectively with the second most intelligent species on the planet.

The LLM is designed to run in the field on Google Pixel phones, since it’s built on Google’s in-house Gemma models, which is a bit less cumbersome than hauling a mainframe on a dive. The Wild Dolphin Project currently uses the Georgia Tech-developed CHAT (Cetacean Hearing Augmentation Telemetry) device, which has a Pixel 6 at its heart, but the newer system will be bumped up to a Pixel 9 to take advantage of all those shiny new AI processing advances. Hopefully, we’ll have a better chance of catching when they say, “So long and thanks for all the fish.”

If you’re curious about other mysterious languages being deciphered by LLMs, we have you covered.


Vibing, AI Style

This week, the hackerverse was full of “vibe coding”. If you’re not caught up on your AI buzzwords, this is the catchy name coined by [Andrej Karpathy] that refers to basically just YOLOing it with AI coding assistants. It’s the AI-fueled version of typing what you want into StackOverflow and picking the top answers. Only, with the current state of LLMs, it’ll probably work after a bit of iterating back and forth with the machine.

It’s a tempting vision, and it probably works for a lot of simple applications, in popular languages, or generally where the ground is already well trodden. And where the stakes are low, as [Al Williams] pointed out while we were talking about vibing on the podcast. Can you imagine vibe-coded ATM software that probably gives you the right amount of money? Vibe-coding automotive ECU software?

While vibe coding seems very liberating and hands-off, it really just changes the burden of doing the coding yourself into making sure that the LLM is giving you what you want, and when it doesn’t, refining your prompts until it does. It’s more like editing and auditing code than authoring it. And while we have no doubt that a stellar programmer like [Karpathy] can verify that he’s getting what he wants, write the correct unit tests, and so on, we’re not sure it’s the panacea that is being proclaimed for folks who don’t already know how to code.

Vibe coding should probably be reserved for people who are already expert coders, and for trivial projects. Just as you wouldn’t let grade-school kids use calculators until they’ve mastered the basics of math by themselves, you shouldn’t let junior programmers vibe code: it simultaneously demands too much knowledge to corral the LLM while side-stepping all of the learning that would come from writing the code yourself.

And then there’s the security side of vibe coding, which opens up a whole attack surface. If the LLM isn’t up to industry standards on simple things like input sanitization, your vibed code probably shouldn’t be anywhere near the Internet.
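To make that concrete, here’s the classic failure a human reviewer still has to catch, in a purely illustrative Python example of our own (no LLM actually wrote this): user input concatenated straight into SQL versus a parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, balance REAL)")
conn.execute("INSERT INTO users VALUES ('alice', 100.0)")

user_input = "alice' OR '1'='1"  # a hostile "username"

# What careless (vibed?) code might do: build the query by string
# concatenation. The injected OR clause matches every row in the table.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated:", rows)  # leaks everything

# The boring, correct version: let the driver handle quoting.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized:", rows)  # no match, as it should be
```

An LLM will happily produce either version depending on the prompt and its mood, which is exactly why someone who can tell them apart needs to stay in the loop.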

So should you be vibing? Sure! If you feel competent overseeing what [Dan] described as “the worst summer intern ever”, and the stakes are low, then it’s absolutely a fun way to kick the tires and see what the tools are capable of. Just go into it all with reasonable expectations.

Two laptops, side by side, running Llama2 in DOS.

Will It Run Llama 2? Now DOS Can

Will a 486 run Crysis? No, of course not. Will it run a large language model (LLM)? Given the huge buildout of compute power to do just that, many people would scoff at the very notion. But [Yeo Kheng Meng] is not many people.

He has set up various DOS computers to run a stripped-down version of the Llama 2 LLM, originally from Meta. More specifically, [Yeo Kheng Meng] is implementing [Andrej Karpathy]’s llama2.c library, which we have seen here before, running on Windows 98.

Llama2.c is a wonderful bit of programming that lets one run inference on a trained Llama 2 model in only seven hundred lines of C. It is seven hundred lines of modern C, however, so porting it to DOS 6.22 and the venerable i386 architecture took some doing. [Yeo Kheng Meng] documents that work and benchmarks a few retrocomputers. As painful as it may be to say: yes, a 486 or a Pentium 1 can now be counted as “retro”.

The models are not large, of course, with a TinyStories-trained 260 kB model churning out a blistering 2.08 tokens per second on a generic 486 box. Newer machines can run larger models faster, naturally. Ironically, a Pentium M ThinkPad T24 (was that really 21 years ago?) is able to run a larger 110 MB model faster than [Yeo Kheng Meng]’s modern Ryzen 5 desktop. Not because the Pentium M is blazingly fast, mind you, but because a memory allocation error prevented that model from running on the modern CPU. Slow and steady finishes the race, it seems.

This port will run on any 32-bit i386 hardware, which leaves the 16-bit regime as the next challenge. If one of you can get Llama 2 hosted locally on a 286 or a 68000-based machine, then we may have to stop asking “Does it run DOOM?” and start asking “Will it run an LLM?”


DIY AI Butler Is Simpler And More Useful Than Siri

[Geoffrey Litt] shows that getting an effective digital assistant that’s tailored to one’s own needs just needs a little DIY, and thanks to the kinds of tools that are available today, it doesn’t even have to be particularly complex. Meet Stevens, the AI assistant who provides the family with useful daily briefs. The back end? Little more than one SQLite table and a few cron jobs.

A sample of Stevens’ notebook entries, both events and things to simply remember.

Every day, Stevens sends a daily brief via Telegram that includes calendar events, appointments, weather notes, reminders, and even a fun fact for the day. Stevens isn’t send-only, either. Users can add new entries or ask questions about items through Telegram.

It’s rudimentary, but [Geoffrey] already finds it far more useful than Siri. This is unsurprising, as it has been astutely observed that big tech’s digital assistants are designed to serve their makers rather than their users. Besides, it’s also fun to have the freedom to give an assistant its own personality, something existing offerings sorely lack.

Architecture-wise, the assistant has a notebook (the single SQLite table) that gets populated with entries. These entries come from things like reading family members’ Google calendars, pulling data from a public weather API, processing delivery notices from the post office, and Telegram conversations. With a notebook of such entries (along with a date the entry is expected to be relevant), generating a daily brief is simple. After all, LLMs (Large Language Models) are amazingly good at handling and formatting natural language. That’s something even a locally-installed LLM can do with ease.
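As a minimal sketch of that notebook-plus-cron pattern (the table layout and field names here are our guesses for illustration, not [Geoffrey]’s actual schema), the whole thing can be surprisingly little code:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect("stevens.db")
conn.execute("""CREATE TABLE IF NOT EXISTS notebook (
    relevant_on TEXT,   -- ISO date the entry matters
    source      TEXT,   -- calendar, weather, mail, telegram, ...
    body        TEXT)""")

def remember(relevant_on: str, source: str, body: str) -> None:
    """One ingestion path; in practice, a cron job per data source."""
    conn.execute("INSERT INTO notebook VALUES (?, ?, ?)",
                 (relevant_on, source, body))
    conn.commit()

def daily_brief(today: str) -> str:
    """Collect today's entries; a real system would hand these to an
    LLM to rewrite in the butler's voice before sending via Telegram."""
    rows = conn.execute(
        "SELECT source, body FROM notebook WHERE relevant_on = ?"
        " ORDER BY source", (today,)).fetchall()
    return "Good morning! Today:\n" + "\n".join(
        f"- [{src}] {body}" for src, body in rows)

today = date.today().isoformat()
remember(today, "calendar", "Dentist at 14:00")
remember(today, "weather", "Rain likely after noon")
print(daily_brief(today))
```

Everything interesting happens at the edges: the cron jobs that stuff entries in, and the LLM that dresses the output up. The storage itself stays trivially simple.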

[Geoffrey] says that even this simple architecture is super useful, and it’s not a particularly complex system. He encourages anyone who’s interested to check out his project and see for themselves how useful even a minimally-informed assistant can be when it’s designed with one’s own needs in mind.

A flowchart demonstrating the exploit described.

Vibe Check: False Packages A New LLM Security Risk?

Lots of people swear by large-language model (LLM) AIs for writing code. Lots of people swear at them. Still others may be planning to exploit their peculiarities, according to [Joe Spracklen] and other researchers at UTSA. At least, the researchers have found a potential exploit in “vibe coding”.

Everyone who has used an LLM knows they have a propensity to “hallucinate”, that is, to go off the rails and create plausible-sounding gibberish. When you’re vibe coding, that gibberish is likely to make it into your program. Normally, that just means errors. If you are working in an environment that uses a package manager, however (like npm in Node.js, PyPI in Python, or CRAN in R), that plausible-sounding nonsense code may end up calling for a fake package.

A clever attacker might be able to determine what sort of false packages the LLM is hallucinating and register them on the public package index as a vector for malicious code. It’s more likely than you think: while CodeLlama was the worst offender, the most accurate model tested (GPT-4) still generated these false packages at a rate of over 5%. The researchers were able to come up with a number of mitigation strategies in their full paper, but this is a sobering reminder that an AI cannot take responsibility. Ultimately it is up to us, the programmers, to ensure the integrity and security of our code, and of the libraries we include in it.
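One cheap mitigation, and this is our own sketch rather than one from the paper: never install a dependency an LLM suggests without first checking what’s actually sitting at that name on the registry. PyPI’s JSON endpoint makes a first-pass sanity check easy, though note that mere existence proves nothing, since squatting on hallucinated names is precisely the attack:

```python
import json
import urllib.error
import urllib.request

def vet_package(package: str) -> str:
    """Rough first-pass check of an LLM-suggested dependency on PyPI.

    Existence alone is not safety: an attacker may have registered the
    hallucinated name, so also eyeball how established it looks.
    """
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError:
        return "not on PyPI at all (likely hallucinated)"
    releases = len(meta.get("releases", {}))
    return f"{releases} release(s); check maintainer and source before installing"

for name in ("requests", "surely-hallucinated-package-xyz"):
    print(f"{name}: {vet_package(name)}")
```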

We just had a rollicking discussion of vibe coding, which some of you seemed quite taken with. Others agreed that ChatGPT is the worst summer intern ever. Love it or hate it, this likely won’t be the last time we hear of security concerns brought up by this new method of programming.

Special thanks to [Wolfgang Friedrich] for sending this into our tip line.