An illustration of two translucent blue hands knitting a DNA double helix of yellow, green, and red base pairs from three colors of yarn. Text in white to the left of the hands reads: "Evo 2 doesn't just copy existing DNA -- it creates truly new sequences not found in nature that scientists can test for useful properties."

LLMs Coming For A DNA Sequence Near You

While tools like CRISPR have blown the field of genome hacking wide open, being able to predict what will happen when you tinker with the code underlying the living things on our planet is still tricky. Researchers at Stanford hope their new Evo 2 DNA generative AI tool can help.

Trained on a dataset of over 100,000 organisms from bacteria to humans, the system can quickly determine which mutations contribute to certain diseases and which are mostly harmless. According to the researchers, one “area we are hopeful about is using Evo 2 for designing new genetic sequences with specific functions of interest.”

To that end, the system can also generate gene sequences from a starting prompt like any other LLM, and it can cross-reference the results against known genomes to see whether a sequence already occurs in nature, which helps predict what it might do in real life. These synthetic sequences can then be made using CRISPR or similar techniques in the lab for testing. While the prospect of building our own Moya is exciting, we do wonder what possible negative consequences could come from this technology, despite the hand-wavy mention of not training the model on viruses “to prevent Evo 2 from being used to create new or more dangerous diseases.”
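The generate-then-check workflow is easy enough to picture in code. What follows is our own back-of-the-napkin sketch, not the actual Evo 2 interface; the model call is a hypothetical placeholder, but the “has nature seen this before?” check is real, using Biopython’s stock NCBI BLAST client to hunt for near-identical hits.

```python
# A sketch of the generate-then-check loop described above. The model
# call is a hypothetical stand-in, NOT the actual Evo 2 interface; the
# nature check uses Biopython's real NCBI BLAST client.
from Bio.Blast import NCBIWWW, NCBIXML

def generate_sequence(prompt: str) -> str:
    """Stand-in for a genomic language model's completion call."""
    raise NotImplementedError("swap in your model's generate() here")

def occurs_in_nature(sequence: str, e_cutoff: float = 1e-10) -> bool:
    """BLAST the candidate against NCBI's nt database; a very low
    E-value hit means the sequence (or a close relative) already
    exists somewhere in the tree of life."""
    handle = NCBIWWW.qblast("blastn", "nt", sequence)
    record = NCBIXML.read(handle)
    return any(hsp.expect < e_cutoff
               for alignment in record.alignments
               for hsp in alignment.hsps)

candidate = generate_sequence("ATG")  # seed with a start codon, say
if occurs_in_nature(candidate):
    print("Known sequence; existing annotations may hint at function.")
else:
    print("Novel sequence; one for the wet lab to characterize.")
```

A BLAST hit is only a hint, of course; any predicted function still has to be confirmed at the bench.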

We’ve got you covered if you need to get your own biohacking space set up for DNA gels, or if you want to find out more about powering living computers using electricity. If you’re more curious about other interesting uses for machine learning, how about a dolphin translator or discovering better battery materials?

A black and blue swirl background with a blue dolphin logo over the word “DolphinGemma”, with “Dolphin” in white and “Gemma” in blue.

DolphinGemma Seeks To Speak To Dolphins

Most people have wished for the ability to talk to other animals at some point, at least until they realize their cat would mostly insult them and ask for better service. Even so, researchers are getting closer to a dolphin translator.

DolphinGemma is an upcoming LLM trained on recordings from the Wild Dolphin Project. The hope is that, by drawing on the hours and hours of dolphin sounds researchers have captured over the decades, the LLM will allow us to communicate more effectively with the second most intelligent species on the planet.

The LLM is designed to run in the field on Google Pixel phones, since it’s built on Google’s in-house Gemma models, which is a bit less cumbersome than hauling a mainframe on a dive. The Wild Dolphin Project currently uses the Georgia Tech-developed CHAT (Cetacean Hearing Augmentation Telemetry) device, which has a Pixel 6 at its heart, but the newer system will be bumped up to a Pixel 9 to take advantage of all those shiny new AI processing advances. Hopefully, we’ll have a better chance of catching when they say, “So long and thanks for all the fish.”

If you’re curious about other mysterious languages being deciphered by LLMs, we have you covered.


Two laptops, side by side, running Llama2 in DOS.

Will It Run Llama 2? Now DOS Can

Will a 486 run Crysis? No, of course not. Will it run a large language model (LLM)? Given the huge buildout of compute power to do just that, many people would scoff at the very notion. But [Yeo Kheng Meng] is not many people.

He has set up various DOS computers to run a stripped-down version of the Llama 2 LLM, originally from Meta. More specifically, [Yeo Kheng Meng] is implementing [Andrej Karpathy]’s Llama2.c library, which we have seen here before, running on Windows 98.

Llama2.c is a wonderful bit of programming that lets one run inference on a trained Llama 2 model in only seven hundred lines of C. It is seven hundred lines of modern C, however, so porting it to DOS 6.22 and the outdated i386 architecture took some doing. [Yeo Kheng Meng] documents that work, and benchmarks a few retrocomputers. As painful as it may be to say: yes, a 486 or a Pentium 1 can now be counted as “retro”.
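For the curious, the core of what those seven hundred lines implement is a simple feedback loop: run the transformer forward to get next-token logits, sample a token, feed it back in. Here is that loop rendered in Python for readability; this is our sketch of the shape of the thing, not [Yeo Kheng Meng]’s port or [Andrej Karpathy]’s C.

```python
# The shape of Llama2.c's inference loop, rendered in Python for
# readability. forward() stands in for the transformer implemented in
# those seven hundred lines of C.
import math
import random

def sample(logits: list[float], temperature: float = 0.9) -> int:
    """Temperature sampling over next-token logits (stable softmax)."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    exps = [math.exp(l - peak) for l in scaled]
    r, acc = random.random() * sum(exps), 0.0
    for token_id, e in enumerate(exps):
        acc += e
        if acc >= r:
            return token_id
    return len(exps) - 1

def generate(forward, bos_token: int, steps: int) -> list[int]:
    """Feed each sampled token back in as the next input; the KV cache
    inside forward() is what keeps each step cheap."""
    tokens = [bos_token]
    for pos in range(steps):
        logits = forward(tokens[-1], pos)  # logits for the next token
        tokens.append(sample(logits))
    return tokens
```

Most of the remaining C goes into making that forward() call work, and work fast, on whatever floating-point hardware you happen to have.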

The models are not large, of course, with the TinyStories-trained 260 kB model churning out a blistering 2.08 tokens per second on a generic 486 box. Newer machines can run larger models faster, of course. Ironically, a Pentium M ThinkPad T24 (was that really 21 years ago?) was able to run a larger 110 MB model faster than [Yeo Kheng Meng]’s modern Ryzen 5 desktop. Not because the Pentium M is blazing fast, mind you, but because a memory allocation error prevented that model from running on the modern CPU. Slow and steady finishes the race, it seems.

This port will run on any 32-bit i386 hardware, which leaves the 16-bit regime as the next challenge. If one of you can get Llama 2 hosted locally on a 286 or a 68000-based machine, then we may have to stop asking “Does it run DOOM?” and start asking “Will it run an LLM?”


DIY AI Butler Is Simpler And More Useful Than Siri

[Geoffrey Litt] shows that getting an effective digital assistant tailored to one’s own needs takes just a little DIY, and thanks to the kinds of tools available today, it doesn’t even have to be particularly complex. Meet Stevens, the AI assistant who provides the family with useful daily briefs. The back end? Little more than one SQLite table and a few cron jobs.

A sample of Stevens’ notebook entries, both events and things to simply remember.

Every day, Stevens sends a daily brief via Telegram that includes calendar events, appointments, weather notes, reminders, and even a fun fact for the day. Stevens isn’t send-only, either. Users can add new entries or ask questions about items through Telegram.

It’s rudimentary, but [Geoffrey] already finds it far more useful than Siri. This is unsurprising, as it has been astutely observed that big tech’s digital assistants are designed to serve their makers rather than their users. Besides, it’s also fun to have the freedom to give an assistant its own personality, something existing offerings sorely lack.

Architecture-wise, the assistant has a notebook (the single SQLite table) that gets populated with entries. These entries come from things like reading family members’ Google calendars, pulling data from a public weather API, processing delivery notices from the post office, and parsing Telegram conversations. With a notebook of such entries (each tagged with the date it is expected to be relevant), generating a daily brief is simple. After all, LLMs are amazingly good at handling and formatting natural language, and that’s something even a locally-installed model can do with ease.
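[Geoffrey]’s exact schema isn’t spelled out, but the description above is enough to sketch the moving parts: one table of dated entries, a cron-launched script that gathers today’s rows, and an LLM to turn them into friendly prose. The column names, prompt, and call_llm() stand-in below are our illustrative guesses, not his actual code.

```python
# Sketch of the "one SQLite table plus cron jobs" design described
# above. Schema, prompt, and call_llm() are illustrative guesses, not
# [Geoffrey]'s actual implementation.
import sqlite3
from datetime import date

def call_llm(prompt: str) -> str:
    """Stand-in for whatever model you point this at, local or hosted."""
    raise NotImplementedError

conn = sqlite3.connect("notebook.db")
conn.execute("""CREATE TABLE IF NOT EXISTS notebook (
    id          INTEGER PRIMARY KEY,
    text        TEXT NOT NULL,  -- the entry itself
    relevant_on TEXT NOT NULL,  -- ISO date the entry matters
    source      TEXT            -- calendar, weather, telegram, ...
)""")

def add_entry(text: str, relevant_on: str, source: str) -> None:
    conn.execute(
        "INSERT INTO notebook (text, relevant_on, source) VALUES (?, ?, ?)",
        (text, relevant_on, source))
    conn.commit()

def daily_brief() -> str:
    """Run from cron each morning; pipe the result to a Telegram bot."""
    today = date.today().isoformat()
    rows = conn.execute(
        "SELECT source, text FROM notebook WHERE relevant_on = ?",
        (today,)).fetchall()
    notes = "\n".join(f"[{src}] {text}" for src, text in rows)
    return call_llm(
        "Write a short, friendly morning brief from these notes:\n" + notes)

add_entry("Dentist at 14:00", date.today().isoformat(), "calendar")
```

Swap call_llm() for a local model and the whole brief never has to leave the house.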

[Geoffrey] says that even this simple architecture is super useful. He encourages anyone who’s interested to check out his project and see for themselves how useful even a minimally-informed assistant can be when it’s designed with one’s own needs in mind.

GLaDOS Potato Assistant

This Potato Virtual Assistant Is Fully Baked

There are a number of reasons you might want to build your own smart speaker virtual assistant. Usually, getting your weather forecast from a snarky, malicious AI potato isn’t one of them, unless you’re a huge Portal fan like [Binh Pham].

[Binh Pham] built the potato incarnation of GLaDOS from the Portal 2 video game with the help of a ReSpeaker Lite kit, an ESP32-based board designed for speech recognition and voice control, which serves as an interface to Home Assistant running on a Raspberry Pi.

He resisted the temptation to use a real potato as an enclosure and wisely opted instead to print one from a 3D model of the original GLaDOS potato he found on Thingiverse. Providing the assistant with the iconic synthetic voice of GLaDOS was a matter of repackaging an existing voice model for use with Home Assistant.

Of course, all of this attention to detail would be for naught if you had to refer to the assistant as “Google” or “Alexa” to get its attention. A bit of custom modelling and on-device wake word detection later, and the cyborg tuber was ready to switch lights on and off with its signature sinister wit.

We’ve seen a number of projects that brought Portal objects to life for fans of the franchise to enjoy, even an assistant based on another incarnation of the GLaDOS character. This one adds a dimension of absurdity to the collection.


The Incomplete JSON Pretty Printer (Brought To You By Vibes)

Incomplete JSON (such as from a log that terminates unexpectedly) doesn’t parse cleanly, which means anything that usually prints JSON nicely won’t. Frustration with this is what led [Simon Willison] to make The Incomplete JSON Pretty Printer, a single-purpose web tool that pretty-prints JSON regardless of whether it’s complete or not.
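The underlying trick is straightforward enough to sketch, even if [Simon]’s version fell out of an LLM rather than a text editor: scan the text while tracking open strings and brackets, append whatever closers are missing, and hand the repaired result to an ordinary pretty-printer. A rough cut of the idea (ours, not his actual code):

```python
# One way to pretty-print truncated JSON: scan for unclosed strings and
# brackets, append the missing closers, then let the standard parser do
# the rest. A sketch of the idea, not [Simon]'s actual implementation.
import json

def close_incomplete(text: str) -> str:
    stack, in_string, escaped = [], False, False
    for ch in text:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]" and stack:
            stack.pop()
    if in_string:
        text += '"'  # close a string cut off mid-way
    return text + "".join(reversed(stack))

truncated = '{"log": [{"event": "boot", "ok": true}, {"event": "cra'
print(json.dumps(json.loads(close_incomplete(truncated)), indent=2))
```

It won’t rescue every truncation (a value chopped off mid-keyword, like tru, still won’t parse), but for log tails it gets surprisingly far.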

Making a tool to solve a particular issue is a fantastic application of software, but in this case it is also a good lead-in to some thoughts [Simon] has to share about vibe coding. The Incomplete JSON Pretty Printer is a perfect example of vibe coding, being the product of [Simon] directing an LLM to iteratively create a tool without once looking at the actual code.

Sometimes, however the machine decides to code something is fine.

[Simon] shares that the term “vibe coding” was first used in a social media post by [Andrej Karpathy], who we’ve seen share a “hello world” of GPT-based LLMs as well as how to train one in pure C, both of which are the product of a deep understanding of the subject (and fantastically educational), so he certainly knows how things work.

Anyway, [Andrej] had a very specific idea he was describing with vibe coding: that of engaging with the tool in almost a state of flow for something like a weekend project, just focused on iterating your way to what you want without fussing over the details. Why? Because doing so is new, engaging, and fun.

Since then, vibe coding as a term seems to get used to refer to any and all AI-assisted coding, a subject on which folks have quite a few thoughts (many of which were eagerly shared on a recent Ask Hackaday on the subject).

Of course, human oversight is critical to a solid and reliable development workflow. But not all software is the same. In the case of the Incomplete JSON Pretty Printer, [Simon] really doesn’t care what the code actually looks like. He got it made in a short amount of time, the tool does exactly what he wants, and it’s hard to imagine the stakes being any lower. To [Simon], however the LLM decided to do things is fine, and there’s a place for that.

A flowchart demonstrating the exploit described.

Vibe Check: False Packages A New LLM Security Risk?

Lots of people swear by large-language model (LLM) AIs for writing code. Lots of people swear at them. Still others may be planning to exploit their peculiarities, according to [Joe Spracklen] and other researchers at UTSA. At least, the researchers have found a potential exploit in ‘vibe coding’.

Everyone who has used an LLM knows they have a propensity to “hallucinate”, that is, to go off the rails and create plausible-sounding gibberish. When you’re vibe coding, that gibberish is likely to make it into your program. Normally, that just means errors. If you are working in an environment that uses a package manager, however (like npm for Node.js, PyPI for Python, or CRAN for R), that plausible-sounding nonsense code may end up calling for a fake package.

A clever attacker might be able to determine what sort of false packages the LLM is hallucinating and register them as a vector for malicious code. It’s more likely than you think: while CodeLlama was the worst offender, the most accurate model tested (GPT-4) still generated these false packages at a rate of over 5%. The researchers come up with a number of mitigation strategies in their full paper, but this is a sobering reminder that an AI cannot take responsibility. Ultimately it is up to us, the programmers, to ensure the integrity and security of our code, and of the libraries we include in it.
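One cheap mitigation you can bolt on today (our sketch, not one from the paper): before installing whatever an LLM wrote into your requirements file, ask the registry whether each name even exists. PyPI exposes a JSON endpoint that returns a 404 for unregistered names.

```python
# Before pip-installing an LLM-suggested requirements list, check each
# name against PyPI's JSON API; a 404 strongly suggests the package was
# hallucinated. A sketch only, not from the researchers' paper.
import re
import sys
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

for line in open(sys.argv[1]):
    # strip version specifiers, extras, and comments: "foo>=1.2" -> "foo"
    pkg = re.split(r"[<>=!\[;#]", line, maxsplit=1)[0].strip()
    if pkg and not exists_on_pypi(pkg):
        print(f"WARNING: '{pkg}' not on PyPI -- possible hallucination")
```

Mind the inverse risk, though: the whole exploit is that attackers register the hallucinated names, so a package existing proves nothing about its safety. Existence checks catch yesterday’s hallucinations; unfamiliar names still deserve a suspicious eye.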

We just had a rollicking discussion of vibe coding, which some of you seemed quite taken with. Others agreed that ChatGPT is the worst summer intern ever. Love it or hate it, it’s likely this won’t be the last time we hear of security concerns brought up by this new method of programming.

Special thanks to [Wolfgang Friedrich] for sending this into our tip line.