A flowchart demonstrating the exploit described.

Vibe Check: False Packages A New LLM Security Risk?

Lots of people swear by large language model (LLM) AIs for writing code. Lots of people swear at them. Still others may be planning to exploit their peculiarities, according to [Joe Spracklen] and other researchers at UTSA. At least, the researchers have found a potential exploit in ‘vibe coding’.

Everyone who has used an LLM knows they have a propensity to “hallucinate”, that is, to go off the rails and produce plausible-sounding gibberish. When you’re vibe coding, that gibberish is likely to make it into your program. Normally, that just means errors. If you are working in an environment that uses a package manager, however (like npm for Node.js, PyPI for Python, or CRAN for R), that plausible-sounding nonsense code may end up calling for a fake package.

A clever attacker might be able to determine what sort of false packages the LLM is hallucinating and publish malicious code under exactly those names. It’s more likely than you might think: while CodeLlama was the worst offender, even the most accurate model tested (ChatGPT-4) still generated these false packages at a rate of over 5%. The researchers were able to come up with a number of mitigation strategies in their full paper, but this is a sobering reminder that an AI cannot take responsibility. Ultimately it is up to us, the programmers, to ensure the integrity and security of our code, and of the libraries we include in it.
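If you want a first line of defense, a quick sanity check on anything an LLM tells you to install goes a long way. Below is a minimal sketch, our own illustration rather than anything from the paper, that flags suggested Python packages that don’t exist on PyPI at all. Keep in mind that a name resolving isn’t proof of safety, since the whole attack is to register the hallucinated name; package age, maintainer, and download history are worth a look too.

```python
# Minimal sketch: flag LLM-suggested packages that don't exist on PyPI.
# Uses PyPI's public JSON API; "totally-real-sensor-lib" is a made-up example.
import requests

def exists_on_pypi(package: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
    return resp.status_code == 200

# Hypothetical requirements suggested by a chatbot
suggested = ["numpy", "requests", "totally-real-sensor-lib"]
for name in suggested:
    status = "found" if exists_on_pypi(name) else "NOT on PyPI -- do not install"
    print(f"{name}: {status}")
```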

We just had a rollicking discussion of vibe coding, which some of you seemed quite taken with. Others agreed that ChatGPT is the worst summer intern ever. Love it or hate it, it’s likely this won’t be the last time we hear of security concerns brought up by this new method of programming.

Special thanks to [Wolfgang Friedrich] for sending this into our tip line.

A humanoid robot packs a lunch bag in the kitchen

Gemini 2.0 + Robotics = Slam Dunk?

Over on the Google blog [Joel Meares] explains how Google built the new family of Gemini Robotics models.

The bi-arm ALOHA robot equipped with Gemini 2.0 software can take general instructions and then respond dynamically to its environment as it carries out its tasks. This family of robots aims to be highly dexterous, interactive, and general-purpose by taking the sort of non-task-specific training methods that have worked so well with LLMs and applying them to robot tasks.

There are two things we here at Hackaday are wondering. Is there anything a robot will never do? And just how cherry-picked are these examples in the slick video? Let us know what you think in the comments!


Ask Hackaday: Vibe Coding

Vibe coding is the buzzword of the moment. What is it? The practice of writing software by describing the problem to an AI large language model and using the code it generates. It’s not quite as simple as just letting the AI do your work for you, because the developer is supposed to spend time honing and testing the result. Its proponents claim it gives a much more interactive and less tedious coding experience. Here at Hackaday, we are pleased to see the rest of the world catch up, because back in 2023, we were the first mainstream hardware hacking news website to embrace it, to deal with a breakfast-related emergency.

Jokes aside, though, the fad for vibe coding is something which should be taken seriously, because it’s seemingly being used in enough places that vibe-coded software will inevitably affect our lives. So here’s the Ask Hackaday: is this a clever and useful tool for making better software more quickly, or a dangerous tool for creating software nobody quite understands, containing bugs which could cause a disaster?

Our approach to writing software has always been one of incrementally building something from the ground up that satisfies the need. Readers will know that feeling of being in touch with how a project works at all levels, with a nose for immediately diagnosing any problems that might occur. If an AI writes the code for us, the feeling is that we might lose that connection, and inevitably this will lead to less experienced coders quickly getting out of their depth. Is this pessimism, or the grizzled voice of experience? We’d love to know your views in the comments. Are our new AI overlords the new senior developers? Or are they the worst summer interns ever?

Cloudflare’s AI Labyrinth Wants Bad Bots To Get Endlessly Lost

Cloudflare has gotten more active in its efforts to identify and block unauthorized bots and AI crawlers that don’t respect boundaries. Their solution? AI Labyrinth, which uses generative AI to efficiently create a diverse maze of data as a defensive measure.

This is an evolution of efforts to thwart bots and AI scrapers that don’t respect things like “no crawl” directives, and which account for an ever-growing amount of traffic. Last year we saw Cloudflare step up their game in identifying and blocking such activity, but the whole thing is akin to an arms race. Those intent on hoovering up all the data they can are constantly shifting tactics in response to mitigations, and simply identifying bad actors with honeypots and blocking them doesn’t really do the job any more. In fact, blocking requests mainly just alerts the baddies to the fact they’ve been identified.

Instead of blocking requests, Cloudflare goes in the other direction and creates an all-you-can-eat sprawl of linked AI-generated content, luring crawlers into wasting their time and resources as they happily process an endless buffet of diverse facts unrelated to the site being crawled, all while Cloudflare learns as much about them as possible.
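To get a feel for the idea, and only the idea, since Cloudflare says the Labyrinth serves pre-generated factual content and only to clients it already suspects are bots, here’s a toy sketch of our own making: every URL returns filler text plus links that lead deeper into the maze, so a crawler that ignores robots.txt just keeps walking. The vocabulary and routes are placeholders.

```python
# Toy link-maze sketch (not Cloudflare's implementation). Requires Flask.
import hashlib
from flask import Flask

app = Flask(__name__)

VOCAB = ["circuit", "alloy", "protocol", "lattice", "sensor",
         "archive", "kernel", "voltage", "matrix", "beacon"]

def filler_for(path: str) -> str:
    """Deterministic filler text derived from the path, so pages look stable."""
    digest = hashlib.sha256(path.encode()).hexdigest()
    return " ".join(VOCAB[int(c, 16) % len(VOCAB)] for c in digest)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def maze(path):
    # Every page links to five "deeper" pages, which link to five more, forever.
    base = f"/{path}".rstrip("/")
    links = "".join(f'<li><a href="{base}/{i}">related reading {i}</a></li>'
                    for i in range(5))
    return f"<p>{filler_for(path)}</p><ul>{links}</ul>"

if __name__ == "__main__":
    app.run(port=8080)  # point a misbehaving crawler here and watch it wander
```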

That’s an important point: the content generated by the Labyrinth might be pointless and irrelevant, but it isn’t nonsense. After all, anything a crawler scrapes can plausibly end up in training data, and serving up fabricated facts would essentially be increasing the amount of misinformation online as a side effect. For that reason, the human-looking data making up the Labyrinth isn’t wrong, it’s just useless.

It’s certainly a clever method of dealing with crawlers, but the way things are going it’ll probably be rendered obsolete sooner rather than later, as the next move in the arms race gets made.

How To Use LLMs For Programming Tasks

[Simon Willison] has put together a list of how, exactly, one goes about using large language models (LLMs) to help write code. If you have wondered just what the workflow and techniques look like, give it a read. It’s full of examples, strategies, and useful tips for effectively using AI assistants like ChatGPT, Claude, and others to do useful programming work.

It’s a very practical document, with [Simon] emphasizing realistic expectations and the importance of managing context: both giving the LLM clear direction, and being mindful of how much the model can fit in its ‘head’ at once. It is useful to picture an LLM as a capable and obedient but over-confident programming intern or assistant, albeit one that never gets bored or annoyed. Useful work can be done, but testing is crucial and human oversight simply cannot be automated away.
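As a concrete, and entirely hypothetical, illustration of that context-budget mindset, the sketch below packs selected source files into a prompt only until a crude character budget is exhausted, then asks a model for a review. The file names, model name, and budget are all assumptions, and it leans on the openai Python package with an API key in the environment.

```python
# Minimal sketch of building a context-limited prompt for a code review.
from pathlib import Path
from openai import OpenAI

CONTEXT_BUDGET_CHARS = 12_000  # crude stand-in for a token budget

def build_prompt(task: str, files: list[str]) -> str:
    """Concatenate source files into the prompt until the budget is spent."""
    parts = [task]
    used = len(task)
    for name in files:
        snippet = f"\n\n--- {name} ---\n{Path(name).read_text()}"
        if used + len(snippet) > CONTEXT_BUDGET_CHARS:
            break  # leave out files that would blow the context window
        parts.append(snippet)
        used += len(snippet)
    return "".join(parts)

client = OpenAI()
prompt = build_prompt(
    "Review this code and point out likely bugs, one bullet per issue.",
    ["blinky.py", "motor_driver.py"],  # hypothetical project files
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```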

Even if one has no interest in using LLMs to help in writing production code, there’s still a lot of useful work they can do to speed up the process of software development in general, especially when learning. They can help research options, interactively explore unfamiliar codebases, or prototype ideas quickly. [Simon] provides useful strategies for all these, and more.

If you have wondered how exactly glorified chatbots can meaningfully help with software development, [Simon]’s writeup hopefully gives you some new ideas. And if this is all leaving you curious about how exactly LLMs work, in the time it takes to enjoy a warm coffee you can learn how they do what they do, no math required.

A blue-gloved hand holds a glass plate with a small off-white rectangular prism approximately one quarter the area of a fingernail in cross-section.

AI Helps Researchers Discover New Structural Materials

Nanostructured metamaterials have shown a lot of promise in what they can do in the lab, but often have fatal stress concentration factors that limit their applications. Researchers have now found a strong, lightweight nanostructured carbon. [via BGR]

Using a multi-objective Bayesian optimization (MBO) algorithm trained on finite element analysis (FEA) datasets to identify the best candidate nanostructures, the researchers then brought the theoretical material to life with two-photon polymerization (2PP) photolithography. The resulting “carbon nanolattices achieve the compressive strength of carbon steels (180–360 MPa) with the density of Styrofoam (125–215 kg m⁻³) which exceeds the specific strengths of equivalent low-density materials by over an order of magnitude.”
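For a flavor of how that kind of loop works, here is a toy of our own making, not the researchers’ code, with made-up numbers standing in for the FEA results: fit surrogate models for strength and density, then rank fresh candidate geometries by an optimistic estimate of specific strength to pick what to print and test next.

```python
# Toy surrogate-model optimization sketch; all numbers are fabricated stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Pretend "FEA results": (strut radius, cell angle) -> strength, density
X = rng.uniform([0.1, 30.0], [0.5, 60.0], size=(40, 2))
strength = 200 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 5, 40)   # MPa, fake
density  = 400 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0, 5, 40)   # kg/m^3, fake

gp_strength = GaussianProcessRegressor(normalize_y=True).fit(X, strength)
gp_density  = GaussianProcessRegressor(normalize_y=True).fit(X, density)

# Score a fresh batch of candidates by optimistic specific strength
candidates = rng.uniform([0.1, 30.0], [0.5, 60.0], size=(500, 2))
mu_s, sd_s = gp_strength.predict(candidates, return_std=True)
mu_d, _    = gp_density.predict(candidates, return_std=True)
score = (mu_s + 1.5 * sd_s) / np.clip(mu_d, 1.0, None)   # explore a little

best = candidates[np.argmax(score)]
print(f"next geometry to print and test: radius={best[0]:.3f}, angle={best[1]:.1f}")
```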

While you probably shouldn’t start getting investors for your space elevator startup just yet, lighter materials like this are promising for a lot of applications, most notably more conventional aviation where fuel (or energy) prices are a big constraint on operations. As with any lab results, more work is needed until we see this in the real world, but it is nice to know that superalloys and composites aren’t the end of the road for strong and lightweight materials.

We’ve seen AI help identify battery materials already and this seems to be one avenue where generative AI isn’t just about making embarrassing photos or making us less intelligent.

USB Stick Hides Large Language Model

Large language models (LLMs) are all the rage in the generative AI world these days, with the truly large ones like GPT, LLaMA, and others using tens or even hundreds of billions of parameters to churn out their text-based responses. These typically require glacier-melting amounts of computing hardware, but the “large” in “large language models” doesn’t really need to be that big for there to be a functional, useful model. LLMs designed for limited hardware or consumer-grade PCs are available now as well, but [Binh] wanted something even smaller and more portable, so he put an LLM on a USB stick.

This USB stick isn’t just a jump drive with a bit of memory on it, though. Inside the custom 3D printed case is a Raspberry Pi Zero W running llama.cpp, a lightweight, high-performance inference engine for LLaMA-style models. Getting it onto this Pi wasn’t straightforward at all, however, as the latest version of llama.cpp targets ARMv8 and this particular Pi runs the ARMv6 instruction set. That meant that [Binh] needed to change the source code to remove the optimizations for the more modern ARM machines, but with a week’s worth of effort spent on it he finally got the model running on the older Raspberry Pi.

Getting the model to run was just one part of this project. The rest of the build was ensuring that the LLM could run on any computer without drivers and be relatively simple to use. By setting up the USB device as a composite device which presents a filesystem to the host computer, all a user has to do to interact with the LLM is create an empty text file, give it a name, and the LLM will automatically fill the file with generated text. While it’s not blindingly fast, [Binh] believes this is the first plug-and-play USB-based LLM, and we’d have to agree. It’s not the least powerful computer to ever run an LLM, though. That honor goes to this project which is able to cram one on an ESP32.
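For a sense of what the plumbing for a trick like that might look like, here is a rough sketch under our own assumptions, not [Binh]’s code: it polls a mounted folder for empty .txt files, treats the filename as the prompt, and fills each file by shelling out to a local llama.cpp binary. The mount point, binary name, and model file are all placeholders.

```python
# Rough sketch: empty text file in, generated text out (placeholder paths/flags).
import subprocess
import time
from pathlib import Path

WATCH_DIR = Path("/mnt/usb")          # hypothetical mount point of the stick
LLAMA_BIN = "./llama-cli"             # llama.cpp CLI; name varies by version
MODEL = "models/tinyllama.gguf"       # hypothetical small model file

def generate(prompt: str) -> str:
    """Ask the local llama.cpp binary for a short completion."""
    result = subprocess.run(
        [LLAMA_BIN, "-m", MODEL, "-p", prompt, "-n", "128"],
        capture_output=True, text=True, timeout=600,
    )
    return result.stdout

while True:
    for f in WATCH_DIR.glob("*.txt"):
        if f.stat().st_size == 0:                 # an empty file is a request
            prompt = f.stem.replace("_", " ")     # filename doubles as prompt
            f.write_text(generate(prompt))
    time.sleep(2)
```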
