New Linux Kernel Rules Put The Onus On Humans For AI Tool Usage

It’s fair to say that the topic of so-called ‘AI coding assistants’ is somewhat controversial. With arguments against them ranging from code quality to copyright issues, there are many valid reasons to be at least hesitant about accepting their output in a project, especially one as massive as the Linux kernel. With a recent update to the Linux kernel documentation, the use of these tools has now been formalized.

The upshot is that any commit containing code generated with such Large Language Model (LLM) tools has to be signed off by a human developer, and this human ultimately bears responsibility for the code quality as well as for any issues the code may cause, including legal ones. The use of AI tools also has to be declared with the Assisted-by: tag in contributions so that their use can be tracked.
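In practice, a patch under the new rules might carry trailers along these lines. The subject line, tool name, and developer here are invented for illustration; the Assisted-by: tag comes from the new documentation, and Signed-off-by: is the kernel’s long-standing Developer’s Certificate of Origin mechanism:

```
net: foo: fix refcount leak in probe path

(Patch description goes here.)

Assisted-by: ExampleCodeBot v1.2
Signed-off-by: Jane Developer <jane@example.org>
```

It’s the Signed-off-by: line that puts the human, rather than the tool, on the hook.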

When it comes to other open source projects, the approach varies, with NetBSD having banished anything tainted by ‘AI’, curl shuttering its bug bounty program due to AI code slop, and Mesa’s developers demanding that you understand any generated code you submit, following a tragic slop-cident.

Meanwhile, there are also rising concerns that these LLM-based tools may be killing open source through ‘vibe-coding’, along with legal concerns about whether LLM-generated code respects the original licenses of the code that went into the training set. Clearly we haven’t seen the end of these issues yet.

AI For The Skeptics: Pick Your Reasons To Be Excited

It’s odd being a technology writer in 2026, because many of the people around you will tell you that your craft is outdated. Like the buggy-whip manufacturers at the turn of the twentieth century, you’re told, the automobile (in the form of large language model AI) is on the market, and your business will soon be an anachronism. Adapt or go extinct, they say. It’s an argument I’ve found myself facing a few times over the last year in my wandering existence, and it’s forced me to think about it. What are the reasons everyone is excited about AI, and are those reasons valid? What is there to be scared of, and what are the real reasons people should be excited about it?

If We Gotta Take This Seriously, How Can We Do It?

[Image: a couple in a horse-drawn buggy, circa 1900. The future’s looking bright in the buggy-whip department! Public domain.]

I’ll start by repeating my tale from a few weeks ago, when I asked readers what AI applications would survive once the hype is over. The reaction of a friend with decades of software experience on trying an AI coding helper stuck with me: she referenced her grandfather, who had been born in rural America in the closing years of the nineteenth century, and recalled him describing the first time he saw an automobile. I agree with her that this has the potential to be a transformative technology, and while it’s entertaining to make fun of its shortcomings, as I did three years ago when the idea of what we now call vibe coding first appeared, it’s already making itself useful in some applications. Simply dismissing it is no longer appropriate, but equally, drinking freely of the Kool-Aid seems like joining yet another hype bandwagon that will inevitably derail. A middle way has to be found. Continue reading “AI For The Skeptics: Pick Your Reasons To Be Excited”

Are We Surrendering Our Thinking To Machines?

“Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” So said [Frank Herbert] in his magnum opus, Dune, or rather in the Orange Catholic Bible that made up part of the book’s rich worldbuilding. A recent study demonstrating “cognitive surrender” in large language model (LLM) users, as reported in Ars Technica, is going to add more fuel to that Butlerian fire.

Cognitive surrender is, in short, exactly what [Herbert] was warning of: giving over your thinking to machines. In the study, people were asked a series of questions and, except for the necessary “brain-only” control group, given access to a rigged LLM to help them answer. It was rigged in that it gave wrong answers 50% of the time, which, while higher than most LLMs manage, is a difference in degree, not in kind. Hallucination is unavoidable; here it was just made controllably frequent for the sake of the study.

The hallucinations in the study were errors that the participants should have been able to see through, if they’d thought about the answers. Eighty percent of the time, they did not. That is to say: presented with an obviously wrong answer from the machine, only in 20% of cases did the participants bother to question it. The remainder were experiencing what the researchers dubbed “cognitive surrender”: they turned their thinking over to the machines. There’s a lot more meat to this than we can summarize here, of course, but the whole paper is available free for your perusal.
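Put the two numbers together and the toy arithmetic is sobering: a 50% error rate that goes unquestioned 80% of the time means roughly 40% of all answers get accepted despite being wrong. A quick simulation (our own illustrative sketch, which assumes that questioning an answer always catches the error) bears this out:

```python
import random

random.seed(42)

WRONG_RATE = 0.50     # the rigged LLM erred half the time (per the study setup)
QUESTION_RATE = 0.20  # participants scrutinized only ~20% of answers

trials = 100_000
accepted_wrong = 0
for _ in range(trials):
    answer_is_wrong = random.random() < WRONG_RATE
    user_questions_it = random.random() < QUESTION_RATE
    # A wrong answer slips through whenever the user doesn't stop to check it.
    # (Assumes questioning an answer always catches the error.)
    if answer_is_wrong and not user_questions_it:
        accepted_wrong += 1

print(f"Wrong answers accepted: {accepted_wrong / trials:.1%}")  # ~40%
```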

Giving over thinking to machines is nothing new, of course; it’s probably been a couple decades since the first person drove into a lake on faulty GPS directions, for example. One might even argue that since LLMs are correct much more than 50% of the time, it is statistically wise to listen to them. In that case, however, one might be encouraged to read Dune.

Thanks to [Monika] for the tip!

So Expensive, A Caveman Can Do It

A few years back a company had an ad campaign with a discouraged caveman who was angry because the company claimed their website was “so easy, even a caveman could do it.” Maybe that inspired [JuliusBrussee] to create caveman, a tool for reducing costs when using Claude Code.

The trick is that Claude, like other LLMs, operates on tokens. Tokens aren’t quite words, but they are essentially words or word fragments, and most LLM plans charge you by the token. Fewer tokens means lower costs. However, LLMs can be quite verbose, unless you make them talk like a caveman.
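To get a feel for the savings, here’s a toy sketch of the idea. This is our own illustration, not [JuliusBrussee]’s actual code, and the whitespace word count below is only a crude stand-in for a real tokenizer, but the ratio tells the story:

```python
# Toy sketch of the "talk like a caveman" trick: terse prompts and terse
# replies both mean fewer tokens, and most API plans bill per token.
# This is our own illustration, not the actual caveman tool.

CAVEMAN_SYSTEM_PROMPT = (
    "You caveman. Use few words. No greetings, no apologies, "
    "no restating question. Short answer only."
)

verbose_reply = (
    "Certainly! Great question. To reverse a string in Python, you can "
    "take advantage of slice notation with a negative step, like so: "
    "s[::-1]. This works because slicing accepts a step argument, and a "
    "step of -1 walks the string backwards. Let me know if you need "
    "anything else!"
)
caveman_reply = "Use slice: s[::-1]. Negative step walk backwards."

def rough_tokens(text: str) -> int:
    # Crude proxy: whitespace word count. Real tokenizers emit more
    # tokens than this, but the relative savings look similar.
    return len(text.split())

print(f"verbose reply: ~{rough_tokens(verbose_reply)} tokens")
print(f"caveman reply: ~{rough_tokens(caveman_reply)} tokens")
```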

Continue reading “So Expensive, A Caveman Can Do It”

Controlling Vintage Mac OS With AI

Classic Mac OS was prized for its clean, accessible GUI when it first hit the scene in the 1980s. Back then, developers hadn’t even conceived of all the weird gewgaws that would eventually be shoehorned into modern operating systems, least of all AI agents that seem to be permeating everything these days. And yet! [SeanFDZ] found a way to cram Claude or other AI agents into the vintage Mac world.

The result of [Sean]’s work is AgentBridge, a tool for interfacing modern AI agents with vintage Mac OS (7-9). AgentBridge itself runs as an application within Mac OS. It works by reading and writing text files in a shared folder which can also be accessed by Claude or whichever AI agent is in use. AgentBridge takes commands from its “inbox”, executes them via the Mac Toolbox, and then writes the outputs to its “outbox”, where they can be picked up and processed by the AI agent. The specifics of how the shared folder works are up to you: a network share, a shared folder in an emulation environment, or just about any other setup that lets the AI agent and AgentBridge access the same folder.
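In other words, it’s a classic drop-folder protocol. As a sketch, the agent side could be as simple as the following; note that the file names and JSON command format here are our own assumptions for illustration, not AgentBridge’s documented interface:

```python
import json
import time
from pathlib import Path

# Drop-folder sketch of the AgentBridge idea: the agent side writes a
# command file into the shared "inbox", then polls the "outbox" for the
# result that the vintage Mac side writes back. File names and the JSON
# command format are our own assumptions, for illustration only.
SHARED = Path("/mnt/macshare")  # network share, emulator folder, etc.
INBOX, OUTBOX = SHARED / "inbox", SHARED / "outbox"

def send_command(cmd_id: str, action: str, **params) -> dict:
    (INBOX / f"{cmd_id}.json").write_text(
        json.dumps({"id": cmd_id, "action": action, "params": params})
    )
    result_file = OUTBOX / f"{cmd_id}.json"
    while not result_file.exists():  # poll until the Mac side replies
        time.sleep(0.5)
    result = json.loads(result_file.read_text())
    result_file.unlink()             # tidy up for the next command
    return result

# e.g. ask the Mac to open a document via the Toolbox:
# print(send_command("0001", "open_document", path="HD:Letters:memo"))
```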

It’s hard to imagine any mainstream use cases for having a fleet of AI-controlled Macintosh SE/30s. Still, that doesn’t mean we don’t find the concept hilarious. Meanwhile, have you considered the prospect of artificial intelligence running on the Commodore 64?

Ask Hackaday: What Will An LLM Be Good For In The Plateau Of Productivity?

A friend of mine has been a software developer for most of the last five decades, and has worked with everything from 1960s mainframes to the machines of today. She recently tried AI coding tools to see what all the fuss is about, as a helper for her extensive coding experience rather than as a zero-work vibe coding tool. Her reaction stuck with me; she referenced her grandfather, who had been born in rural America in the closing years of the nineteenth century, and recalled him describing the first time he saw an automobile.

Après Nous, Le Krach

The Gartner hype cycle graph. Jeremykemp, CC BY-SA 3.0.

We are living amid a wave of AI slop and unreasonable hype, so it’s an easy win to dunk on LLMs, but as the whole thing climbs towards the peak of inflated expectations on the Gartner hype cycle, perhaps it’s time to look forward instead. The current AI hype is inevitably going to crash and burn, but what comes afterwards? The long tail of the plateau of productivity will contain those applications in which LLMs are a success, but what will they be? We have yet to hack together a working crystal ball, but perhaps we can still gaze into the future. Continue reading “Ask Hackaday: What Will An LLM Be Good For In The Plateau Of Productivity?”

[Image: a grim reaper knocking on a door labelled “open source”]

What About The Droid Attack On The Repos?

You might not have noticed, but we here at Hackaday are pretty big fans of Open Source — software, hardware, you name it. We’ve also spilled our fair share of electronic ink on things people are doing with AI. So naturally when [Jeff Geerling] declares on his blog (and in a video embedded below) that AI is destroying open source, well, we had to take a look.

[Jeff]’s article highlights a problem he and many others who manage open source projects have noticed: they’re getting flooded with agentic slop pull requests (PRs). Things have gotten bad enough that GitHub will now let you turn off PRs completely, at which point you’ve given up a key piece of the ‘hub’s functionality. That ability to share openly with everyone seemed like a big source of strength for open source projects, but [Jeff] here is joining his voice with others like [Daniel Stenberg] of curl fame, who has dropped bug bounties over a flood of spurious AI-generated PRs.

It’s a problem for maintainers, to be sure, but it’s as much a human problem as an AI one. After all, someone set up that AI agent and pointed it at your project. While changing the incentive structure, like removing bug bounties, might discourage such actions, [Jeff] offers no bounties and has the same problem. Ultimately it may be necessary for open source projects to become a little less open, only allowing invited collaborators to submit PRs, which is also now an option on GitHub.

Combine invitation-only access with a strong policy against agentic AI and LLM code, and you can still run a quality project. The cost of such measures is that the random user with no connection to the project can no longer find and squash bugs. As unlikely as that sounds, it happens! Rather, it did. If the random user is just going to throw their AI agent at the problem, it’s not doing anybody any good.

First they came for our RAM, now they’re here for our repos. If it wasn’t for getting distracted by the cute cat pictures, we might just start to think vibe coding could kill open source. Extra bugs were bad enough, but now we can’t even trust the PRs to help us squash them!

Continue reading “What About The Droid Attack On The Repos?”