AI For The Skeptics: The Universal Function For Some Things Only

“Drink the Kool-Aid” is a phrase we use a lot in our community, meaning to become unreasonably infatuated with a dubious idea, technology, or company. It has its origins in 1960s psychedelia, but given that it’s popularly associated with the mass suicide of the followers of Jim Jones in Guyana, perhaps we should find something else. In the sense we use it, though, it has been flowing liberally of late with respect to AI and the hype surrounding it. This series has attempted to peer behind that hype, first by examining the motives behind all that metaphorical Kool-Aid drinking, and then by demonstrating a simple example where the technology does something useful that’s hard to do another way. That last piece touched upon perhaps the thing Hackaday readers should find most interesting: the LLM’s possibility as a universal API for useful functions.

It’s Not What An LLM Can Make, It’s What It Can Do

When we program, we use functions all the time. In most programming languages they are either built in or user-defined, and they encapsulate a piece of code that does something so it can be called repeatedly. Life without them on an 8-bit microcomputer was painful, with many GOTO statements required to achieve something similar. It’s no accident, then, that when looking at an LLM as a sentiment analysis tool in the previous article I used a function GetSentimentAnalysis(subject, text) to describe what I wanted to do. The LLM’s processing capacity was a good fit for the task in hand, so I used it as the engine behind my function, taking a piece of text and a subject and returning an integer representing sentiment. The word “do” encapsulates the point of this article: maybe the hype has got it wrong in being all about what an LLM can make, when instead it should be all about what it can do. The people who think they’ve struck gold because they can churn out content slop or make it send emails are missing this. Continue reading “AI For The Skeptics: The Universal Function For Some Things Only”
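The LLM-as-function idea can be sketched in a few lines of Python. This is a minimal illustration, not the article’s actual code: the `llm` parameter stands in for whatever LLM API you happen to use, and the -5..+5 scale and prompt wording are assumptions for the sake of the example.

```python
def get_sentiment_analysis(subject, text, llm):
    """Score sentiment toward `subject` in `text` as an integer, -5..+5.

    `llm` is any callable taking a prompt string and returning the model's
    reply as a string -- a stand-in for a real LLM API call.
    """
    prompt = (
        f"On a scale from -5 (very negative) to +5 (very positive), rate "
        f"the sentiment of the following text towards {subject}. "
        f"Reply with a single integer only.\n\n{text}"
    )
    reply = llm(prompt)
    score = int(reply.strip())     # the function's contract: an integer
    return max(-5, min(5, score))  # clamp in case the model wanders off-scale

# Usage with a canned stand-in instead of a live model:
fake_llm = lambda prompt: " 4 "
print(get_sentiment_analysis("the TEC-1G", "What a lovely little board!", fake_llm))  # prints 4
```

The point is that everything LLM-specific hides behind an ordinary function signature, so the rest of your program neither knows nor cares what engine is answering.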

Can Claude Write Z80 Assembly Code?

Betteridge’s law applies, but with help and guidance from a human who knows his stuff, [Ready Z80] was able to get a functioning game of Wordle out of the French-named LLM, which is more than we expected. It’s not as if the folks at Anthropic spent much time making sure 40-year-old opcodes were well represented in their training data, after all.

For hardware, [Ready Z80] is working with the TEC-1G, a single-board retrocomputer inspired by the TEC-1, whose design was published by the Australian hobbyist magazine “Talking Electronics” back in the 1980s. Claude actually seemed to know what that was, and that it only had a hex keypad; when [Ready Z80] corrected it, explaining that he was using a QWERTY keyboard add-on, Claude declared itself confident in its ability to write the code.

As usual for an LLM, Claude was overconfident and tossed out some nonexistent instructions, though admittedly it didn’t persist after being corrected. It’s notable that [Ready Z80] doesn’t prompt it with “Give me an implementation of Wordle in Z80 assembly for the TEC-1G”, but instead goes step-by-step, explaining exactly what he wants each section of the code to do. As [Dan Maloney] reported three years ago, it’s a bit like working with a summer intern.

In the end, they get a working game, but that was never in question. [Ready Z80] reveals over the course of the video he has the chops to have written it himself. Did using Claude make that go faster? Based on studies we’ve seen, it probably felt like it, even if it may have actually slowed him down.

Continue reading “Can Claude Write Z80 Assembly Code?”

New Linux Kernel Rules Put The Onus On Humans For AI Tool Usage

It’s fair to say that the topic of so-called ‘AI coding assistants’ is somewhat controversial. With arguments against them ranging from code quality to copyright issues, there are many valid reasons to be at least hesitant about accepting their output in a project, especially one as massive as the Linux kernel. With a recent update to the Linux kernel documentation, the use of these tools has now been formalized.

The upshot is that any commit containing code generated by such Large Language Model (LLM) tools has to be signed off by a human developer, and that human ultimately bears responsibility for the code quality as well as any issues the code may cause, including legal ones. The use of AI tools also has to be declared with the Assisted-by: tag in contributions, so that their use can be tracked.
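Concretely, that means a contribution’s commit message ends up carrying trailers along these lines. The two trailer names come from the kernel’s documented conventions; everything else here, including the tool name placeholder, is illustrative.

```
Fix null dereference in foo_probe()

[ ...commit description... ]

Assisted-by: <name of the LLM tool used>
Signed-off-by: Jane Developer <jane@example.org>
```

The Signed-off-by: line is the long-standing Developer’s Certificate of Origin attestation; the Assisted-by: line is the new disclosure.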

When it comes to other open source projects the approach varies, with NetBSD having banished anything tainted by ‘AI’, cURL shuttering its bug bounty program due to AI code slop, and Mesa’s developers demanding that you understand generated code which you submit, following a tragic slop-cident.

Meanwhile, there are also rising concerns that these LLM-based tools may be killing open source through ‘vibe-coding’, along with legal concerns over whether LLM-generated code respects the original license of the code ingested into the training model. Clearly we haven’t seen the end of these issues yet.

AI For The Skeptics: Pick Your Reasons To Be Excited

It’s odd being a technology writer in 2026, because around you are many people who will tell you that your craft is outdated. Like the manufacturers of buggy-whips at the turn of the twentieth century, the automobile (in the form of large language model AI) is on the market, and your business will soon be an anachronism. Adapt or go extinct, they tell you. It’s an argument I’ve found myself facing a few times over the last year in my wandering existence, and it’s forced me to think about it. What are the reasons everyone is excited about AI, and are those reasons valid? What is there to be scared of, and what are the real reasons people should be excited about it?

If We Gotta Take This Seriously, How Can We Do It?

A couple in a horse-drawn buggy, circa 1900
The future’s looking bright in the buggy-whip department! Public domain.

I’ll start by repeating my tale from a few weeks ago when I asked readers what AI applications would survive when the hype is over. The reaction of a friend with decades of software experience on trying an AI coding helper stuck with me; she referenced her grandfather who had been born in rural America in the closing years of the nineteenth century, and recalled him describing the first time he saw an automobile. I agree with her that this has the potential to be a transformative technology, and while it’s entertaining to make fun of its shortcomings as I did three years ago when the idea of what we now call vibe coding first appeared, it’s already making itself useful in some applications. Simply dismissing it is no longer appropriate, but equally, drinking freely of the Kool-Aid seems like joining yet another hype bandwagon that will inevitably derail. A middle way has to be found. Continue reading “AI For The Skeptics: Pick Your Reasons To Be Excited”

Are We Surrendering Our Thinking To Machines?

“Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” — so said [Frank Herbert] in his magnum opus, Dune, or rather in the Orange Catholic Bible that made up part of the book’s rich worldbuilding. A recent study demonstrating “cognitive surrender” in large language model (LLM) users, as reported in Ars Technica, is going to add more fuel to that Butlerian fire.

Cognitive surrender is, in short, exactly what [Herbert] was warning of: giving over your thinking to machines. In the study, people were asked a series of questions and, except for the necessary “brain-only” control group, given access to a rigged LLM to help them answer. It was rigged in that it gave wrong answers 50% of the time, which, while a higher error rate than most LLMs, is a difference in degree, not in kind. Hallucination is unavoidable; here it was just made controllably frequent for the sake of the study.

The hallucinations in the study were errors that the participants should have been able to see through, if they’d thought about the answers. Eighty percent of the time, they did not. That is to say: presented with an obviously wrong answer from the machine, only in 20% of cases did the participants bother to question it. The remainder were experiencing what the researchers dubbed “cognitive surrender”: they turned their thinking over to the machines. There’s a lot more meat to this than we can summarize here, of course, but the whole paper is available free for your perusal.

Giving over thinking to machines is nothing new, of course; it’s probably been a couple decades since the first person drove into a lake on faulty GPS directions, for example. One might even argue that since LLMs are correct much more than 50% of the time, it is statistically wise to listen to them. In that case, however, one might be encouraged to read Dune.

Thanks to [Monika] for the tip!

So Expensive, A Caveman Can Do It

A few years back a company had an ad campaign with a discouraged caveman who was angry because the company claimed their website was “so easy, even a caveman could do it.” Maybe that inspired [JuliusBrussee] to create caveman, a tool for reducing costs when using Claude Code.

The trick is that Claude, like other LLMs, operates on tokens. Tokens aren’t quite words; they’re words or word fragments. Most LLM plans also charge you by the token, so fewer tokens mean lower costs. However, LLMs can be quite verbose, unless you make them talk like a caveman.
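The arithmetic behind the trick is easy to sketch. The four-characters-per-token heuristic below is a common ballpark for English text (real BPE tokenizers vary), and the per-token price is purely hypothetical:

```python
def rough_tokens(text):
    # Crude heuristic: roughly 4 characters per token for English prose.
    # Real tokenizers differ, but this is fine for a cost ballpark.
    return max(1, len(text) // 4)

verbose = ("Certainly! I'd be happy to help you with that. Based on my "
           "analysis of the code, the answer to your question is 42.")
caveman = "Answer: 42."

price_per_million = 15.00  # hypothetical output price, USD per 1M tokens

for label, reply in [("verbose", verbose), ("caveman", caveman)]:
    toks = rough_tokens(reply)
    print(f"{label}: ~{toks} tokens, ~${toks * price_per_million / 1e6:.6f}")
```

Same information either way; the terse reply simply bills a fraction of the tokens, which adds up fast over a long coding session.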

Continue reading “So Expensive, A Caveman Can Do It”

Controlling Vintage Mac OS With AI

Classic Mac OS was prized for its clean, accessible GUI when it first hit the scene in the 1980s. Back then, developers hadn’t even conceived of all the weird gewgaws that would eventually be shoehorned into modern operating systems, least of all AI agents that seem to be permeating everything these days. And yet! [SeanFDZ] found a way to cram Claude or other AI agents into the vintage Mac world.

The result of [Sean]’s work is AgentBridge, a tool for interfacing modern AI agents with vintage Mac OS (7-9). AgentBridge itself runs as an application within Mac OS, reading and writing text files in a shared folder that can also be accessed by Claude or whichever AI agent is in use. AgentBridge takes commands from its “inbox”, executes them via the Mac Toolbox, and then writes the outputs to its “outbox”, where they can be picked up and processed by the AI agent. The specifics of the shared folder are up to you: a network share, a shared folder in an emulation environment, or just about any other setup that lets the AI agent and AgentBridge see the same files.

It’s hard to imagine any mainstream use cases for having a fleet of AI-controlled Macintosh SE/30s. Still, that doesn’t mean we don’t find the concept hilarious. Meanwhile, have you considered the prospect of artificial intelligence running on the Commodore 64?