A Live ISO For Those Vibe Coding Experiments

Vibe coding is all the rage at the moment, at least if you follow certain parts of the Internet. It’s very easy to dunk on it, whether by mocking the sea of people who’ve drunk the Kool-Aid and want the magic machine to make them a million-dollar app with no work, or the vibe-coded web apps with security holes you could drive a bus through.

But AI-assisted coding is now a thing that will stick around whether you like it or not, and there are many who want to dip a toe in the water to see what the fuss is about. For those who don’t quite trust the magic machines in their inner sanctum, [jscottmiller] is here with Clix, a bootable live Linux environment which puts Claude Code safely in a sandbox away from your family silver.

Physically it’s a NixOS live USB image with the Sway tiling Wayland compositor and, as he puts it, “Claude Code ready to go”. It has a shared partition for swapping files with Windows or macOS machines, and it’s persistent. The AI side of it has permissive settings, which means the mechanical overlord can reach parts of the OS you wouldn’t normally let it anywhere near; that is, after all, the point of having it in a live environment in the first place.

We can see the attraction of using an environment such as this one for experimenting without commitment, but we’d be interested to hear your views in the comments. It’s about a year since we asked you all about vibe coding, has the art moved forward in that time?

Building A Dependency-Free GPT On A Custom OS

The construction of a large language model (LLM) depends on many things: banks of GPUs, vast reams of training data, massive amounts of power, and matrix manipulation libraries like NumPy. For models with lower requirements though, it’s possible to do away with all of that, including the software dependencies. As someone who’d already built a full operating system as a C learning project, [Ethan Zhang] was no stranger to intimidating projects, and as an exercise in minimalism, he decided to build a generative pre-trained transformer (GPT) model in the kernel space of his operating system.

As with a number of other small demonstration LLMs, this was inspired by [Andrej Karpathy]’s MicroGPT, specifically by its lack of external dependencies. The first step was to strip away every unnecessary element from MooseOS, the operating system [Ethan] had previously written, including the GUI, most drivers, and the filesystem. All that’s left is the kernel, and KernelGPT runs on this. To get around the lack of a filesystem, the training data was converted into a header to keep it in memory — at only 32,000 words, this was no problem. Like the original MicroGPT, this is trained on a list of names, and predicts new names. Due to some hardware issues, [Ethan] hasn’t yet been able to test this on a physical computer, but it does work in QEMU.
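Baking data into a header is a classic trick for freestanding binaries with no filesystem: the compiler turns the data into an array that lives in the kernel image itself. A minimal sketch of the idea (the names and symbols here are illustrative, not from KernelGPT’s actual source):

```c
/* names_data.h -- hypothetical sketch of training data compiled
 * directly into the kernel image, so no filesystem is needed.
 * A real generator script would emit thousands of entries. */
static const char *training_names[] = {
    "emma", "olivia", "liam", "noah", "ava",
    /* ...the rest of the 32,000-word list... */
};

static const unsigned training_count =
    sizeof(training_names) / sizeof(training_names[0]);
```

At boot, the kernel can walk `training_names` exactly as if it had read the list from disk, which is what makes the filesystem removable in the first place.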

It’s quite impressive to see such a complex piece of software written solely in C, running directly on hardware; for a project which takes the same starting point and goes in the opposite direction, check out this browser-based implementation of MicroGPT. For more on the math behind GPTs, check out this visualization.

Continue reading “Building A Dependency-Free GPT On A Custom OS”

AI Assistant Uses ESP32

Having an AI assistant is all the rage these days, but AI assistants usually don’t know about your automation setups and may have difficulty dealing with tasks asynchronously. Enter zclaw, which gives you a personal assistant on an ESP32, backed by Anthropic, OpenAI, or OpenRouter. The whole thing fits in 888KB, and while it doesn’t host the LLM itself, it does add key capabilities to monitor and control devices connected to the ESP32.

You communicate with the assistant via Telegram. You can say things like “Remember the garage sensor is on GPIO 4.” Then later you might say: “In 20 minutes, check the garage sensor and if it is high, set GPIO 5 low.” It uses an RTOS for scheduling tasks and is aware of the time zone and common periods. Memory persists across reboots, and you can pick different personas.
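That “in 20 minutes, check a pin and act on it” behavior maps naturally onto a deferred rule that an RTOS timer fires later. Here is a hedged sketch of what such a rule might look like; the struct, function names, and stubbed GPIO calls are our own illustration, not zclaw’s actual code:

```c
/* Stand-ins for the real GPIO driver; on an ESP32 these would be
 * calls into the hardware instead of an array. */
static int pin_state[40];
static int gpio_read(int pin)          { return pin_state[pin]; }
static void gpio_write(int pin, int v) { pin_state[pin] = v; }

/* A deferred rule: after fire_after seconds, check one pin and,
 * if it matches, drive another. */
struct rule {
    long fire_after;   /* seconds until the rule runs */
    int check_pin;     /* pin to test */
    int want_level;    /* trigger when the pin reads this */
    int act_pin;       /* pin to drive */
    int act_level;     /* level to set it to */
};

/* Called by the scheduler once the rule's timer expires. */
static void run_rule(const struct rule *r)
{
    if (gpio_read(r->check_pin) == r->want_level)
        gpio_write(r->act_pin, r->act_level);
}
```

The garage example then becomes a rule of `{ 20 * 60, 4, 1, 5, 0 }`: after twenty minutes, if GPIO 4 reads high, set GPIO 5 low.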

Continue reading “AI Assistant Uses ESP32”

A grim reaper knocking on a door labelled "open source"

What About The Droid Attack On The Repos?

You might not have noticed, but we here at Hackaday are pretty big fans of Open Source — software, hardware, you name it. We’ve also spilled our fair share of electronic ink on things people are doing with AI. So naturally when [Jeff Geerling] declares on his blog (and in a video embedded below) that AI is destroying open source, well, we had to take a look.

[Jeff]’s article highlights a problem he and many others who manage open source projects have noticed: they’re getting flooded with agentic slop pull requests (PRs). It has gotten to the point that GitHub will let you turn off PRs completely, at which point you’ve given up a key piece of the ‘hub’s functionality. The ability to share openly with everyone seemed like a major source of strength for open source projects, but [Jeff] is joining his voice with others like [Daniel Stenberg] of curl fame, who has dropped bug bounties over a flood of spurious AI-generated PRs.

It’s a problem for maintainers, to be sure, but it’s as much a human problem as an AI one. After all, someone set up that AI agent and pointed it at your project. While changing the incentive structure, like removing bug bounties, might discourage such actions, [Jeff] offers no bounties and has the same problem. Ultimately it may be necessary for open source projects to become a little less open, only allowing invited collaborators to submit PRs, which is also now an option on GitHub.

Combine invitation-only access with a strong policy against agentic AI and LLM code, and you can still run a quality project. The cost of such measures is that a random user with no connection to the project can no longer find and squash bugs. As unlikely as that sounds, it happens! Rather, it did. If the random user is just going to throw their AI agent at the problem, it isn’t doing anybody any good.

First they came for our RAM, now they’re here for our repos. If it weren’t for getting distracted by the cute cat pictures, we might just start to think vibe coding could kill open source. Extra bugs were bad enough, but now we can’t even trust the PRs to help us squash them!

Continue reading “What About The Droid Attack On The Repos?”

MicroGPT Lets You Peek With Your Browser

Regardless of what you think of GPT and the associated AI hype, you have to admit that it is probably here to stay, at least in some form. But how, exactly, does it work? Well, MicroGPT will show you a very stripped-down model in your browser. It isn’t just another chatbot, though: it exposes all of its internal computations as it works.

The whole thing, of course, is highly simplified, since you don’t want billions of parameters in your browser’s user interface. There is a tutorial, and we’d suggest starting with that. The model produces name-like output by learning patterns such as common starting letters and consonant-vowel alternation.

At the start of the tutorial, the GPT spits out random characters. Then you click the train button. You’ll see a step counter climb toward 500 and the loss drop as the model learns. After 500 or so passes, the results are somewhat less random. You can click on any block in the right pane to see an explanation of how it works and its current state, and you can adjust parameters such as the number of layers.
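The “learning” on display is easier to demystify at a smaller scale. A character bigram model, the baby sibling of a GPT, already captures starting-letter and letter-pair statistics from a list of names. This is our own toy sketch, not MicroGPT’s actual code:

```c
/* Count character-pair frequencies over a list of names. The '^'
 * marker stands for "start of name", so the model also learns
 * which letters names tend to begin with. */
static int bigram[128][128];

static void train(const char **names, int n)
{
    for (int i = 0; i < n; i++) {
        int prev = '^';
        for (const char *p = names[i]; *p; p++) {
            bigram[prev][(int)*p]++;
            prev = *p;
        }
    }
}

/* Greedy "generation": return the most frequent successor of c.
 * A real model samples from a probability distribution instead. */
static int most_likely_next(int c)
{
    int best = 'a';
    for (int k = 'a'; k <= 'z'; k++)
        if (bigram[c][k] > bigram[c][best])
            best = k;
    return best;
}
```

Train it on a few names and `most_likely_next('^')` starts returning plausible first letters; a transformer does the same kind of statistics capture, just with far richer context than a single preceding character.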

Of course, the more training you do, the better the results, but you might also want to adjust the parameters to see how things get better or worse. The main page also proposes questions such as “What does a cell in the weight heatmap mean?” If you open the question, you’ll see the answer.

Overall, this is a great study aid. If you want a deeper dive than the normal hand-waving about how GPTs work, we still like the paper from [Stephen Wolfram], which is detailed enough to be worth reading, but not so detailed that you have to commit a few years to studying it.

We’ve seen a fairly complex GPT in a spreadsheet, if that is better for you.

Microsoft Uses Plagiarized AI Slop Flowchart To Explain How Git Works

It’s becoming something of a theme that machine-generated content, whether code, text, or graphics, keeps pushing people to their limits, mostly because such ‘AI slop’ is generally of outrageously poor quality. In the case of [Vincent Driessen], though, there’s also a clear copyright infringement angle. He recently found that Microsoft had bastardized a Git explainer graphic he had painstakingly made by hand back in 2010, with someone at Microsoft slapping it onto a Microsoft Learn explainer article pertaining to GitHub.

As noted in a PC Gamer article on this clear faux pas, Microsoft has since quietly removed the graphic and replaced it with something that is perhaps less sloppy, but with zero comment, and so far no response to PC Gamer’s request for comment. Of course, the Internet Archive always remembers.

What’s probably most vexing is that the ripped-off diagram isn’t even particularly good, as it has all the hallmarks of AI slop graphics: nonsensical arrows that were added or modified, and heavily mutilated text, including ‘Time’ becoming ‘Tim’ and ‘continuously merged’ becoming ‘continvuocly morged’. This makes it obvious that whoever put the graphic on the Microsoft Learn page either didn’t bother to check it, or that no human was involved in generating said page at all.

Continue reading “Microsoft Uses Plagiarized AI Slop Flowchart To Explain How Git Works”

The Requirements Of AI

The media is full of breathless reports that AI can now code and human programmers are going to be put out to pasture. We aren’t convinced. In fact, we think the “AI revolution” is just a natural evolution that we’ve seen before. Consider, for example, radios. Early on, if you wanted to have a radio, you had to build it. You may have even had to fabricate some or all of the parts. Even today, winding custom coils for a radio isn’t that unusual.

But radios became more common. You can buy the parts you need. You can even buy entire radios on an IC. You can go to the store and buy a radio that is probably better than anything you’d cobble together yourself. Even with store-bought equipment, tuning a ham radio used to be a technically challenging task. Now, you punch a few numbers in on a keypad.

The Human Element

What this misses, though, is that there’s still a human somewhere in the process; just not as many of them. Someone has to design that IC. Someone has to conceive of it in the first place. We doubt that, say, ENIAC or EDSAC was hand-wired by its designers. They figured out what they wanted, and an army of technicians probably did the work. Few, if any, of those technicians could have envisioned the machine, but they could build it.

Does that make the designers any less capable? No. If you write your code with a C compiler, should assembly programmers look down on you as inferior? Of course, they probably do, but should they?

If you have ever done any programming for most parts of the government and certain large companies, you probably know that system engineering is extremely important in those environments. An architect or system engineer collects requirements that have very formal meanings. Those requirements are decomposed through several levels. At the end, any competent programmer should be able to write code to meet the requirements. The requirements also provide a good way to test the end product.
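The payoff of that decomposition is that each leaf requirement maps directly onto both an implementation and an acceptance test. A contrived example of our own making (the requirement ID and function name are hypothetical):

```c
/* REQ-101 (hypothetical): "The function shall clamp its input to
 * the inclusive range [0, 100]." The requirement is specific enough
 * that any competent programmer can both implement it and write a
 * test that verifies it. */
static int clamp_percent(int v)
{
    if (v < 0)
        return 0;
    if (v > 100)
        return 100;
    return v;
}
```

The same sentence that tells the programmer what to build tells the tester exactly what to check: values below zero, values above one hundred, and values in range.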

Continue reading “The Requirements Of AI”