So Long, Firefox, Part One

It’s likely that Hackaday readers include a greater than average number of people who can name one special thing they did on September 23rd, 2002. On that day a new web browser was released: Phoenix version 0.1, a lightweight browser-only derivative of the hugely bloated Mozilla suite. Renamed a few times to become Firefox, it rose to challenge the once-mighty Microsoft Internet Explorer, only to be overtaken in turn by Google’s Chrome.

Now in 2025 it’s a minority browser with an estimated market share of just over 2%, and it’s safe to say that Mozilla’s take on AI and its use of advertising data have put the organization at odds with many of us who’ve kept the faith since that September day 23 years ago. Over the last few months I’ve been actively chasing alternatives, and it’s with sadness that in November 2025 I can finally say I’m Firefox-free.

Continue reading “So Long, Firefox, Part One”

Kubernetes Cluster Goes Mobile In Pet Carrier

There’s been a bit of a virtualization revolution going on for the last decade or so, where tools like Docker and LXC have made it possible to quickly deploy server applications without worrying much about dependency issues. Of course, as these tools were adopted, we needed more tools to scale them easily. Enter Kubernetes, a container orchestration platform that normally herds fleets of microservices in sprawling cloud architectures, but it turns out it’s perfectly happy running on a tiny computer stuffed in a cat carrier.

This was a build for the recent KubeCon in Atlanta, and the project’s creator [Justin] wanted it to have an AI angle, since the core compute in the backpack is an NVIDIA DGX Spark. When someone scans the QR code, the backpack takes a picture and runs it through a two-node cluster on the Spark, where a local AI model stylizes the picture and sends it back to the user. Only the AI workload runs on the Spark; [Justin] is also using a LattePanda to handle most of everything else rather than hosting the whole stack on the Spark.

To power the mobile cluster [Justin] is using a small power bank, which gives it around three hours of use before it needs to be recharged. He originally planned to use the conference WiFi as well, but it proved unreliable and he switched to a USB tether to his phone. It was a big hit with the conference goers, with someone using it around every ten minutes while he had it on his back. Of course, you don’t need a fancy NVIDIA product to run a portable Kubernetes cluster; you can always use a few old phones to run one as well.

Continue reading “Kubernetes Cluster Goes Mobile In Pet Carrier”

Hackaday Links: November 16, 2025

We make no claim to be experts on anything, but we do know that rule number one of working with big, expensive, mission-critical equipment is: Don’t break the big, expensive, mission-critical equipment. Unfortunately, though, that’s just what happened to the Deep Space Network’s 70-meter dish antenna at Goldstone, California. NASA announced the outage this week, but the accident that damaged the dish occurred much earlier, in mid-September. DSS-14, as the antenna is known, is a vital part of the Deep Space Network, which uses huge antennas at three sites (Goldstone, Madrid, and Canberra) to stay in touch with satellites and probes from the Moon to the edge of the solar system. The three sites are located roughly 120 degrees apart on the globe, which gives the network full coverage of the sky regardless of the local time.

Continue reading “Hackaday Links: November 16, 2025”

“AI, Make Me A Degree Certificate”

One of the fun things about writing for Hackaday is that it takes you to the places where our community hangs out. I was in a hackerspace in a university town the other evening, busily chasing my end-of-month deadline, as no doubt were my colleagues at the time too. Also there were a couple of others: a member who’s an electronic engineering student at one of the local universities, and one of their friends from the same course. They were working on the hardware side of a group project, a web-connected device which, together with a team of several other students, they were creating from sensor to server to screen.

I have a lot of respect for my friend’s engineering abilities. I won’t name them, but they’ve done a bunch of really accomplished projects, and some of them have even been featured here by my colleagues. They are already a very competent engineer indeed, and when in time they receive the bit of paper to prove it, they will go far. The other student was immediately apparent as being cut from the same cloth; as people say in hackerspaces, “one of us”.

They were making great progress with the hardware and low-level software while they were there, but I was saddened by their lament over their classmates. In particular, they had a real problem with vibe coding: they estimated that only a small percentage of their classmates could code by hand as they did, and the result was a lot of impenetrable code that looked good but often simply didn’t work.

I came away wondering not how AI could be used to generate such poor quality work, but how on earth this could be viewed as acceptable in a university.
Continue reading ““AI, Make Me A Degree Certificate””

Expert Systems: The Dawn Of AI

We’ll be honest. If you had told us a few decades ago that we’d teach computers to do what we want, that it would work some of the time, and that you wouldn’t really be able to explain or predict exactly what it was going to do, we’d have thought you were crazy. Why not just get a person? But the dream of AI goes back to the earliest days of computers, or even further if you count Samuel Butler’s letter from 1863 musing on machines evolving into life, a theme he would revisit in the 1872 book Erewhon.

Of course, early real-life AI was nothing like you wanted. Eliza seemed pretty conversational, but you could quickly confuse the program. Hexapawn learned how to play an extremely simplified version of chess, but you could just as easily teach it to lose.

But the real AI work that looked promising was the field of expert systems. Unlike our current AI friends, expert systems were highly predictable. Of course, like any computer program, they could be wrong, but if they were, you could figure out why.

Experts?

As the name implies, expert systems drew from human experts. In theory, a specialized person known as a “knowledge engineer” would work with a human expert to distill his or her knowledge down to an essential form that the computer could handle.

This could range from the simple to the fiendishly complex, and if you think it was hard to do well, you aren’t wrong. Before getting into details, an example will help you follow how it works.
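To make the idea concrete before the full example, here’s a minimal sketch of the forward-chaining rule engine at the heart of many classic expert systems. Everything here (the rule names, the facts, the toy medical domain) is our own illustration, not drawn from any particular historical system:

```python
# A rule is (name, set of premises, conclusion). These toy rules are
# purely hypothetical, standing in for knowledge distilled from an expert.
RULES = [
    ("r1", {"has_fever", "has_cough"}, "suspect_flu"),
    ("r2", {"suspect_flu", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(initial_facts, rules):
    """Repeatedly fire any rule whose premises are all known facts,
    recording which rule produced each conclusion. That record is what
    made expert systems explainable: you could always ask "why?"."""
    facts = set(initial_facts)
    why = {}  # conclusion -> name of the rule that derived it
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                why[conclusion] = name
                changed = True
    return facts, why

facts, why = forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES)
```

Note the contrast with today’s models: the same inputs always fire the same rules, and when the system is wrong, the `why` record points straight at the rule that misfired.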

Continue reading “Expert Systems: The Dawn Of AI”

Nanochat Lets You Build Your Own Hackable LLM

Few people know LLMs (Large Language Models) as thoroughly as [Andrej Karpathy], and luckily for us all he expresses that in useful open-source projects. His latest is nanochat, which he bills as a way to create “the best ChatGPT $100 can buy”.

What is it, exactly? nanochat is a minimal and hackable software project — encapsulated in a single speedrun.sh script — for creating a simple ChatGPT clone from scratch, including a web interface. The codebase is about 8,000 lines of clean, readable code with minimal dependencies, making every single part of the process accessible for tinkering.

The $100 is the cost of the computational grunt work of creating the model, which takes about four hours on a single NVIDIA 8XH100 GPU node. The result is a 1.9-billion-parameter micro-model, trained on some 38 billion tokens from an open dataset. This model is, as [Andrej] describes in his announcement on X, a “little ChatGPT clone you can sort of talk to, and which can write stories/poems, answer simple questions.” A walk-through of what the whole process looks like makes it as easy as possible to get started.

Unsurprisingly, a mere $100 doesn’t create a meaningful competitor to modern commercial offerings. However, significant improvements can be had by scaling up the process. A $1,000 version (detailed here) is far more coherent and capable, able to solve simple math or coding problems and take multiple-choice tests.

[Andrej Karpathy]’s work lends itself well to modification and experimentation, and we’re sure this tool will be no exception. His past work includes a method of training a GPT-2 LLM using only pure C code, and years ago we saw his work on a character-based Recurrent Neural Network (mis)used to generate baroque music by cleverly representing MIDI events as text.

Your LLM Won’t Stop Lying Any Time Soon

Researchers call it “hallucination”; you might more accurately refer to it as confabulation, hornswoggle, hogwash, or just plain BS. Anyone who has used an LLM has encountered it; some people seem to find it behind every prompt, while others dismiss it as an occasional annoyance, but nobody claims it doesn’t happen. A recent paper by researchers at OpenAI (PDF) tries to drill down a bit deeper into just why that happens, and whether anything can be done.

Spoiler alert: not really. Not unless we completely re-think the way we’re training these models, anyway. The analogy used in the conclusion is to an undergraduate in an exam room. Every right answer is going to get a point, but wrong answers aren’t penalized– so why the heck not guess? You might not pass an exam that way going in blind, but if you have studied (i.e., sucked up the entire internet without permission for training data) then you might get a few extra points. For an LLM’s training, like a student’s final grade, every point scored on the exam is a good point. Continue reading “Your LLM Won’t Stop Lying Any Time Soon”