AI For The Skeptics: Pick Your Reasons To Be Excited

It’s odd being a technology writer in 2026, because around you are many people who will tell you that your craft is outdated. Like the manufacturers of buggy-whips at the turn of the twentieth century, you find the automobile (in the form of large language model AI) already on the market, and your business will soon be an anachronism. Adapt or go extinct, they tell you. It’s an argument I’ve found myself facing a few times over the last year in my wandering existence, and it’s forced me to think about it. Why is everyone so excited about AI, and are those reasons valid? What is there to be scared of, and what are the real reasons people should be excited about it?

If We Gotta Take This Seriously, How Can We Do It?

A couple in a horse drawn buggy, circa 1900ish
The future’s looking bright in the buggy-whip department! Public domain.

I’ll start by repeating my tale from a few weeks ago, when I asked readers what AI applications would survive once the hype is over. The reaction of a friend with decades of software experience to trying an AI coding helper stuck with me; she referenced her grandfather, who had been born in rural America in the closing years of the nineteenth century, and recalled him describing the first time he saw an automobile. I agree with her that this has the potential to be a transformative technology, and while it’s entertaining to make fun of its shortcomings as I did three years ago when the idea of what we now call vibe coding first appeared, it’s already making itself useful in some applications. Simply dismissing it is no longer appropriate, but equally, drinking freely of the Kool-Aid seems like joining yet another hype bandwagon that will inevitably derail. A middle way has to be found.

It’s likely many of us will, over the last couple of years, have met a Guy In A Suit who’s got a little too excited about ChatGPT. I think guys like him are motivated by several things: he’s impressed with that LLM because it appears really smart to him; he’s used it to make himself appear smart to other people, so it’s made him feel smarter than the engineer who’s pointing out his flaws; he thinks it’s a magic bullet that can do lots of work for him and either save or make him lots of money; and perhaps most importantly, he’s scared witless of missing out on the Next Big Thing.

Plus ça Change, When It Comes To Hype

It’s easy to take pot-shots at those motivations, even if it won’t make you popular. His feeling smart will last only until the moment he realises that everyone else has the same thing, or perhaps until it leads him astray into a calamitous decision. Meanwhile there’s a good chance the magic bullet will go the way that wholesale outsourcing of software development did twenty years ago, as an over-reliance on something-for-nothing work generates far more work to fix its problems. But while those pot-shots weaken some arguments, they aren’t perhaps the crushing blows one might imagine they are. LLMs have their uses, however annoying that may be if you’re sick to death of low-value slop.

The Gartner hype cycle graph. Jeremykemp, CC BY-SA 3.0.

Perhaps more worthy of examination is the fear of missing out, because that’s a more fundamental motivation. We all want to be among the Cool Kids, and Hackaday readers who like having the latest tech toys before everyone else are not immune to this. And when you have convinced yourself that the alternative to being one of the Cool Kids is being the commercial equivalent of a buggy-whip salesman circa 1920, it assumes an extra urgency. It’s time to look at a perennial favourite, the Gartner Hype Cycle, for inspiration. Just where on a Gartner Hype Cycle curve do you have to be to miss out?

On the left of the graph is the steep slope towards the Peak of Inflated Expectations. This is the part we most associate with tech bubbles; as an example we might point to the dotcom boom during its most intensive period in 1997 or 1998. If you pick the moment of the peak, or indeed the downward slope towards the Trough of Disillusionment, to jump in, then it’s obvious you have missed out. But how far back down the upward slope do you have to be to have not missed out? I’d contend that it’s much earlier. To use our dotcom boom analogy: if you weren’t in the game by 1996, perhaps you were already too late. Transposing to the AI boom of today, has our Guy In A Suit already missed the boat without realising it?

They’re Looking At The Wrong Part Of The Graph

The pets.com sockpuppet
We had forgotten the pets.com mascot from the peak of the dotcom era. Jacob Bøtter, CC BY 2.0.

It pains me when I see people newly excited by AI in 2026 for the reasons listed above. To them they’re valid, but having lived and worked through so many other booms and subsequent crashes driven by similar ideas about those technologies, I know how the next year or so will go. I think there are many other valid reasons to be excited here, but they lie elsewhere on the Gartner graph. Back to the dotcom boom: the whole thing was driven by sometimes outright crazy ideas surrounding e-commerce, yet it would be social media a decade later that would make many of the huge players we have today. Could someone have made Facebook in 1996? Possibly, but if anyone thought of it at that point, it seems they didn’t do it. If Guy In A Suit is looking for something to be excited about, he should be polishing his crystal ball and looking ahead to the right-hand side of the Gartner graph in a decade’s time, not running with the herd.

Returning to my first paragraph and whether a writer will inevitably join the buggy-whip salesmen, I remain rather optimistic that they won’t. Hackaday is meat-based for good reason, but more generally I’m watching the consumer develop a hair-trigger response to slop. I’m certain that there will be a space for machine-generated content in the future whether we like it or not, but I’m equally sure that in my line at least, a human input will retain some value.

Having considered Guy In A Suit and then myself, perhaps it’s time to talk about you, the Hackaday reader. We probably have more AI skeptics among us than can be found in the general public, and I consider myself in part among them, but for all that skepticism I think we should channel it into seeking out the interesting things rather than turning our backs on the technology. I’ve mentioned the AI-based coding helpers as an example where our community has found some benefit, and as I’ve mentioned before, I think that the ability to run a useful LLM locally on commodity hardware delivers huge potential over a cloud data-slurper. If we don’t believe in it, at least we should be like Fox Mulder, and want to believe.

Where are you on that continuum?

Are We Surrendering Our Thinking To Machines?

“Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” — so said [Frank Herbert] in his magnum opus, Dune, or rather in the OC Bible that made up part of the book’s rich worldbuilding. A recent study demonstrating “cognitive surrender” in large language model (LLM) users, as reported in Ars Technica, is going to add more fuel to that Butlerian fire.

Cognitive surrender is, in short, exactly what [Herbert] was warning of: giving over your thinking to machines. In the study, people were asked a series of questions, and — except for the necessary “brain-only” control group — given access to a rigged LLM to help them answer. It was rigged in that it would give wrong answers 50% of the time, which, while higher than most LLMs, is only a difference in degree, not in kind. Hallucination is unavoidable; here it was just made controllably frequent for the sake of the study.

The hallucinations in the study were errors that the participants should have been able to see through, if they’d thought about the answers. Eighty percent of the time, they did not. That is to say: presented with an obviously wrong answer from the machine, only in 20% of cases did the participants bother to question it. The remainder were experiencing what the researchers dubbed “cognitive surrender”: they turned their thinking over to the machines. There’s a lot more meat to this than we can summarize here, of course, but the whole paper is available free for your perusal.

Giving over thinking to machines is nothing new, of course; it’s probably been a couple decades since the first person drove into a lake on faulty GPS directions, for example. One might even argue that since LLMs are correct much more than 50% of the time, it is statistically wise to listen to them. In that case, however, one might be encouraged to read Dune.

Thanks to [Monika] for the tip!

Espressif’s New ESP32-S31: Dual-Core RISC-V With WiFi 6 And GBit Ethernet

In a move that’s no doubt going to upset and confuse many, Espressif has released its newest microcontroller — the ESP32-S31. The confusing part here is that the ESP32-S series was always the one based on Tensilica Xtensa LX7 cores, while the ESP32-C series was the one using RISC-V cores.

That said, if one looks at it as a beefier -S3 MCU, it does have some appealing upgrades. The most obvious improvements are the addition of WiFi 6, as well as Bluetooth Classic and LE 5.4, including LE Audio. There is also Thread and Zigbee support for those who are into such things.

The Ethernet MAC got a bump from the 100 Mbit RMII MAC in previous MCUs and is now gigabit-rated, while the number of GPIOs is significantly higher, at 60 instead of the 45 on the -S3. On the RAM side, things are mostly the same, except for DDR PSRAM support, with octal SPI offering up to 250 MHz compared to 80 MHz on the -S3.

On the CPU side, the up-to-320 MHz RISC-V cores are likely to be about as powerful as the 240 MHz LX7 cores in the -S3, based on the ESP32-C series performance in terms of IPC. Overall it does seem like a pretty nice MCU; it’s just confusing that it doesn’t use LX7 cores like the rest of the series it was put into. When this MCU will be available for sale doesn’t seem to be known yet, with only samples available to select customers.

A prototype VLIW computer made by Multiflow

A History On The “Impossible” VLIW Computing

A computer does one thing at a time, even if it feels like it’s doing multiple things at once. In reality, it’s just switching between tasks very quickly. But a VLIW (Very Long Instruction Word) computer is different. Today, [Asianometry] tells us about VLIW computing and its history.

Processors have multiple functional units; for example, you might have separate units for addition, multiplication, and division. But because the processor runs one instruction at a time, these units tend to spend a large amount of time idle. VLIW aims to address this inefficiency by reinventing what an instruction means. Instead of telling the whole processor what to do, a VLIW instruction tells each functional unit what to do at once. Sounds good, right? Well, that was the easy part.

The hard part? Compiling a program for a VLIW computer in a way that can actually make use of all the functional units at once; after all, the efficiency promise is that the higher activity makes up for the larger instruction words to fetch. That is the compiler’s job: VLIW compilers try to reschedule the operations in the program, converting sequential code into parallel groups of operations which are then packed into the titular very long instruction words.
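To make that packing step a little more concrete, here is a minimal sketch in Go of a greedy bundle scheduler. It is purely illustrative: the functional unit names, the operation format, and the one-slot-per-unit machine model are all invented for the example, and real VLIW compilers use far cleverer techniques such as trace scheduling and software pipelining.

```go
package main

import "fmt"

// Op is one primitive operation, the functional unit it needs,
// and the indices of earlier ops whose results it depends on.
type Op struct {
	Name string
	Unit string // e.g. "ALU", "MUL", "MEM": one slot of each per bundle
	Deps []int
}

// schedule greedily packs ops into bundles. An op may issue once all of
// its dependencies have issued in earlier bundles and its unit's slot in
// the current bundle is still free. The input must be a dependency DAG.
func schedule(ops []Op) [][]string {
	done := make([]bool, len(ops)) // issued in an earlier bundle
	remaining := len(ops)
	var bundles [][]string

	for remaining > 0 {
		used := map[string]bool{} // units taken in this bundle
		issued := []int{}         // ops placed in this bundle
		bundle := []string{}

		for i, op := range ops {
			if done[i] || used[op.Unit] {
				continue
			}
			ready := true
			for _, d := range op.Deps {
				if !done[d] { // dependency must come from an earlier bundle
					ready = false
					break
				}
			}
			if ready {
				used[op.Unit] = true
				issued = append(issued, i)
				bundle = append(bundle, op.Name)
			}
		}
		if len(issued) == 0 {
			break // nothing could issue: malformed input, bail out
		}
		for _, i := range issued {
			done[i] = true
			remaining--
		}
		bundles = append(bundles, bundle)
	}
	return bundles
}

func main() {
	// a = b + c; d = e * f; g = a + d  (a made-up three-op program)
	ops := []Op{
		{Name: "a=b+c", Unit: "ALU"},
		{Name: "d=e*f", Unit: "MUL"},
		{Name: "g=a+d", Unit: "ALU", Deps: []int{0, 1}},
	}
	for n, b := range schedule(ops) {
		fmt.Printf("bundle %d: %v\n", n, b)
	}
	// The independent add and multiply share a bundle; the dependent add
	// has to wait for the next one.
}
```

Even this toy version shows where the difficulty lies: the quality of the schedule depends entirely on how much independent work the compiler can find in the program, which is exactly what made general-purpose VLIW so hard in practice.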

[Asianometry] goes into detail about this, the history, and more in the video after the break.

Intel 486 Support Likely To Be Removed In Linux 7.1

Although everyone’s favorite Linux overlord [Linus Torvalds] has been musing on dropping Intel 486 support for a while now, it would seem that the time has finally come. In a Linux patch submitted by [Ingo Molnar], the first concrete step is taken by removing support for the i486 in the build system. With this patch now accepted into the ‘tip’ branch, no i486-compatible image can be built any more once it works its way into the release branches, starting with kernel 7.1.

No mainstream Linux distribution currently supports the 486 CPU, so the impact should be minimal, and there has been plenty of warning. We covered the topic back in 2022 when [Linus] first floated the idea, as well as in 2025 when more mutterings from [Linus] were heard, but no exact date was offered until now.

It remains to be seen whether 2026 is really the year when Linux says farewell to the Intel 486, after doing the same for the Intel 386 back in 2012. We cannot really imagine that there’s a lot of interest in running modern Linux kernels on CPUs that are probably older than the average Hackaday reader, but we could be mistaken.

Meanwhile, we’ve got people modding Windows XP to run on the Intel 486, opening the prospect that modern Windows might make it onto these systems instead of Linux, in the ultimate twist of irony.

Hear Ye, Hear Ye! The Magic Of The Scroll-Like Phone Which Wast Not!

When LG left the smartphone market, quite a number of strange devices were left behind. While some, like the Wing, made it to consumers, others did not. The strangest of these would have to be their rollable phone concept: a device which would expand by unrolling a portion of the screen like a scroll. This never made it to market, but one managed to make its way to [JerryRigEverything’s] workbench, and we are fortunate enough to see the insides of this strange device.

There are a few interesting tidbits to note before even opening the device up. Very clearly this phone was ready to be sold, with a tidy user interface for expanding the display, and even animated wallpapers which expand with it. The display, when rolled onto the back of the device, sits behind a glass cover to keep it protected from debris, and can be used to take selfies with the larger sensors of the rear-facing cameras. You can also see a bit of the track that the screen rolls on, hinting at what lies inside.


TinyGo Boldly Goes Where No Go Ever Did Go Before

When you’re programming microcontrollers, you’re likely to think in C if you’re old-school, Rust if you’re trendy, or Python if you want it done quick and have resources to spare. What about Go? The programming language, not the game. That’s an option, too, with TinyGo now supporting over 100 different dev boards, along with WebAssembly.

We covered TinyGo back in 2019, but they were just getting started at that point, targeting the Arduino and the BBC micro:bit. They’ve since grown that list to include everything from most of Adafruit’s fruitful suite of offerings to ESP32s and even the Nintendo Game Boy Advance. So now you can go program go in Go so you can play go on the go.
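If you’re wondering what Go on a microcontroller actually looks like, the canonical TinyGo blink sketch is about as short as it gets. The machine package maps machine.LED to whichever pin drives the board’s onboard LED, so this minimal example should build for many of the supported boards with something like tinygo flash -target=pico blink.go, though the target name and LED wiring for your particular board may differ.

```go
package main

import (
	"machine"
	"time"
)

func main() {
	// machine.LED is the board's onboard LED on most supported targets.
	led := machine.LED
	led.Configure(machine.PinConfig{Mode: machine.PinOutput})

	// Toggle the LED forever, half a second on, half a second off.
	for {
		led.High()
		time.Sleep(500 * time.Millisecond)
		led.Low()
		time.Sleep(500 * time.Millisecond)
	}
}
```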

The biggest drawback, and it’s going to be an absolute deal-killer for a lot of applications, is the lack of wireless connectivity support. Claiming to support the ESP8266 while not allowing one to use WiFi is a bit of a stretch, considering that’s the whole raison d’être of that particular chip, but it’s usable as a regular microcontroller at least.

They’ve now implemented garbage collection, a selling point for those who like Go, but admit it’s slower in TinyGo compared to its larger cousin and won’t work on AVR chips or in WebAssembly. It’s still not complete Go, however, so just as we reported in 2019, you won’t be able to compile all the standard library packages you might be used to. There are more of them than there were, so progress has been made!

Still, knowing how people get about programming languages, this will please the Go fanatics out there. Others might prefer to go FORTH and program their Arduinos, or to wear out their parentheses keys with LISP. The more the merrier, we say!