AI For The Skeptics: Pick Your Reasons To Be Excited

It’s odd being a technology writer in 2026, because around you are many people who will tell you that your craft is outdated. Like the manufacturers of buggy-whips at the turn of the twentieth century, you find the automobile (in the form of large language model AI) already on the market, and your business looking like an anachronism. Adapt or go extinct, they tell you. It’s an argument I’ve found myself facing a few times over the last year in my wandering existence, and it’s forced me to think about it. What are the reasons everyone is excited about AI, and are those reasons valid? What is there to be scared of? And what are the real reasons people should be excited about it?

If We Gotta Take This Seriously, How Can We Do It?

A couple in a horse-drawn buggy, circa 1900ish
The future’s looking bright in the buggy-whip department! Public domain.

I’ll start by repeating my tale from a few weeks ago when I asked readers what AI applications would survive when the hype is over. The reaction of a friend with decades of software experience on trying an AI coding helper stuck with me; she referenced her grandfather who had been born in rural America in the closing years of the nineteenth century, and recalled him describing the first time he saw an automobile. I agree with her that this has the potential to be a transformative technology, and while it’s entertaining to make fun of its shortcomings as I did three years ago when the idea of what we now call vibe coding first appeared, it’s already making itself useful in some applications. Simply dismissing it is no longer appropriate, but equally, drinking freely of the Kool-Aid seems like joining yet another hype bandwagon that will inevitably derail. A middle way has to be found. Continue reading “AI For The Skeptics: Pick Your Reasons To Be Excited”

Are We Surrendering Our Thinking To Machines?

“Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” — so said [Frank Herbert] in his magnum opus, Dune, or rather in the OC Bible that made up part of the book’s rich worldbuilding. A recent study demonstrating “cognitive surrender” in large language model (LLM) users, as reported in Ars Technica, is going to add more fuel to that Butlerian fire.

Cognitive surrender is, in short, exactly what [Herbert] was warning of: giving over your thinking to machines. In the study, people were asked a series of questions, and — except for the necessary “brain-only” control group — given access to a rigged LLM to help them answer. It was rigged in that it would give wrong answers 50% of the time, which, while higher than most LLMs, is only a difference in degree, not in kind. Hallucination is unavoidable; here it was just made controllably frequent for the sake of the study.
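
For a sense of how trivial such a rig is to build, here’s a minimal sketch in the spirit of the experiment (our own illustration, not the researchers’ actual code); the toy question bank and its wrong answers are invented for demonstration:

```python
import random

# Toy question bank pairing a correct answer with a plausible wrong one.
# In the real study the answers came from an actual LLM; these are invented.
QUESTIONS = {
    "What is 7 x 8?": ("56", "54"),
    "Which planet is closest to the Sun?": ("Mercury", "Venus"),
}

def rigged_llm(question: str, error_rate: float = 0.5) -> str:
    """Answer correctly, except `error_rate` of the time."""
    right, wrong = QUESTIONS[question]
    return wrong if random.random() < error_rate else right

for q in QUESTIONS:
    print(q, "->", rigged_llm(q))
```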

The hallucinations in the study were errors that the participants should have been able to see through, if they’d thought about the answers. Eighty percent of the time, they did not. That is to say: presented with an obviously wrong answer from the machine, only in 20% of cases did the participants bother to question it. The remainder were experiencing what the researchers dubbed “cognitive surrender”: they turned their thinking over to the machines. There’s a lot more meat to this than we can summarize here, of course, but the whole paper is available free for your perusal.

Giving over thinking to machines is nothing new, of course; it’s probably been a couple decades since the first person drove into a lake on faulty GPS directions, for example. One might even argue that since LLMs are correct much more than 50% of the time, it is statistically wise to listen to them. In that case, however, one might be encouraged to read Dune.

Thanks to [Monika] for the tip!

The Heat Island Effect Is Warming Up The AI Data Center Controversy

There’s been a lot of virtual ink spilled in environmental circles about the cooling water requirements of data centers, but less consideration of what happens to all the heat coming out of these buildings. Naturally, it’s going to warm the surrounding environment, but by how much? Around 2 C (3.6 F) on average, and potentially much more than that, according to a recent study on the data center heat island effect.

It’s common sense, of course: heat removed from the data center doesn’t go away. That heat might go into a body of water if one is available, but otherwise it’s out into the atmosphere to warm up everybody else’s day. In some places — like a Canadian winter — that might not be so bad. In others, where climate change and urban heat islands are cranking up the summertime temperatures, it very much could be. Especially if you’re in the worst-case scenario micro-climate described by the paper, which saw a predicted increase of 9.1 C (16 F).
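
The arithmetic behind that common sense is stark: by conservation of energy, essentially every watt a data center draws leaves the building again as heat. A back-of-envelope sketch, using an assumed 100 MW facility (an illustrative figure on our part, not one from the study):

```python
# Back-of-envelope: all electrical input power ultimately becomes heat.
# The 100 MW draw is an assumed, illustrative figure.
facility_draw_mw = 100
space_heater_kw = 1.0  # a typical small domestic space heater

heat_out_kw = facility_draw_mw * 1000  # energy in equals heat out
print(f"{facility_draw_mw} MW in -> {heat_out_kw:,.0f} kW of heat out,")
print(f"about {heat_out_kw / space_heater_kw:,.0f} space heaters running flat out.")
```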

Now, these results are theoretical and need to be ground-truthed, but anyone who has huddled next to the air-exchange unit of a large building for warmth knows there’s something to them. Unfortunately there don’t seem to be before-and-after measurements available for existing data centers — AI or otherwise — to show exactly what their heat output is doing in the real world, but the urban heat island effect from all the dark asphalt in our cities is well known. Cooling paint and green roofs can help with that, but they won’t do much for the megawatts being pumped out to keep your cousin’s AI girlfriend online.

Some would argue that all this heat wouldn’t be a problem if we could launch the data centers outside the environment — just have a care the front doesn’t fall off.


Image of data center cooling by Анна from Pixabay

Despite Penalties, Lawyers Can’t Stop Using AI

Despite a few high-profile cases in recent years of lawyers getting caught using LLM-generated documents and facing disciplinary action for it, it would seem that many others are not deterred from following them off this particular cliff, per reporting from NPR.

We reported back in the innocent days of 2023 about the amusing case of Robert Mata v. Avianca, Inc. In this case, the plaintiff’s lawyer decided to have ChatGPT ‘assist’ with the legal filing, which ended up being filled with citations to non-existent cases, despite the chatbot’s assurance that these were all real. Now it would seem that this blind trust in cases cited by LLM chatbots is becoming the rule, rather than the exception.

Last year a record number of lawyers fell into the same trap, with many getting fined thousands of dollars for confabulated case citations. According to a researcher at the business school HEC Paris, who is keeping a worldwide tally, the count so far is 1,200 cases, of which 800 originate from US courts.

Unsurprisingly, penalties are also increasing in severity, with monetary penalties passing the $100,000 mark and some courts demanding that any use of ‘AI’ be declared up-front. Whether the popularity of LLM chatbots among US lawyers is simply down to the massive caseload that digging through cases in Common Law legal systems entails has yet to be addressed, but it is undeniable that undesirable shortcuts are being taken.

Remember that it’s easy to point and laugh, but the next case could involve the lawyer handling your delicate situation.

DC In The Data Center For A More Efficient Future

If you own a computer that’s not mobile, it’s almost certain that it will receive its power in some form from a mains wall outlet. Whether it’s 230 V at 50 Hz or 120 V at 60 Hz, where once there might have been a transformer and a rectifier there’s now a switch-mode power supply that delivers low voltage DC to your machine. It’s a system that’s efficient and works well on the desktop, but in the data center even its efficiency is starting to be insufficient. IEEE Spectrum has a look at newer data centers that are moving towards DC power distribution, raising some interesting points which bear closer examination.

A traditional data center has many computers which in power terms aren’t much different from your machine at home. They get their mains power at distribution voltage — probably 33 kV AC where this is being written — bring it down to a more normal mains voltage with a transformer just like the one on your street, and then feed a battery-backed uninterruptible power supply (UPS) that converts from AC to DC, and then back again to AC. The AC then snakes around the data center from rack to rack, and inside each computer there’s another rectifier and switch-mode power supply to make the low voltage DC the computer uses.

The increasing demands of data centers full of GPUs for AI processing have raised power consumption to the extent that all these conversion steps now cost a significant amount of wasted power. The new idea is to convert once to DC (at a rather scary 800 volts) and distribute it directly to the cabinet, where the computer uses a more efficient switch-mode converter to reach the voltages it needs.
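
To see why fewer conversions matter, here’s a back-of-envelope sketch comparing the two chains; the per-stage efficiencies are illustrative guesses on our part, not figures from the IEEE Spectrum piece:

```python
from math import prod

# Illustrative per-stage efficiencies: assumptions, not measured figures.
ac_chain = {
    "distribution transformer": 0.98,
    "UPS rectifier (AC to DC)": 0.96,
    "UPS inverter (DC to AC)": 0.96,
    "server PSU (AC to DC)": 0.94,
}
dc_chain = {
    "single conversion to 800 V DC": 0.98,
    "in-rack DC-DC converter": 0.97,
}

for name, chain in (("AC", ac_chain), ("DC", dc_chain)):
    eff = prod(chain.values())
    print(f"{name} chain: {eff:.1%} overall, {1 - eff:.1%} lost as heat")
```

Even with generous guesses, each stage’s few percent of loss compounds, and at data center scale the difference adds up to megawatts.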

It’s an attractive idea not just for the data center. We’ve mused on similar ideas in the past and even celebrated a solution at the local level. But given the potential ecological impact of these data centers, it’s a little hard to get excited about the idea in this context. The fourth of our rules for the responsible use of a new technology comes into play. Fortunately we think that both an inevitable cooling of the current AI hype and a Moore’s Law driven move towards locally-run LLMs may go some way towards solving that problem.


header image: Christopher Bowns, CC BY-SA 2.0.

Ask Hackaday: Using CoPilot? Are You Entertained?

There’s a great debate these days about what the current crop of AI chatbots should and shouldn’t do for you. We aren’t wise enough to know the answer, but we were interested in hearing what is, apparently, Microsoft’s take on it. Looking at their terms of service for Copilot, we read (bold in the original):

Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.

While that’s good advice, we are pretty sure we’ve seen people use LLMs, including Copilot, for decidedly non-entertaining tasks. But, at least for now, if you are using Copilot for non-entertainment purposes, you are violating the terms of service.

Continue reading “Ask Hackaday: Using CoPilot? Are You Entertained?”

Repurposing Old AMD APUs For AI Work

The BC250 is what AMD calls an APU, or Accelerated Processing Unit. It combines a GPU and CPU into a single unit, and was originally built to serve as the heart of certain Samsung rack mount servers. If you know where to find cheap surplus units of the BC250, you can put them to good use for AI work, as [akandr] demonstrates.

The first thing you’ll have to figure out is how to take an individual BC250 APU and get it up and running. It’s effectively a full system-on-chip, combining a Zen 2 CPU with a Cyan Skillfish RDNA 1.5 GPU. However, it was originally intended to run inside a rackmount server unit rather than as a standalone machine. To get it going, you’ll need to hook it up with power and some kind of cooling solution.

From there, it’s a matter of software. [akandr] explains how to get AI workflows running on the BC250 using Ollama and Vulkan, while noting useful hacks to improve performance like disabling the GUI and tweaking the CPU governor. The hardware can be used with a wide range of different models depending on what you’re trying to achieve; it just takes some careful management of the APU’s resources to get the most out of it. Thankfully, that’s all in the guide on GitHub.
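
Once Ollama is serving on the board, querying it is a single HTTP request against its local REST API. A minimal sketch, assuming the default port and a model you’ve already pulled (the model tag below is just a placeholder):

```python
import json
import urllib.request

# Ollama serves a REST API on localhost:11434 by default.
# The model name is a placeholder; use whatever you've `ollama pull`ed.
payload = {
    "model": "llama3.2:3b",
    "prompt": "Explain what an APU is in one sentence.",
    "stream": False,  # return one JSON blob instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```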

We’ve already seen these AMD APUs repurposed for gaming use. Unfortunately the word is already out about their capabilities, so prices have risen significantly in response to demand. Still, if you manage to score a BC250 and do something cool with it yourself, be sure to let us know on the tipsline!