Reverse-Engineering Human Cognition And Decision Making In A Modern Age

Cognitive processes are not something that we generally pay much attention to until something goes wrong, yet they cover everything from ingesting sensory information, through processing and recalling it, to any decisions made as a result of such internal deliberation.

Within that context there has also long been a struggle between those who feel that it’s fine for humans to rely on available technologies to make tasks like information recall and calculations easier, and those who insist that a human should be perfectly capable of doing such tasks without any assistance. In the Phaedrus, Plato has Socrates argue that writing hurts our ability to memorize, and for the longest time it was deemed inappropriate for students to even consider taking one of those newfangled digital calculators into an exam. Now we have many arguing that using an ‘AI’ is the equivalent of using a calculator.

At the root of this conundrum lies the distinction between that which enhances and that which hampers human cognition. When does one merely offload tasks to a device or object, and when does one harm one’s own cognition?

Continue reading “Reverse-Engineering Human Cognition And Decision Making In A Modern Age”

Skylab Under The Ocean

A crew lives on a station in a hostile environment. Leaving that environment requires oxygen tanks and specialized gear to deal with pressure differentials. A space station? Nah. A base built on the ocean floor. The US Navy was interested in such a base in the 1960s, and bases like this are a staple of science fiction. But today, we see more space stations than underwater bases. Have you ever wondered why?

Diving deep underwater is a tricky business. At a certain depth, the pressure forces gas like nitrogen to dissolve into your body. By itself, this isn’t a problem, but when you ascend, it is a big problem. If the gas all comes out at the same time, you get bubbles, which can cause decompression sickness, commonly called the bends. The exact problems vary, but the bends often cause extreme joint pain, fatigue, or a rash. Sometimes people die.

While you might think of the bends as a deep-sea diver’s problem, it can also happen in airplanes and outer space. Any time you go from high pressure to low pressure quickly, you are subject to decompression sickness. Depending on what you are doing, there are different ways to mitigate the problem. For diving, traditionally, you simply don’t surface too quickly.

You dive, do your work, and then head towards the surface, stopping at preset stops to let the pressure equalize gradually. Physics is a bear, though. The longer you stay at a given depth, the longer you have to decompress.

That means you rapidly reach a point of diminishing returns. Suppose you dive to the ocean floor. You spend an hour working. Then you have to spend, say, eight hours gradually rising to the surface. That makes extended operations at significant depth impractical.

George Bond was thinking about all this and had an interesting idea. It is true that, in general, the longer you stay down, the more gas your body absorbs. But it is also true that, eventually, your tissues saturate, and then you don’t absorb any more.
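
Bond’s insight is easiest to see in the classic Haldanean gas-loading model, in which a tissue’s inert-gas pressure approaches ambient pressure exponentially. Here is a minimal sketch; the 60-minute half-time and the pressures are illustrative only, since real decompression models track many tissue compartments at once:

```python
def tissue_pressure(t_min, p_ambient, p_start=0.79, half_time=60.0):
    """Inert-gas partial pressure (atm) in one tissue after t_min minutes.

    The loading closes half of the remaining gap to ambient pressure
    every half_time minutes (classic Haldanean exponential model).
    """
    return p_start + (p_ambient - p_start) * (1 - 2 ** (-t_min / half_time))

# Nitrogen loading at 4 atm ambient (roughly 30 m of seawater),
# for a hypothetical 60-minute tissue starting at surface equilibrium:
for t in (60, 360, 1440):
    print(f"{t:5d} min: {tissue_pressure(t, p_ambient=0.79 * 4):.2f} atm")
```

After a handful of half-times the loading is effectively at ambient pressure, which is exactly Bond’s point: once a tissue is saturated, the decompression obligation stops growing no matter how much longer you stay down.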

Continue reading “Skylab Under The Ocean”

This Week In Security: Flatpak Fixes, Android Malware, And SCADA Was IOT Before IOT Was Cool

Rowhammer attacks have been around since 2014, and mitigations are in place in most modern systems, but the team at gddr6.fail has found ways to apply the attack to current-generation GPUs.

Rowhammer attacks exploit the electrical characteristics of RAM, manipulating the contents of one region of memory to cause changes in the contents of adjacent memory cells. Bit values are just voltage levels, after all, and if a little charge leaks across from one row to the next, you can potentially pull a bit high by writing repeatedly to its physical neighbors.
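
As a purely illustrative toy model (real DRAM is vastly more complicated, and the leakage numbers here are invented), the mechanism amounts to repeatedly activating “aggressor” rows so that charge drifts into a “victim” row until its cells read back wrong:

```python
import random

random.seed(1)

# Each cell holds a charge in [0, 1]; activating a row nudges the charge
# of every cell in the physically adjacent rows up by a tiny random amount.
rows, cols = 8, 16
charge = [[0.0] * cols for _ in range(rows)]   # all bits start as 0

def activate(row):
    for adj in (row - 1, row + 1):
        if 0 <= adj < rows:
            for c in range(cols):
                charge[adj][c] += random.uniform(0, 2e-4)   # leakage

# "Hammer" rows 2 and 4 over and over; victim row 3 sits between them.
for _ in range(10_000):
    activate(2)
    activate(4)

victim = sum(1 for q in charge[3] if q > 0.5)   # read with a 0.5 threshold
print(victim, "of", cols, "victim-row bits now read as 1")
```

Rows 1 and 5 are also adjacent to a hammered row and drift too, which is part of why real mitigations such as target row refresh watch for unusually frequent activations of any row.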

The attack was first used for privilege escalation, by manipulating the RAM holding user data, and later to read and manipulate any page in RAM by modifying the system page table that maps memory and memory permissions. By 2015, researchers had refined the attack to run in pure JavaScript against browsers, and in 2016 mobile devices were shown to be vulnerable. Mitigations have since been put in place in physical memory design, CPU design, and in software. However, new attack vectors are still discovered regularly, with DDR4 and DDR5 RAM as well as AMD and RISC-V CPUs shown to be vulnerable.

The GDDR6-Fail attack targets the video RAM of modern graphics cards, and is able to trigger similar vulnerabilities in the graphics card itself, culminating in accessing and changing the memory of the PC via the PCI bus and bypassing protections.

For users who fear they are at risk — most likely larger AI customers or shared hosting environments where the code running on the GPU may belong to untrusted users — enabling error-correcting code (ECC) mode in the GPU reduces the amount of available RAM, but adds protection by performing checksums on the memory to detect corruption or bit flipping. For the average home user, your mileage may vary – there are certainly easier ways to execute arbitrary code on your PC – like whatever application is running graphics in the first place!
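
The protection ECC provides can be illustrated with a toy Hamming(7,4) code, the same family of single-error-correcting schemes that memory controllers apply at scale (actual GPU ECC is wider and implemented in hardware; this is only a sketch):

```python
def hamming74_encode(d):
    """Encode 4 data bits into 7 bits with 3 parity (check) bits."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # bit positions 1..7

def hamming74_correct(c):
    """Fix at most one flipped bit in a 7-bit codeword, return the data."""
    syndrome = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            syndrome ^= pos        # XOR of the positions of all set bits
    if syndrome:                   # nonzero syndrome names the bad position
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # data bits live at positions 3,5,6,7

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                        # simulate a Rowhammer-style bit flip
print(hamming74_correct(code) == word)   # True: the flip was corrected
```

The extra parity bits are the reason ECC mode costs some usable RAM: part of the memory is spent on redundancy rather than data.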

Continue reading “This Week In Security: Flatpak Fixes, Android Malware, And SCADA Was IOT Before IOT Was Cool”

TurboQuant: Reducing LLM Memory Usage With Vector Quantization

Large language models (LLMs) aren’t actually giant computer brains. Instead, they are massive vector spaces in which the probabilities of tokens occurring in a specific order are encoded. Billions of parameters, times N bits per parameter, means many billions of bits of storage required for a full model. Since increasing the number of parameters makes the models appear smarter, the size of these models and their associated caches has correspondingly been increasing rapidly.

Vector quantization (VQ) is a method that can compress the vectors calculated during inference to take up less space without significant loss of data. Google’s recently published pre-print paper on TurboQuant covers an LLM-oriented VQ algorithm that’s claimed to provide up to a 6x compression level with no negative impact on inference times.

The tokens aren’t directly encoded in the vector space, but their associated key and value vectors are, which, together with inference producing a single token at a time, creates the need for a key-value (KV) cache, the size of which scales with the size of the model. Compressing the KV cache using VQ thus reduces its size and correspondingly speeds up look-ups, simply because there is less memory to traverse. One catch is that, due to the nature of quantization, VQ inevitably loses some accuracy. The trick is therefore to apply VQ in such a way that this loss doesn’t noticeably affect the model’s output.
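
As a rough illustration of the idea (this is classic product quantization, a textbook VQ variant, not TurboQuant’s actual algorithm, and the sizes are made up), vectors can be split into sub-vectors that are each replaced by a one-byte index into a small learned codebook:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "KV cache": 1024 vectors of dimension 64.
vectors = rng.standard_normal((1024, 64)).astype(np.float32)

# Split each vector into 8 sub-vectors; quantize each sub-vector to a
# one-byte index into its own 256-entry codebook.
n_sub, codebook_size = 8, 256
sub_dim = vectors.shape[1] // n_sub

codes = np.empty((len(vectors), n_sub), dtype=np.uint8)
codebooks = []
for s in range(n_sub):
    sub = vectors[:, s * sub_dim:(s + 1) * sub_dim]
    # Crude codebook: a few k-means iterations seeded from random samples.
    centers = sub[rng.choice(len(sub), codebook_size, replace=False)].copy()
    for _ in range(5):
        assign = ((sub[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for k in range(codebook_size):
            members = sub[assign == k]
            if len(members):
                centers[k] = members.mean(0)
    codes[:, s] = assign.astype(np.uint8)
    codebooks.append(centers)

# "Decompression" is just a table look-up per sub-vector.
recon = np.concatenate([codebooks[s][codes[:, s]] for s in range(n_sub)], axis=1)

orig_bytes = vectors.nbytes   # 1024 * 64 * 4 bytes of float32
code_bytes = codes.nbytes     # 1024 * 8 bytes of indices (codebooks amortize)
print(f"{orig_bytes / code_bytes:.0f}x smaller, "
      f"MSE {((vectors - recon) ** 2).mean():.3f}")
```

The compression-versus-accuracy trade-off is visible directly: a bigger codebook or more sub-vectors lowers the reconstruction error but shrinks the storage savings.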

Other aspects that had to be taken into account by the TurboQuant algorithm were fast computation, to keep up with real-time requirements, along with compatibility with so-called ‘AI accelerator’ hardware.

Continue reading “TurboQuant: Reducing LLM Memory Usage With Vector Quantization”

AI For The Skeptics: Pick Your Reasons To Be Excited

It’s odd being a technology writer in 2026, because around you are many people who will tell you that your craft is outdated. Like the manufacturers of buggy-whips at the turn of the twentieth century, you’re told, the automobile (in the form of large language model AI) is on the market, and your business will soon be an anachronism. Adapt or go extinct, they tell you. It’s an argument I’ve found myself facing a few times over the last year in my wandering existence, and it’s forced me to think about it. Why is everyone excited about AI, and are those reasons valid? What is there to be scared of, and what are the real reasons people should be excited about it?

If We Gotta Take This Seriously, How Can We Do It?

A couple in a horse drawn buggy, circa 1900ish
The future’s looking bright in the buggy-whip department! Public domain.

I’ll start by repeating my tale from a few weeks ago when I asked readers what AI applications would survive when the hype is over. The reaction of a friend with decades of software experience on trying an AI coding helper stuck with me; she referenced her grandfather who had been born in rural America in the closing years of the nineteenth century, and recalled him describing the first time he saw an automobile. I agree with her that this has the potential to be a transformative technology, and while it’s entertaining to make fun of its shortcomings as I did three years ago when the idea of what we now call vibe coding first appeared, it’s already making itself useful in some applications. Simply dismissing it is no longer appropriate, but equally, drinking freely of the Kool-Aid seems like joining yet another hype bandwagon that will inevitably derail. A middle way has to be found. Continue reading “AI For The Skeptics: Pick Your Reasons To Be Excited”

2026 Hackaday Europe: First Round Of Speakers Announced!

Hackaday Europe is the continental version of the Ultimate Hardware Conference, taking place May 16th and 17th, and you need to be there! We’ll continue to announce speakers and workshops over the next couple weeks, because we got so many more great talks than we had anticipated that we’re negotiating for extra time.

This year, we’re moving to a new venue in Lecco, Italy, and it’s sure to be fantastic. Get your tickets now before it’s too late. And stay tuned for another round of talk reveals next week!

Continue reading “2026 Hackaday Europe: First Round Of Speakers Announced!”

CCA Ethernet Cables: Not Up To Scratch, But Are They Dangerous?

If you’ve ever bought a suspiciously cheap Ethernet cable from an online listing, there’s a decent chance you’ve encountered Copper Clad Aluminum. Better known as CCA, it’s exactly what it sounds like—an aluminium conductor with a thin skin of copper deposited on the outside. Externally, cables made with this material look largely like any other, with perhaps the only obvious tell being that they feel somewhat lighter in the hand.

CCA is cheaper than proper copper cabling, and it conducts signals well enough to function in an Ethernet cable. And yet, it’s a prime example of corner-cutting that keeps standards bodies and professional installers up at night. But just how dangerous is this silent scourge, found lurking in so many network cabinets around the world?

Not Up To Scratch

CCA wire is typically made by wrapping an aluminium core with copper strip and then extruding it through a die. Credit: USPTO

Everything you need to know about CCA is in the name—it refers to an aluminium wire with a thin copper cladding, typically applied through a die extrusion process. The reasoning behind this exploits a real physical phenomenon called the skin effect, wherein higher-frequency AC signals tend to travel along the outer surface of a conductor. The idea goes that since most of the current moves through the outer copper skin layer anyway, the less-conductive aluminium core doesn’t unduly impact the wire’s performance. Using copper-clad aluminium wiring is, in theory, desirable because aluminium is much cheaper than copper, which can really add up over long cable runs. Imagine you’re wiring a building with hundreds of miles of Ethernet cabling, each cable with eight conductors—the savings add up pretty quickly.
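
You can put numbers on that reasoning with the standard skin-depth formula, δ = sqrt(ρ / (π f µ)). The sketch below assumes copper’s textbook resistivity and the permeability of free space:

```python
import math

rho_cu = 1.68e-8          # resistivity of copper, ohm·m (textbook value)
mu0 = 4 * math.pi * 1e-7  # magnetic permeability of free space, H/m

def skin_depth(f_hz, rho=rho_cu, mu=mu0):
    """Depth (m) at which current density falls to 1/e of its surface value."""
    return math.sqrt(rho / (math.pi * f_hz * mu))

# Cat5e signalling extends to roughly 100 MHz:
print(round(skin_depth(100e6) * 1e6, 1), "µm")  # ≈ 6.5 µm at 100 MHz
print(round(skin_depth(1e6) * 1e6, 1), "µm")    # ≈ 65.2 µm at 1 MHz
```

At the top of the Ethernet band the current really does crowd into a skin a few microns deep, but at lower frequencies (and at DC, as used for Power over Ethernet) it penetrates well past any thin cladding into the aluminium core — one reason the “copper skin carries everything” argument only goes so far.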

There’s a problem with CCA cabling in these contexts, though. Due to prevailing cabling standards, any cable made with CCA is technically not even a real Ethernet cable at all. The relevant documents are unambiguous.

ANSI/TIA-568.2-D requires conductors in Category-rated cable to be solid or stranded copper. No other materials are acceptable, and thus CCA is explicitly excluded from use in Category cable applications. A cable with CCA conductors cannot legitimately carry a Cat5e, Cat6, or any related designation under any circumstances. Similarly, ISO/IEC 11801 has the same requirement. The U.S. National Electrical Code also states that conductors in communications cables, other than coaxial cable, shall be copper. This isn’t a suggestion or a best practice; it’s the letter of the code. Anything lesser is simply not allowed. Continue reading “CCA Ethernet Cables: Not Up To Scratch, But Are They Dangerous?”