Are We Surrendering Our Thinking To Machines?

“Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” — so said [Frank Herbert] in his magnum opus, Dune, or rather in the Orange Catholic Bible that makes up part of the book’s rich worldbuilding. A recent study demonstrating “cognitive surrender” in large language model (LLM) users, as reported in Ars Technica, is going to add more fuel to that Butlerian fire.

Cognitive surrender is, in short, exactly what [Herbert] was warning of: giving over your thinking to machines. In the study, people were asked a series of questions and — except for the necessary “brain-only” control group — given access to a rigged LLM to help them answer. It was rigged in that it would give wrong answers 50% of the time, which, while higher than most LLMs’ error rate, is a difference in degree, not in kind. Hallucination is unavoidable; here it was just made controllably frequent for the sake of the study.

The hallucinations in the study were errors that the participants should have been able to see through, if they’d thought about the answers. Eighty percent of the time, they did not. That is to say: presented with an obviously wrong answer from the machine, only in 20% of cases did the participants bother to question it. The remainder were experiencing what the researchers dubbed “cognitive surrender”: they turned their thinking over to the machines. There’s a lot more meat to this than we can summarize here, of course, but the whole paper is available free for your perusal.

Giving over thinking to machines is nothing new, of course; it’s probably been a couple of decades since the first person drove into a lake on faulty GPS directions, for example. One might even argue that since LLMs are correct much more than 50% of the time, it is statistically wise to listen to them. In that case, however, one might be encouraged to read Dune.

Thanks to [Monika] for the tip!


Hackaday Links: May 25, 2025

Have you heard that author Andy Weir has a new book coming out? Very exciting, we know, and according to a syndicated reading list for Summer 2025, it’s called The Last Algorithm, and it’s a tale of a programmer who discovers a dark and dangerous secret about artificial intelligence. If that seems a little out of sync with his usual space-hacking fare such as The Martian and Project Hail Mary, that’s because the book doesn’t exist, and neither do most of the other books on the list.

The list was published in a 64-page supplement that ran in major US newspapers like the Chicago Sun-Times and the Philadelphia Inquirer. The feature listed fifteen must-read books, only five of which exist, and it’s no surprise that AI is behind the muck-up. Writer Marco Buscaglia took the blame, saying that he used an LLM to produce the list without checking the results. Nobody else in the editorial chain appears to have reviewed the list either, resulting in the hallucination getting published. Readers are understandably upset about this, but for our part, we’re just bummed that Andy doesn’t have a new book coming out.


Cuban Embassy Attacks And The Microwave Auditory Effect

If you’ve been paying attention to the news, you may have seen a series of articles coming out about US staffers in Cuba. It seems that 21 staffers have suffered a bizarre array of injuries ranging from hearing loss to dizziness to concussion-like traumatic brain injuries. Some staffers have reported hearing incapacitating sounds in the embassy and in their hotel rooms. The reports range from clicking to grinding, humming, or even blaring sounds. One staffer described being awoken to a horrifically loud sound, only to have it disappear as soon as he moved away from his bed. When he got back into bed, the mysterious sound came back.

Cuba has denied any wrongdoing. However, the US has already started to take action, expelling two Cuban diplomats from the US in May. The question, though, is what exactly could have caused these injuries. The press has gone wild with theories: sonic weaponry, hidden bugs and electronic devices, poisons, you name it. Even Julian Assange has weighed in, stating “The diversity of symptoms suggests that this is a pathogen combined with paranoia in an isolated diplomatic corps.”

So what’s going on? Bizarre accidents? Cloak and dagger gone awry? Mass hysteria among the US State Department, or something else entirely?

Hallucinating Machines Generate Tiny Video Clips

Hallucination is the erroneous perception of something that’s actually absent — or in other words: a possible interpretation of training data. Researchers from MIT and UMBC have developed and trained a generative machine-learning model that learns to generate tiny videos at random. The hallucination-like, 64×64-pixel clips are somewhat plausible, but also a bit spooky.

The machine-learning model behind these artificial clips is capable of learning from unlabeled “in-the-wild” training videos and relies mostly on the temporal coherence of subsequent frames as well as the presence of a static background. It learns to disentangle foreground objects from the background and extracts the overall dynamics from the scenes. The trained model can then be used to generate new clips at random (as shown above), or from a static input image (as shown in pairs below).
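The actual model is an adversarial two-stream network implemented in Torch7; as a toy illustration of just the foreground/background disentangling idea it relies on, here is a minimal NumPy sketch. The function name, array shapes, and random data are all invented for this example — the model composites each output frame from a moving foreground stream, a single static background, and a per-pixel mask deciding which one shows through:

```python
import numpy as np

def compose_video(foreground, background, mask):
    """Blend a moving foreground over a static background, frame by frame.

    foreground: (T, H, W, 3) array of foreground frames
    background: (H, W, 3) single static background image
    mask:       (T, H, W, 1) per-pixel blend weights in [0, 1]
    """
    # background broadcasts across the T (time) axis; mask picks fg vs. bg
    return mask * foreground + (1.0 - mask) * background

# Toy example matching the paper's clip size: 32 frames of 64x64 video
T, H, W = 32, 64, 64
rng = np.random.default_rng(0)
fg = rng.random((T, H, W, 3))          # random "foreground" frames
bg = rng.random((H, W, 3))             # one static "background"
m = np.zeros((T, H, W, 1))
m[:, 24:40, 24:40, :] = 1.0            # a square "object" fully opaque

video = compose_video(fg, bg, m)
print(video.shape)  # (32, 64, 64, 3)
```

In the real network, the foreground, background, and mask are all produced by learned generators from a shared noise vector; the compositing step above is what forces the model to separate scene dynamics from the static backdrop.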

Currently, the team limits the clips to a resolution of 64×64 pixels and 32 frames in duration in order to decrease the amount of required training data, which is still at 7 TB. Despite obvious deficiencies in terms of photorealism, the little clips have been judged “more realistic” than real clips by about 20 percent of the participants in a psychophysical study the team conducted. The code for the project (Torch7/LuaJIT) can already be found on GitHub, together with a pre-trained model. The project will also be shown in December at the 2016 NIPS conference.

LED Goggles Make You Trip Out?

Who knows if this works, and should you really want to induce hallucinations by flashing colors in front of your eyes? But we do love the zaniness of the project. [Everett’s] homemade hallucination goggles come in two flavors: the small swimming-goggle-type model and the heavy-duty trip visor made from welder’s goggles. Each brings together the same components: half a ping-pong ball for each eye to diffuse the light from an RGB LED. The system is controlled by an Arduino with some buttons and 7-segment displays for a user interface. Put this together with some homemade EL wire and you’re ready for Burning Man.
[Thanks Evan]