Hackaday Links: February 23, 2025


Ho-hum — another week, another high-profile bricking. In a move anyone could see coming, Humane has announced that their pricey AI Pin widgets will cease to work in any meaningful way as of noon on February 28. The company made a splash when it launched its wearable assistant in April of 2024, and from an engineering point of view, it was pretty cool. Meant to be worn on one’s shirt, it had a little bit of a Star Trek: The Next Generation comm badge vibe as the primary UI was accessed through tapping the front of the thing. It also had a display that projected information onto your hand, plus the usual array of sensors and cameras which no doubt provided a rich stream of user data. Somehow, though, Humane wasn’t able to make the numbers work out, and as a result they’ll be shutting down their servers at the end of the month, with refunds offered only to users who bought their AI Pins in the last 90 days.

How exactly Humane thought that offering what amounts to a civilian badge cam was going to be a viable business model is a bit of a mystery. Were people really going to be OK walking into a meeting where Pin-wearing coworkers could be recording everything they say? Wouldn’t wearing a device like that in a gym locker room cause a stir? Sure, the AI Pin was a little less obtrusive than something like the Google Glass — not to mention a lot less goofy — but all wearables seem to suffer the same basic problem: they’re too obvious. About the only one that comes close to passing that hurdle is the Meta Ray-Ban smart glasses, and those still have the problem of obvious cameras built into their chunky frames. Plus, who can wear Ray-Bans all the time without looking like a tool?

Good news for everyone worried about a world being run by LLMs and chatbots. It looks like all we’re going to have to do is wait them out, if a study finding that older LLMs are already showing signs of cognitive decline pans out. To come to that conclusion, researchers gave the Montreal Cognitive Assessment test to a bunch of different chatbots. The test uses simple questions to screen for early signs of impairment; some of the questions seem like something from a field sobriety test, and for good reason. Alas for the tested chatbots, the general trend was that the older the model, the worse it did on the test. The obvious objection here is that the researchers aren’t comparing each model’s current score with results from when the model was “younger,” but that’s pretty much what happens when the test is used for humans.

You’ve got to feel sorry for astronomers. Between light pollution cluttering up the sky and an explosion in radio frequency interference, astronomers face observational challenges across the spectrum. These challenges are why astronomers prize areas like dark sky reserves, where light pollution is kept to a minimum, and radio quiet zones, which do the same for the RF part of the spectrum. Still, it’s a busy world, and noise always seems to find a way to leak into these zones. A case in point is the recent discovery that TV signals that had been plaguing the Murchison Widefield Array in Western Australia for five years were actually bouncing off airplanes. The MWA is in a designated radio quiet zone, so astronomers were perplexed until someone had the bright idea to use the array’s beam-forming capabilities to trace the signal back to its source. The astronomers plan to use the method to identify and exclude other RFI getting into their quiet zone, both from terrestrial sources and from the many satellites whizzing overhead.

And finally, most of us are more comfortable posting our successes online than our failures, and for obvious reasons. Everyone loves a winner, after all, and admitting our failures publicly can be difficult. But Daniel Dakhno finds value in his failures, to the point where he’s devoted a special section of his project portfolio to them. They’re right there at the bottom of the page for anyone to see, meticulously organized by project type and failure mode. Each failure assessment includes an estimate of the time it took; importantly, Daniel characterizes this as “time invested” rather than “time wasted.” When you fall down, you should pick something up, right?

2 thoughts on “Hackaday Links: February 23, 2025”

  1. The LLM cognitive decline stuff just seems like pseudo-science clickbait. I highly doubt there is any real data that would support LLM degradation over time… Wish HaD would pick it apart rather than just passing it on. Using the excuse that the test is used incorrectly in humans to justify promoting bad science deserves a big eye roll.

  2. The older the model, the worse it performed? So you mean they are getting better, since the newer models perform better?

    Now if the newer models were to perform worse, that would be news! I would expect that to happen at some point, if they don’t find a way to stop inbreeding the models.
