The Freedom To Fail

When you think of NASA, you think of high-stakes, high-cost, high-pressure engineering, and maybe the accompanying red tape. In comparison, the hobby hacker has a tremendous latitude to mess up, dream big, and generally follow one’s bliss. Hopefully you’ll take some notes. And as always with polar extremes, the really fertile ground lies in the middle.

[Dan Maloney] and I were thinking about this yesterday while discussing the 50th flight of Ingenuity, the Mars helicopter. Ingenuity is a tech demo, carrying nothing mission critical, but just trying to figure out if you could fly around on Mars. It was planned to run for five flights, and now it’s done 50.

The last big tech demo was the Sojourner Rover. It was a small robotic vehicle the size of a microwave oven that they hoped would last seven days. It went for 85, and it gave NASA the first taste of success it needed to follow on with 20 years of Martian rovers.

Both of these projects were cheap, by NASA standards, and because they were technical demonstrators, the development teams were allowed significantly more design freedom, again by NASA standards.

None of this compares to the “heck I’ll just hot-air an op-amp off an old project” of weekend hacking around here, but I absolutely believe that a part of the tremendous success of both Sojourner and Ingenuity were due to the risks that the development teams were allowed to take. Creativity and successful design thrives on the right blend of constraint and freedom.

Will Ingenuity give birth to a long series of flying planetary rovers as Sojourner did for her rocker-bogie based descendants? Too early to tell. But I certainly hope that someone within NASA is noticing the high impact that these technical demonstrator projects have, and also noting why. The addition of a little bit of hacker spirit to match NASA’s professionalism probably goes a long way.

Sufficiently Advanced Tech: Has Bugs

Arthur C. Clarke said that “Any sufficiently advanced technology is indistinguishable from magic”. He was a sci-fi writer, though, and not a security guy. Maybe it should read “Any sufficiently advanced tech has security flaws”. Because this is the story of breaking into a car through its headlight.

In a marvelous writeup, half-story, half CAN-bus masterclass, [Ken Tindell] details how car thieves pried off the front headlight of a friend’s Toyota, and managed to steal it just by saying the right things into the network. Since the headlight is on the same network as the door locks, pulling out the bulb and sending the “open the door” message repeatedly, along with a lot of other commands to essentially jam some other security features, can pull it off.

Half of you are asking what this has to do with Arthur C. Clarke, and the other half are probably asking what a lightbulb is doing on a car’s data network. In principle, it’s a great idea to have all of the electronics in a car be smart electronics, reporting their status back to the central computer. It’s how we know when our lights are out, or what our tire pressure is, from the driver’s seat. But adding features adds attack surfaces. What seems like magic to the driver looks like a gold mine to the attacker, or to car thieves.

With automotive CAN, security was kind of a second thought, and I don’t mean this uncharitably. The first goal was making sure that the system worked across all auto manufacturers and parts suppliers, and that’s tricky enough. Security would have to come second. And more modern cars have their CAN networks encrypted now, adding layers of magic on top of magic.

But I’m nearly certain that, when deciding to replace the simple current-sensing test of whether a bulb was burnt out, the engineers probably didn’t have the full cost of moving the bulb onto the CAN bus in mind. They certainly had dreams of simplifying the wiring harness, and of bringing the lowly headlight into the modern age, but I’d bet they had no idea that folks were going to use the headlight port to open the doors. Sufficiently advanced tech.

Design For People

We all make things. Sometimes we make things for ourselves, sometimes for the broader hacker community, and sometimes we make things for normal folks. It’s this last category where it gets tricky, and critical. I was reminded of all of this watching Chris Combs’ excellent Supercon 2022 talk on how to make it as an artist.

“But I’m not making art!” I hear you say? About half of Chris’ talk is about how he makes his tech art worry-free for galleries to install, and that essentially means making it normie-proof – making sure it runs as soon as the power is turned on, day in, day out, without hacker intervention, because venues hate having you on site to debug. As Tom joked in the podcast, it’s a little bit like designing for space: it’s a strange environment, you can’t send out repair teams, and it has to have failsafes that make sure it works.

What is striking about the talk is that there is a common core of practices that make our hardware projects more reliable, whatever their destination. Things like having a watchdog that’ll reboot if it goes wrong, designing for modularity whenever possible, building in hanging or mounting options if that’s relevant, and writing up at least a simple, single-page info sheet with everything that you need to know to keep it running. Of course with art, aesthetics matters more than usual. Or does it?

So suppose you’re making a thing for a normal person, that must run without your babysitting. What is the common core of precautionary design steps you take?

Hackaday Does Berlin

If you’re wondering why there was no newsletter last weekend, it was because we had our hands full with Hackaday Berlin. But boy, was it worth it! Besides being the launch party for the tenth annual Hackaday Prize, it was the first Hackaday gathering in Europe for four years, and it was awesome to see a bunch of familiar faces and meet many more new ones.

In a world that’s so interconnected, you might think that social media can take care of it all for you. And to some extent that’s true! If I could count the number of times I heard “I follow you on Twitter/Mastodon” over the course of the event!

But then there were tons of other meetings. People who are all interested in building and designing analog synthesizers, even some who live in the same urban megalopolis, meeting each other and talking about modules and designs. People who love flip dots. On the spot collaborations of people writing video drivers and people making huge LED walls. And somehow there’s still room for this to happen, even though the algorithms should have probably hooked these folks up by now.

From the perspective of hosting the conference, I get the most satisfaction from seeing these chance meetings and the general atmosphere of people learning not only new things, but new people. This cross-fertilization of friendships and project collaborations is what keeps our community vital, and especially coming out of the Pandemic Years, it’s absolutely necessary. I came away with a long list of new plans, and I’m sure everyone else did too. And for some reason, social media just isn’t a substitute. Take that, TwitFace!

Study Hacker History, And Update It

Looking through past hacks is a great source of inspiration. This week, we saw [Russ Maschmeyer] re-visiting a classic hack by [Jonny Lee] that made use of a Wiimote’s IR camera to fake 3D, or at least provide a compelling parallax effect that’ll fool your brain, without any expensive custom hardware.

[Lee]’s original demo was stunning, and that alone is reason to revisit it. Using the Wiimote as the webcam was inspired back in 2007, because it meant that there was no hard computer vision work to be done in estimating the viewer’s position – the camera only sees IR LEDs anyway. The tradeoff is that you had to wear two IR LEDs on your head, calibrate it just right, and that only the person with the headset on gets the illusion just right.

This is why re-visiting the past can be fruitful. As [Russ] discovered, computing power is so plentiful these days that you could do face/eye position estimation with a normal webcam easier than you could source an old Wiimote. Indeed, he’s getting the positioning so accurate that he’s worried about to which eye he’s projecting the illusion. Clearly, it’s time for a revamp.

So here’s the formula: find a brilliant old hack, and notice if it was hampered by the state of technology back when it was done. Update this using modern conveniences, and voila! You might just find that you can take the idea further, simply because you have more tools in your toolbox. Nothing wrong with standing on the shoulders of giants.

But beware! Time isn’t sitting still for you either. As soon as you make your killer 3D vision hack, VR goggles will become cheap and ubiquitous. So get it done today, before your hack becomes inspiration for the future.

Computers For Fun

The last couple years have seen an incredible flourishing of the cyberdeck scene, and probably for about as many reasons as there are individual ’deck designs. Some people get really into the prop-making, some into scrapping old tech or reusing a particularly appealing case, and others simply into the customization possibilities. That’s awesome, and they’re all different motivations for making a computer that’s truly your own.

But I really like the motivation and sentiment behind [Andreas Eriksen]’s PotatoP. (Assuming that his real motivation isn’t all the bad potato puns.) This is a small microcomputer that’s built on a commonly available microcontroller, so it’s not a particularly powerful beast – hence the “potato”. But what makes up for that in my mind is that it’s running a rudimentary bare-metal OS of his own writing. It’s like he’s taken the cyberdeck’s DIY aesthetic into the software as well.

What I like most about the spirit of the project is the idea of a long-term project that’s also a constant companion. Once you get past a terminal and an interpreter – [Andreas] is using LISP for both – everything else consists of small projects that you can check off one by one, that maybe don’t take forever, and that are limited in complexity by the hardware you’re working on. A simple text editor, some graphics primitives, maybe a sound subsystem. A way to read and write files in flash. I don’t love LISP personally, but I love that it brings interactivity and independence from an external compiler, making the it possible to develop the system on the system, pulling itself up by its own bootstraps.

Pretty soon, you could have something capable, and completely DIY. But it doesn’t need to be done all at once either. With a light enough computer, and a good basic foundation, you could keep it in your backpack and play “OS development” whenever you’ve got the free time. A DIY play OS for a sandbox computing platform: what more could a nerd want?

ChatGPT, Bing, And The Upcoming Security Apocalypse

Most security professionals will tell you that it’s a lot easier to attack code systems than it is to defend them, and that this is especially true for large systems. The white hat’s job is to secure each and every point of contact, while the black hat’s goal is to find just one that’s insecure.

Whether black hat or white hat, it also helps a lot to know how the system works and exactly what it’s doing. When you’ve got the source code, either because it’s open-source, or because you’re working inside the company that makes the software, you’ve got a huge advantage both in finding bugs and in fixing them. In the case of closed-source software, the white hats arguably have the offsetting advantage that they at least can see the source code, and peek inside the black box, while the attackers cannot.

Still, if you look at the number of security issues raised weekly, it’s clear that even in the case of closed-source software, where the defenders should have the largest advantage, that offense is a lot easier than defense.

So now put yourself in the shoes of the poor folks who are going to try to secure large language models like ChatGPT, the new Bing, or Google’s soon-to-be-released Bard. They don’t understand their machines. Of course they know how the work inside, in the sense of cross multiplying tensors and updating weights based on training sets and so on. But because the billions of internal parameters interact in incomprehensible ways, almost all researchers refer to large language models’ inner workings as a black box.

And they haven’t even begun to consider security yet. They’re still worried about how to construct obscure background prompts that prevent their machines from spewing hate speech or pornographic novels. But as soon as the machines start doing something more interesting than just providing you plain text, the black hats will take notice, and someone will have to figure out defense.

Indeed, this week, we saw the first real shot across the bow: a hack to make Bing direct users to arbitrary (bad) webpages. The Bing hack requires the user to already be on a compromised website, so it’s maybe not very threatening, but it points out a possible real security difference between Bing and ChatGPT: Bing gives you links to follow, and that makes it a juicy target.

We’re right on the edge of a new security landscape, because even the white hats are facing a black box in the AI. So far, what ChatGPT and Codex and other large language models are doing is trivially secure – putting out plain text – but Bing is taking the first dangerous steps into doing something more useful, both for users and black hats. Given the ease with which people have undone OpenAI’s attempts to keep ChatGPT in its comfort zone, my guess is that the white hats will have their hands full, and the black-box nature of the model deprives them of their best hope. Buckle your seatbelts.