Sufficiently Advanced Tech: Has Bugs

Arthur C. Clarke said that “Any sufficiently advanced technology is indistinguishable from magic”. He was a sci-fi writer, though, and not a security guy. Maybe it should read “Any sufficiently advanced tech has security flaws”. Because this is the story of breaking into a car through its headlight.

In a marvelous writeup, half-story, half CAN-bus masterclass, [Ken Tindell] details how car thieves pried off the front headlight of a friend’s Toyota, and managed to steal it just by saying the right things into the network. Since the headlight is on the same network as the door locks, pulling out the bulb and sending the “open the door” message repeatedly, along with a lot of other commands to essentially jam some other security features, can pull it off.

Half of you are asking what this has to do with Arthur C. Clarke, and the other half are probably asking what a lightbulb is doing on a car’s data network. In principle, it’s a great idea to have all of the electronics in a car be smart electronics, reporting their status back to the central computer. It’s how we know when our lights are out, or what our tire pressure is, from the driver’s seat. But adding features adds attack surfaces. What seems like magic to the driver looks like a gold mine to the attacker, or to car thieves.

With automotive CAN, security was kind of a second thought, and I don’t mean this uncharitably. The first goal was making sure that the system worked across all auto manufacturers and parts suppliers, and that’s tricky enough. Security would have to come second. And more modern cars have their CAN networks encrypted now, adding layers of magic on top of magic.

But I’m nearly certain that, when deciding to replace the simple current-sensing test of whether a bulb was burnt out, the engineers probably didn’t have the full cost of moving the bulb onto the CAN bus in mind. They certainly had dreams of simplifying the wiring harness, and of bringing the lowly headlight into the modern age, but I’d bet they had no idea that folks were going to use the headlight port to open the doors. Sufficiently advanced tech.

Hackaday Does Berlin

If you’re wondering why there was no newsletter last weekend, it was because we had our hands full with Hackaday Berlin. But boy, was it worth it! Besides being the launch party for the tenth annual Hackaday Prize, it was the first Hackaday gathering in Europe for four years, and it was awesome to see a bunch of familiar faces and meet many more new ones.

In a world that’s so interconnected, you might think that social media can take care of it all for you. And to some extent that’s true! If I could count the number of times I heard “I follow you on Twitter/Mastodon” over the course of the event!

But then there were tons of other meetings. People who are all interested in building and designing analog synthesizers, even some who live in the same urban megalopolis, meeting each other and talking about modules and designs. People who love flip dots. On the spot collaborations of people writing video drivers and people making huge LED walls. And somehow there’s still room for this to happen, even though the algorithms should have probably hooked these folks up by now.

From the perspective of hosting the conference, I get the most satisfaction from seeing these chance meetings and the general atmosphere of people learning not only new things, but new people. This cross-fertilization of friendships and project collaborations is what keeps our community vital, and especially coming out of the Pandemic Years, it’s absolutely necessary. I came away with a long list of new plans, and I’m sure everyone else did too. And for some reason, social media just isn’t a substitute. Take that, TwitFace!

Why A Community Hackerspace Should Be A Vital Part Of Being An Engineering Student

Travelling the continent’s hackerspaces over the years, I have visited quite a few spaces located in university towns. They share a depressingly common theme, of a community hackerspace full of former students who are now technology professionals, sharing a city with a university anxious to own all the things in the technology space and actively sabotaging the things they don’t own. I’ve seen spaces made homeless by university expansion, I’ve seen universities purposefully align their own events to clash with a hackerspace open night and discourage students from joining, and in one particularly egregious instance, I’ve even seen a university take legal action against a space because they used the name of the city, also that of the university, in the name of their hackerspace. I will not mince my words here; while the former are sharp practices, the latter is truly disgusting behaviour.

The above is probably a natural extension of the relationship many universities have with their cities, which seems depressingly often to be one of othering and exclusion. Yet in the case of hackerspaces I can’t escape the conclusion that a huge opportunity is being missed for universities to connect engineering and other tech-inclined students with their alumni, enhance their real-world skills, and provide them with valuable connections to tech careers.

Yesterday I was at an event organised by my alma mater, part of a group of alumni talking to them about our careers.  At the event I was speaking alongside an array of people with varying careers probably more glittering than mine, but one thing that came through was that this was something of a rare opportunity for many of the students, to talk to someone outside the university bubble. Yet here were a group of engineers, many of whom had interesting careers based locally, and in cases were even actively hiring. If only there were a place where these two groups could informally meet and get to know each other, a community based on a shared interest in technology, perhaps?

It’s not as though universities haven’t tried on the hackerspace front, but I’m sad to say that when they fill a room with cool machines for the students they’re rather missing the point. In some of the cases I mentioned above the desire to own all the things with their own students-only hackerspace was the thing that led to the community hackerspaces being sabotaged. Attractive as they are, there’s an important ingredient missing, they come from a belief that a hackerspace is about its facilities rather than its community. If you were to look at a room full of brand-new machines and compare it with a similar room containing a temperamental Chinese laser cutter and a pair of battered 3D printers, but alongside a group of seasoned engineers in an informal setting, which would you consider to be of more benefit to a student engineer? It should not be a difficult conclusion to make.

Universities value their local tech industry, particularly that which has some connection to your university. You want your students to connect with your alumni, to connect with the local tech scene, and to ultimately find employment within it. At the same time though, you’re a university, you see yourselves as the thought leader, and you want to own all the things. My point is that these two positions are largely incompatible when it comes to connecting your engineering students with the community of engineers that surround you, and you’re failing your students in doing so.

Thus I have a radical proposal for universities. Instead of putting all your resources on a sterile room full of machines for your students, how about spending a little into placing them in a less shiny room full of professional engineers on their off-time? Your local hackerspace is no threat to you, instead it’s a priceless resource, so encourage your students to join it. Subsidise them if they can’t afford the monthly membership, the cost is peanuts compared to the benefit. Above all though, don’t try to own the hackerspace, or we’re back to the first paragraph. Just sometimes, good things can happen in a town without the university being involved.

Study Hacker History, And Update It

Looking through past hacks is a great source of inspiration. This week, we saw [Russ Maschmeyer] re-visiting a classic hack by [Jonny Lee] that made use of a Wiimote’s IR camera to fake 3D, or at least provide a compelling parallax effect that’ll fool your brain, without any expensive custom hardware.

[Lee]’s original demo was stunning, and that alone is reason to revisit it. Using the Wiimote as the webcam was inspired back in 2007, because it meant that there was no hard computer vision work to be done in estimating the viewer’s position – the camera only sees IR LEDs anyway. The tradeoff is that you had to wear two IR LEDs on your head, calibrate it just right, and that only the person with the headset on gets the illusion just right.

This is why re-visiting the past can be fruitful. As [Russ] discovered, computing power is so plentiful these days that you could do face/eye position estimation with a normal webcam easier than you could source an old Wiimote. Indeed, he’s getting the positioning so accurate that he’s worried about to which eye he’s projecting the illusion. Clearly, it’s time for a revamp.

So here’s the formula: find a brilliant old hack, and notice if it was hampered by the state of technology back when it was done. Update this using modern conveniences, and voila! You might just find that you can take the idea further, simply because you have more tools in your toolbox. Nothing wrong with standing on the shoulders of giants.

But beware! Time isn’t sitting still for you either. As soon as you make your killer 3D vision hack, VR goggles will become cheap and ubiquitous. So get it done today, before your hack becomes inspiration for the future.

The Singularity Isn’t Here… Yet

So, GPT-4 is out, and it’s all over for us meatbags. Hype has reached fever pitch, here in the latest and greatest of AI chatbots we finally have something that can surpass us. The singularity has happened, and personally I welcome our new AI overlords.

Hang on a minute though, I smell a rat, and it comes in defining just what intelligence is. In my time I’ve hung out with a lot of very bright people, as well as a lot of not-so-bright people who nonetheless think they’re very clever simply because they have a bunch of qualifications and diplomas. Sadly the experience hasn’t bestowed God-like intelligence on me, but it has given me a handle on the difference between intelligence and knowledge.

My premise is that we humans are conditioned by our education system to equate learning with intelligence, mostly because we have flaky CPUs and worse memory, and that makes learning something a bit of an effort. Thus when we see an AI, a machine that can learn everything because it has a decent CPU and memory, we’re conditioned to think of it as intelligent because that’s what our schools train us to do. In fact it seems intelligent to us not because it’s thinking of new stuff, but merely through knowing stuff we don’t because we haven’t had the time or capacity to learn it.

Growing up and making my earlier career around a major university I’ve seen this in action so many times, people who master one skill, rote-learning the school textbook or the university tutor’s pet views and theories, and barfing them up all over the exam paper to get their amazing qualifications. On paper they’re the cream of the crop, and while it’s true they’re not thick, they’re rarely the special clever people they think they are. People with truly above-average intelligence exist, but in smaller numbers, and their occurrence is not a 1:1 mapping with holders of advanced university degrees.

Even the examples touted of GPT’s brilliance tend to reinforce this. It can do the bar exam or the SAT test, thus we’re told it’s as intelligent as a school-age kid or a lawyer. Both of those qualifications follow our educational system’s flawed premise that education equates to intelligence, so as a machine that’s learned all the facts it follows my point above about learning by rote. The machine has simply barfed up what it has learned the answers are onto the exam paper. Is that intelligence? Is a search engine intelligent?

This is not to say that tools such as GPT-4 are not amazing creations that have a lot of potential to do good things aside from filling up the internet with superficially readable spam. Everyone should have a play with them and investigate their potential, and from that will no doubt come some very interesting things. Just don’t confuse them with real people, because sometimes meatbags can surprise you.

Computers For Fun

The last couple years have seen an incredible flourishing of the cyberdeck scene, and probably for about as many reasons as there are individual ’deck designs. Some people get really into the prop-making, some into scrapping old tech or reusing a particularly appealing case, and others simply into the customization possibilities. That’s awesome, and they’re all different motivations for making a computer that’s truly your own.

But I really like the motivation and sentiment behind [Andreas Eriksen]’s PotatoP. (Assuming that his real motivation isn’t all the bad potato puns.) This is a small microcomputer that’s built on a commonly available microcontroller, so it’s not a particularly powerful beast – hence the “potato”. But what makes up for that in my mind is that it’s running a rudimentary bare-metal OS of his own writing. It’s like he’s taken the cyberdeck’s DIY aesthetic into the software as well.

What I like most about the spirit of the project is the idea of a long-term project that’s also a constant companion. Once you get past a terminal and an interpreter – [Andreas] is using LISP for both – everything else consists of small projects that you can check off one by one, that maybe don’t take forever, and that are limited in complexity by the hardware you’re working on. A simple text editor, some graphics primitives, maybe a sound subsystem. A way to read and write files in flash. I don’t love LISP personally, but I love that it brings interactivity and independence from an external compiler, making the it possible to develop the system on the system, pulling itself up by its own bootstraps.

Pretty soon, you could have something capable, and completely DIY. But it doesn’t need to be done all at once either. With a light enough computer, and a good basic foundation, you could keep it in your backpack and play “OS development” whenever you’ve got the free time. A DIY play OS for a sandbox computing platform: what more could a nerd want?

ChatGPT, Bing, And The Upcoming Security Apocalypse

Most security professionals will tell you that it’s a lot easier to attack code systems than it is to defend them, and that this is especially true for large systems. The white hat’s job is to secure each and every point of contact, while the black hat’s goal is to find just one that’s insecure.

Whether black hat or white hat, it also helps a lot to know how the system works and exactly what it’s doing. When you’ve got the source code, either because it’s open-source, or because you’re working inside the company that makes the software, you’ve got a huge advantage both in finding bugs and in fixing them. In the case of closed-source software, the white hats arguably have the offsetting advantage that they at least can see the source code, and peek inside the black box, while the attackers cannot.

Still, if you look at the number of security issues raised weekly, it’s clear that even in the case of closed-source software, where the defenders should have the largest advantage, that offense is a lot easier than defense.

So now put yourself in the shoes of the poor folks who are going to try to secure large language models like ChatGPT, the new Bing, or Google’s soon-to-be-released Bard. They don’t understand their machines. Of course they know how the work inside, in the sense of cross multiplying tensors and updating weights based on training sets and so on. But because the billions of internal parameters interact in incomprehensible ways, almost all researchers refer to large language models’ inner workings as a black box.

And they haven’t even begun to consider security yet. They’re still worried about how to construct obscure background prompts that prevent their machines from spewing hate speech or pornographic novels. But as soon as the machines start doing something more interesting than just providing you plain text, the black hats will take notice, and someone will have to figure out defense.

Indeed, this week, we saw the first real shot across the bow: a hack to make Bing direct users to arbitrary (bad) webpages. The Bing hack requires the user to already be on a compromised website, so it’s maybe not very threatening, but it points out a possible real security difference between Bing and ChatGPT: Bing gives you links to follow, and that makes it a juicy target.

We’re right on the edge of a new security landscape, because even the white hats are facing a black box in the AI. So far, what ChatGPT and Codex and other large language models are doing is trivially secure – putting out plain text – but Bing is taking the first dangerous steps into doing something more useful, both for users and black hats. Given the ease with which people have undone OpenAI’s attempts to keep ChatGPT in its comfort zone, my guess is that the white hats will have their hands full, and the black-box nature of the model deprives them of their best hope. Buckle your seatbelts.