Ahh DEF CON! One group of hackers shows off how they’ve broken into all sorts of cool devices and other hackers (ahem… “security professionals”) lament the fact that the first group were able to do so. For every joyous “we rooted the Nest thermostat, now we can have fun” there’s a doom-mongering “the security of network-connected IoT devices is totally broken!”.
And like Dr. Jekyll and Mr. Hyde, these two sides of the hacker persona can coexist within the same individual. At Hackaday, we’re totally paranoid, er, security conscious, but we also like to tinker with stuff. We believe that openness and security are best friends forever. If you can open it, you can see if it’s well-made inside, at least in principle. How do we reconcile this with the security professional’s demand for devices that only accept signed binary firmware updates, so that they can’t be tampered with?
We’ve got no answers, but we’ve got plenty of questions. Read on, and let us know what you think.
On Hackability vs. Security
How many home-automation hackers have gotten their start by “reversing” the simple radio protocol that those el-cheapo 433 MHz sockets use? We’ve seen our fair share of projects. (And an Arduino library.) Why? Because they’re cheap and because it’s easy. They’ve got five bits for the channel ID, everything else is straightforward, and you can use any one-dollar 433 MHz transmitter to get the job done. It’s like the RF garage-door openers of old, only simpler. For the tinkerer in us, these RF power sockets are a godsend.
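For a sense of just how simple these protocols are, here’s a minimal sketch of the sort of bit-banging involved, assuming a PT2262-style encoder and a bare 433 MHz transmitter hanging off a Raspberry Pi GPIO pin. The pin number, timings, and code are all illustrative, not taken from any particular socket:

```python
# Illustrative sketch: bit-bang a PT2262-style OOK frame on a Pi GPIO pin.
# Pin, pulse width, and code layout are assumptions; real sockets vary.
import time
import RPi.GPIO as GPIO

TX_PIN = 17          # GPIO wired to the 433 MHz transmitter's data pin (assumed)
PULSE = 0.00035      # base pulse width in seconds, ~350 us (typical, not universal)

def send_symbol(high1, low1, high2, low2):
    """One tri-state symbol = two pulses, widths in multiples of PULSE."""
    for level, width in ((1, high1), (0, low1), (1, high2), (0, low2)):
        GPIO.output(TX_PIN, level)
        time.sleep(PULSE * width)

def send_code(code, repeats=8):
    """Send a 12-symbol code: '0' and '1' address/data bits, 'F' = floating."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TX_PIN, GPIO.OUT)
    for _ in range(repeats):                  # receivers expect repetitions
        for c in code:
            if c == "0":
                send_symbol(1, 3, 1, 3)       # short-short
            elif c == "1":
                send_symbol(3, 1, 3, 1)       # long-long
            else:                             # 'F', the floating state
                send_symbol(1, 3, 3, 1)       # short-long
        send_symbol(1, 31, 0, 0)              # sync gap before the next repeat
    GPIO.cleanup()

# Five address bits set by the DIP switches, then unit/command bits:
send_code("01011FFFF0F1")                     # made-up example code
```

Timing from userspace Python is sloppy, but these receivers are forgiving, which is precisely the point.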
But from a security perspective, they’re a disaster. Of course, the sockets could be equipped with a much longer, unique ID to increase security. But that raises the barrier to DIY hacking (not that it would stop anyone) and still doesn’t protect you against replay attacks. Totally insecure!
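Worse, an attacker doesn’t even need to understand the encoding: with fixed codes, recording the raw on/off timings from a one-dollar receiver and playing them back is enough. A rough sketch of the idea, using the same hypothetical Pi wiring as above plus a receiver on a second pin (real captures need tighter timing than Python promises):

```python
# Illustrative replay sketch: capture raw OOK edge timings, then retransmit.
# RX_PIN/TX_PIN wiring is assumed, and timing jitter is glossed over.
import time
import RPi.GPIO as GPIO

RX_PIN, TX_PIN = 27, 17            # hypothetical receiver / transmitter pins
GPIO.setmode(GPIO.BCM)
GPIO.setup(RX_PIN, GPIO.IN)
GPIO.setup(TX_PIN, GPIO.OUT)

def record(seconds=2.0):
    """Return the waveform on the receiver as (level, duration) pairs."""
    timings = []
    last_level, last_time = GPIO.input(RX_PIN), time.time()
    end = last_time + seconds
    while time.time() < end:
        level = GPIO.input(RX_PIN)
        if level != last_level:            # an edge: store the elapsed pulse
            now = time.time()
            timings.append((last_level, now - last_time))
            last_level, last_time = level, now
    return timings

def replay(timings):
    """Blindly play the captured waveform back out of the transmitter."""
    for level, duration in timings:
        GPIO.output(TX_PIN, level)
        time.sleep(duration)
    GPIO.output(TX_PIN, 0)

capture = record()   # press the remote's button during these two seconds
replay(capture)      # the socket can't tell this apart from the real remote
```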
Now the risk of abuse of these RF-controlled power sockets is pretty small. Unlike the garage door example, nobody is breaking into your house by turning your hallway lights on and off. Even if they were, they’d have to get fairly close to your house to do so. If you’ve got someone willing to camp outside your house with RF gear, you’ve got trouble already. So perhaps the balance between hackability and security is ok for these devices?
Enter the IoT
This changes when one brings the Internet to the Things. Exposing yourself not just to your neighbors, but to the whole world, dramatically enlarges the attack surface. Not like we need to be told this. But for some device manufacturers, it was a shocking realization, and they’re responding by locking everything down, and we get sold this story that it’s to protect the consumer from the hacker. IoThings must be secured! You don’t want strangers screaming at your baby, right? (Hint: change the default password.)
But what happens when the hacker and the consumer are the same person? We all know that there’s an embedded Linux distribution inside the Sony BDP-S5100 Blu-ray player, and we all want at it, but Sony won’t let us play with it because they also want to prevent hackers from getting at it. (Not that it stops anyone.) It’s supposedly made more secure by not being modifiable.
We think not. And a decent consumer counterexample is the Nexus series of smartphones. With a few clicks you can unlock the bootloader and load a custom OS onto the device. Because unlocking the bootloader normally requires physical access, this isn’t particularly a security problem. And because you can flash whatever the heck you want in there, the phone is vastly modifiable. Want root? Get root. The Sony Blu-ray player could be the same.
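The key is that unlocking demands physical presence and wipes user data, so a remote attacker gains nothing and a thief can’t unlock your phone to read what’s on it. A toy model of the policy (the names and logic here are our illustration, not Google’s actual implementation):

```python
# Toy model of a Nexus-style bootloader policy (names are illustrative).
class Bootloader:
    def __init__(self):
        self.unlocked = False

    def unlock(self, physically_present, user_confirmed):
        """Unlock only with the user standing at the device; wipe on unlock."""
        if not (physically_present and user_confirmed):
            raise PermissionError("unlock requires physical presence")
        self.wipe_user_data()   # so a thief can't unlock just to read your data
        self.unlocked = True

    def boot(self, image, vendor_signature_valid):
        if self.unlocked or vendor_signature_valid:
            return f"booting {image}"
        raise PermissionError("locked: vendor-signed images only")

    def wipe_user_data(self):
        print("user data erased")

bl = Bootloader()
bl.unlock(physically_present=True, user_confirmed=True)
print(bl.boot("my-custom-os.img", vendor_signature_valid=False))
```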
It’s all about how you give control to the consumer to modify their own device, and there are more or less secure ways to do so. Then why do we see so many devices simply locked down, with no allowances for modifiability? Are the manufacturers just lazy? Or are hackers just too small a market to matter?
Hardware with a “Service”
We fear that there’s something yet more sinister afoot: the razor-blade pricing model. You get the razor for free, but you’ve got to buy the corresponding blades at a markup. Or you buy the inkjet printer cheap, but pay ridiculous sums for ink cartridges (Cory Doctorow touched on this in his DEF CON talk). Or you buy the Kodak Brownie camera for $1 in 1900, and make the Eastman Kodak film company dominant for nearly a century.
Now there’s nothing wrong with this pricing model as long as the consumer knows what they’re getting into ahead of time. But suppose you’re a hacker and you’d like to do something out of the ordinary?
Take the Wink Hub, which got busted at last year’s DEF CON. It’s a great home-automation device, and at $50 it’s cheap for what it does. But you have to use their app, run through their online service, to control the electronics in your own home. Want to connect to the Wink from your computer? Sorry. Your tablet? Nope. Run your own server? Dream on.
And why? We don’t think that it’s because of security, here. Instead, it’s that whatever data they’re harvesting from you is worth cash money, and they’ve got a vested interest in keeping you from hacking that away from them. And you can’t really blame them — their business model relies on the revenue stream. They can’t give away the razors if they can’t make up their money on the blades.
But as an unfortunate byproduct of this business model, if you want to integrate your Wink into your OpenHAB system, you’ve got to break your way into the device. Which means that you’re always going to be fighting with the manufacturer, and that’s a shame.
Ideas?
We hackers are Jekyll and Hyde; we insist on devices being open(able) and secure. And what’s worse, we’d like them cheap. It’s not clear we can have all of these things at once, and maybe it’s important to think about the tradeoffs. One man’s insecure firmware is another’s extensible and debuggable firmware, and what “security” even means may depend on whether you’re asking the consumer or the device manufacturer.
What’s your take on IoT security? Can one have too much security? Are security and hackability in conflict or are they mutual prerequisites? Do you have better examples? Can we hope for inexpensive, modifiable, and secure gear? Or do we just gotta keep hacking?
There’s a popular saying in the military: the tougher you make it for the enemy to get in, the tougher you make it for yourself to get out.
Too much security? Yes: if you’re trying to get into your car late at night in a questionable neighborhood, you don’t want to be standing outside the door for 60 seconds unlocking it. (But you also want to be sure no one is hiding in the back seat.)
Just thought I’d pipe up to say that it’s been a treat that you’ve (Elliot) been writing and posting more frequently. Always appreciate quality articles where the author genuinely cares about the material and engages their readership, whether a given read is my particular cup of tea or not.
+1
+2. Really well-written op-ed.
Likewise, same goes to the other new writers too.
Positioning Security and Hackability as opposite ends of a spectrum is a red herring, and one that companies with additional agendas like to use. Sure, bad security directly leads to good hackability, but the other way round isn’t necessarily true.
Security is about well defined and ensured access control: who can do what when. Hackability is the property that the user and/or owner of a device is allowed to do everything, always.
When a company locks down access to all but themselves, it means they either a) are lazy (or, less harshly: don’t have enough resources to implement something they don’t *directly* benefit from, and which might increase cost by generating additional support requests), b) have another agenda (like selling ink, coffee pods, or personal data), or c) are stupid, or are consulted by stupid people (though most theories based on “these people are simply stupid” turn out to be wrong). Yes, the company would need to implement a mechanism to let authorized parties bypass whatever they put in place for actual security, but the additional effort for that should be minimal in most cases (a sketch after the examples below shows what such a mechanism could look like).
You listed the Nexus bootloader already. Other examples:
* TPMs protect all sorts of stuff against all sorts of adversaries, but “taking ownership” and erasing everything can always be done with “physical presence” as the only authentication necessary.
* In the old days Nokia sold one of the first phones with NFC and a secure element in Europe. The security architecture was rather limited, and had basically an all-or-nothing model. Nokia held the keys for the secure element and would arrange with third parties that wanted to do something with your phone’s chip, so that having multiple applications side-by-side was possible. But they also had a service where you could say “give me the keys”, and Nokia would hand over all keys to the handset owner, to hack with as they like.
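As promised above, here’s a minimal sketch of what such an “authorized bypass” could look like for firmware updates: the device accepts an image signed by the vendor *or* by an owner key enrolled with physical presence. This leans on Python’s cryptography package purely for illustration; key storage, rollback protection, and the enrollment UX are all hand-waved:

```python
# Sketch: dual-key firmware acceptance (vendor key OR owner-enrolled key).
# Key storage, rollback protection, and enrollment UX deliberately omitted.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

class UpdateVerifier:
    def __init__(self, vendor_pubkey):
        self.keys = [vendor_pubkey]     # the owner slot starts empty

    def enroll_owner_key(self, owner_pubkey, physically_present):
        """Owner opts out of vendor-only updates, with physical presence."""
        if not physically_present:
            raise PermissionError("enrollment requires physical presence")
        self.keys.append(owner_pubkey)

    def accept(self, image, signature):
        for key in self.keys:
            try:
                key.verify(signature, image)
                return True             # signed by the vendor or by the owner
            except InvalidSignature:
                continue
        return False

# Demo with freshly generated keys standing in for real ones:
vendor_priv = Ed25519PrivateKey.generate()
owner_priv = Ed25519PrivateKey.generate()
v = UpdateVerifier(vendor_priv.public_key())

image = b"totally-custom-firmware-v1"
assert not v.accept(image, owner_priv.sign(image))        # locked down...
v.enroll_owner_key(owner_priv.public_key(), physically_present=True)
assert v.accept(image, owner_priv.sign(image))            # ...until you opt in
```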
In the end it comes down to all of us in the engineering business: whenever you build or design something for your company, make sure it stays hackable. Lobby your superiors to let you do that. Tell them that this is what has made the Internet great. Threaten to move to a more open company if possible or, as a fallback, use strategic incompetence: build in an easily discoverable back door that doesn’t threaten outward security.
As consumers we should lobby companies (especially the new, slick, hipster ones) to let us do what we want with hardware we bought. We can tell them that it’s not impossible to have both kinds of music, regular and alternative revenue models. Point them to the Amazon Kindle, for example, which is available in an ad-supported version at a discount.
In all of this, there is a very special place for “cloud” companies, which seem to be sprouting left and right. There is no such thing as “the cloud”, as the saying goes, only other people’s computers. Enterprises are putting operations on remote computers that have absolutely no business being there. You mentioned the Wink Hub. A less egregious example in my own home: I have IPTV via the largest German telecom, which requires a special DVR box. There is a mobile phone remote control app for this DVR. How does the app find the DVR in my WiFi? Well, both the app and the DVR connect to a central server, of course, which can then associate them with each other because both come from the same customer’s uplink. Talk about Rube Goldberg devices…
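For contrast, here’s roughly what serverless local discovery can look like: the app shouts on the LAN and the DVR answers, no customer-correlating middleman required. Pure-stdlib Python with a made-up port and protocol:

```python
# Sketch: serverless LAN discovery via UDP broadcast (port/protocol made up).
import socket

PORT = 50000                       # arbitrary, for illustration

def device_side():
    """Run on the DVR: answer anyone who asks 'who-is-dvr?'."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    while True:
        data, addr = s.recvfrom(64)
        if data == b"who-is-dvr?":
            s.sendto(b"dvr-here", addr)   # the app now knows our IP

def app_side(timeout=2.0):
    """Run on the phone/app: broadcast a query, collect the answer."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.settimeout(timeout)
    s.sendto(b"who-is-dvr?", ("255.255.255.255", PORT))
    try:
        data, addr = s.recvfrom(64)
        return addr[0] if data == b"dvr-here" else None
    except socket.timeout:
        return None                # no DVR on this network
```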
Some of these new devices aren’t devices at all. They are thin clients: you don’t buy a device, you buy the ability (not necessarily the right) to access a service that’s provided in the cloud. Amazon Echo is a current example of that. Why the hell does it need to be dongled to Amazon’s server farm? And don’t give me crap about computing power: snappy speech recognition isn’t that hard (well, maybe it is on the Echo hardware, but certainly not on the PC that can play The Witcher 3 in 4K at 60 fps that sits right next to it). The reason here is a new version of “perpetual beta” (or as the hip kids call it nowadays: DevOps, YOLO). Amazon wants to gather all the data it can, use it to develop a new software version, and then deploy the new version without you noticing. Yeah, they *could* do that with local (non-cloud) software, telemetry, and automatic software updates, but the “cloud” way is much easier. Plus you get customer data you can possibly use for future fun and profit.
As a warning I have an example from the past: anyone remember Chumby? That was a nice, lovable, *squashy* alarm clock with a colour touch screen and WiFi connectivity (in 2007!). You could customize your Chumby with all sorts of widgets, it could play internet radio, and it could display live information from a variety of sources. You could even write your own widgets fairly easily (in Flash). In theory, it was great. Due to a number of circumstances, Chumby Industries, Inc. is no more. The problem: your Chumby is now a very cute, but rather dead, paperweight. It turns out the Chumby stored almost nothing on board. Instead there was minimal software that connected to a central server and downloaded the configuration and widget software to be executed. This was great in the sense that Chumbys were easily configured on the manufacturer’s website. It’s not so great if anything between your alarm clock and the manufacturer (including the manufacturer themselves) fails.
So, in addition to my appeal to engineers above, there’s a special appeal for designers of cloud services: Don’t. If your “device” requires a specific remote service to operate, you’re doing it wrong. And it’s not so hard to do it right. Design for interfaces, not services, then publish interface specifications and allow the user/owner of the device to change the service used. There’s nothing wrong with building something that by default connects to your own service. But always make it so that that’s a special case of its general functionality. Again: this is not that hard. In the end you might even save yourselves some trouble, because you’re forced to think about your interfaces beforehand. To score extra geek points, you could provide the server software in an open source model. It doesn’t cost you anything extra, but will make sure that your device stays useful for CPU generations to come, even after your own company’s death.
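Concretely, “design for interfaces, not services” can be as small as making the endpoint a parameter with the vendor’s cloud as the default value. A sketch with invented names:

```python
# Sketch: the vendor cloud as a default, not a hard dependency (names invented).
import json
import urllib.request

class Thing:
    def __init__(self, endpoint="https://cloud.example-vendor.com/api"):
        # The vendor's server is just the out-of-the-box value; the owner
        # can point the device at their own implementation of the interface.
        self.endpoint = endpoint

    def report(self, reading):
        """POST a reading to whichever server the owner configured."""
        payload = json.dumps({"reading": reading}).encode()
        req = urllib.request.Request(
            self.endpoint, data=payload,
            headers={"Content-Type": "application/json"})
        return urllib.request.urlopen(req)

thing = Thing()                                             # vendor default
my_thing = Thing(endpoint="http://192.168.1.10:8080/api")   # your own server
```

The published part is the interface (here: JSON over HTTP POST); the vendor’s service is just one implementation of it.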
There’s one question I like to ask the engineers of companies I consult for: What happens if something happens to you, your infrastructure, or, god forbid, your company (e.g. consider the worst case: You’re being bought up by Google)?
In the end this is not only a question of Hackability, but of Dependability and Resiliency. A common trope is the Internet’s resilience to (nuclear) war, among other reasons for widespread, but partial, outage. Shouldn’t the things you design be held to the same standard?
—
Henryk Plötz
Greetings from Berlin
EE/CPE here. I’ve been trying to use open source for everything at my company since I arrived. Two of the other younger guys are doing this as well, and the older generation actually agrees and actively allows us to make these decisions. Hardware and software are only getting better over time.
Wow! Thanks for the very well thought-out and well written reply.
+1 with a vengeance.
There is another mentality to consider here: treating the results of research and development as the company’s intellectual property. And why give property away to other companies, or even customers?
When consulting for a startup I was asked to keep ALL the software I knew locked up under an NDA, even though I was using all open-source code, merely gluing open software together to build prototypes that would have taken years to build from scratch.
To make sure it would all remain open, I didn’t sign on for development, but for configuration and testing of open-source software and open hardware.
Closed code would have meant that if the company failed nobody could get the product working again, that I couldn’t accept similar jobs even years later, that I’d have to ask the company for a license to use my own code, no peer review, …
How much is each node in a botnet worth these days?
1. Sell an open-source router / Internet thing for £5 less than the going rate on eBay / Amazon Marketplace
2. Install modified firmware with botnet / backdoor before shipping
3. Rent out botnet, sell data mined from device, possibly use to discover and infect other vulnerable devices on the users network (with code for more botnet nodes, keyloggers, ransomware).
You can only stop that by accepting nothing but signed firmware, since the attacker has had physical access to the device. The manufacturer could accept only signed firmware internally but still allow booting from external memory (e.g. USB stick, SD card), though the attacker could bundle or pre-install that too, and most users would leave it plugged in.
If the device has an LCD, then it can show a warning, but for smaller Internet things (e.g. a light bulb) it’s tricky.
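Short of mandatory signed firmware, about the only defense a wary buyer has is comparing a dump of the device’s flash against a hash the manufacturer publishes. A sketch, assuming you can dump the image (SPI clip, vendor tool) and that the vendor publishes SHA-256 sums at all:

```python
# Sketch: check a dumped firmware image against a vendor-published SHA-256.
# Assumes you can read out the flash and that the vendor publishes hashes.
import hashlib

def image_sha256(path, chunk=1 << 20):
    """Hash the dump in chunks so large images don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

PUBLISHED = "..."   # the hash from the vendor's site (left elided here)
dumped = image_sha256("router-flash-dump.bin")
print("clean" if dumped == PUBLISHED else "image differs: reflash before use")
```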
I agree, security is only truly achievable with education. If you truly want to be secure you must truly want to learn. So the “victim” you describe is not really a victim, except of his own unwillingness to learn and think critically. Your argument applies not only to the eBay reseller, but to every link in the market chain (Walmart, customs, … could do the very same thing you describe). True security would require open-source hardware, and a direct or indirect understanding of this hardware. This may seem impossible to achieve, but that is because you only consider direct understanding. If society maintained formalized belief systems, statements of which could be verified with proof checkers, then anyone could verify hardware designs, etc.
How does this apply to the consumer? How much learning are they willing or able to do? No mention of Hello Barbie at DEF CON.
You can download MetaMath, a proof checker, along with its database set.mm and its book, all for free. If you read the book and reimplement, say, the Python version (around 300 lines) in a different language (to make sure you understood every step of the algorithm) and verify the database, then you can know with certainty that each of the 18,000 theorems’ proofs follows from the axioms, without even understanding what the theorems say. This is what I mean by indirect understanding.
Define a belief system to be a set of axioms, then using proof checkers one can verify proofs. As soon as *anyone* finds a contradiction, “true=false” is proven in this belief system, and *every* statement in the belief system is both true and false and hence useless. The only mitigation is switching to a different belief system, or dropping one of the conflicting axioms (and all theorems that rest upon it). So dropping an axiom is also a change of belief system…
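To make the “axioms plus proof checker” idea concrete, here’s a toy checker for the simplest possible case: formulas are opaque strings, the only inference rule is modus ponens, and a proof is a list of lines each justified by being an axiom or following from two earlier lines. This is purely illustrative and nothing like a real MetaMath verifier, which also handles substitution:

```python
# Toy proof checker: axioms + modus ponens over formulas-as-strings.
# Illustrates that checking a proof needs no grasp of what the formulas mean.
def check(axioms, proof):
    proved = []
    for line in proof:
        ok = line in axioms or any(
            f"{p} -> {line}" in proved for p in proved)  # modus ponens
        if not ok:
            return f"line not justified: {line!r}"
        proved.append(line)
    return "proof checks out"

axioms = {"p", "p -> q", "q -> r"}
print(check(axioms, ["p", "p -> q", "q", "q -> r", "r"]))  # proof checks out
print(check(axioms, ["r"]))                                # line not justified
```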
After “publish or perish” for Open Source, we will have “formalize or fossilize” for Open Understanding.
Since machine verification requires the proofs to be in a machine readable format, educational software could be built to train and maintain ready knowledge of axioms, theorems, proofs, belief systems… (so that the user can transform indirect understanding to direct understanding)
So when choosing what to universally teach children as a minimum (like literacy), refrain from domain-specific knowledge, but definitely teach everything required to understand proof checkers, belief systems, and so on, so that they know how to gather human knowledge into a coherent belief system and can verify, in an automated way, proofs of claims they do not believe, so they know when to stop being stubborn.
One cannot prove cryptographic primitives to be unbroken; one can only claim as an axiom that no algorithm to break them with high probability, and with tractable scaling of the requisite resources, is publicly known yet.
This even changes the concept of democracy: if humanity maintained a belief system, only one person needs to find an inconsistency and the rest are forced to resolve the problem. We could vote on things like “use known set of rules A (with known provable properties) until someone finds a set of rules B (with stronger desirable requested properties)”…
1. Define an all-encompassing belief system
2. ??????
3. Now that everybody has accepted and internalized not only the same belief system, but the same epistemological and metaphysical frameworks for talking about beliefs, and has collectively agreed to live lives guided by this system psychology and culture be damned, utopia is achieved.
The primary issue is that most current IoT architectures are “flat”, cloud-based architectures that force IoT communications, quite unnecessarily, to go through a proprietary cloud. The flat IoT → Cloud → User architecture lacks transparency for the user, quality security protocols, and the ability to layer security under your control. As the blog states, this is because the companies pushing cloud-based architectures want to sell your data. It also means that they, not you, have control over your IoT objects.
Some companies, such as Qualcomm with the AllJoyn framework (now under the AllSeen Alliance), are making a small step in the right direction in that their architecture is described as “cloud optional”. I am working on a “massively distributed” IoT architecture to eliminate the “middleman” (the IoT clouds). In this architecture you, the user, talk directly to “Object Servers” in your home or elsewhere, under YOUR control. These Object Servers are essentially routers on steroids which do much more than simply route requests to IoT objects; they add an extra layer of functionality and security. Right now, most easy-to-use IoT software seems to be cloud based. I currently use PHP on an RPi to let me remotely access my objects, but I would love to find software tailored for IoT that I can run on my own local servers to access my objects, such that I can very easily add and modify objects without a lot of recoding. Any suggestions?
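In the meantime, a local “Object Server” doesn’t have to be heavyweight; even Python’s standard library can put a dict of objects behind HTTP on an RPi. A bare-bones sketch (no authentication or TLS, both of which you’d want before trusting it with real hardware):

```python
# Bare-bones local "Object Server": stdlib HTTP in front of a dict of objects.
# No authentication or TLS here; add both before trusting it with anything.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

objects = {"porch-light": "off", "fan": "off"}   # stand-ins for real hardware

class ObjectHandler(BaseHTTPRequestHandler):
    def _reply(self, code, body):
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())

    def do_GET(self):                 # GET /objects -> current state of all
        if self.path == "/objects":
            self._reply(200, objects)
        else:
            self._reply(404, {"error": "unknown path"})

    def do_POST(self):                # POST /objects/<name>/<on|off>
        parts = self.path.strip("/").split("/")
        if (len(parts) == 3 and parts[0] == "objects"
                and parts[1] in objects and parts[2] in ("on", "off")):
            objects[parts[1]] = parts[2]   # here you'd drive the actual GPIO
            self._reply(200, {parts[1]: parts[2]})
        else:
            self._reply(404, {"error": "unknown object or command"})

HTTPServer(("0.0.0.0", 8080), ObjectHandler).serve_forever()
```

Adding an object is then one more dict entry, and anything on your LAN that can speak HTTP can flip it, no cloud in the loop.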
@Dave, I need some clarification on what you need here, but if it’s what I think you mean, then I’ve already been thinking about how to go about it for a while.
Can you PM me?
https://hackaday.io/Hacker404