We all know that what we mean by hacker around here and what the world at large thinks of as a hacker are often two different things. But as our systems get more and more connected to each other and the public Internet, you can’t afford to ignore the other hackers — the black-hats and the criminals. Even if you think your data isn’t valuable, sometimes your computing resources are, as evidenced by the recent attack launched from unprotected cameras connected to the Internet.
As [Elliot Williams] reported earlier, Trustwave (a cybersecurity company) recently announced they had found a backdoor in some Chinese voice over IP gateways. Apparently, they left themselves an undocumented root password on the device and — to make things worse — they use a proprietary challenge/response system for passwords that is insufficiently secure. Our point isn’t really about this particular device, but if you are interested in the details of the algorithm, there is a tool on GitHub, created by [JacobMisirian] using the Trustwave data. Our interest is in the practice of leaving intentional backdoors in products. A backdoor like this — once discovered — could be used by anyone else, not just the company that put it there.
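To see why a vendor-wide challenge/response amounts to little more than a shared password, here is a minimal sketch. This is a generic illustration, not the algorithm from the Trustwave report; the MD5 construction and the SHARED_SECRET constant are assumptions made up for the example.

```python
import hashlib
import os

# Illustrative only. The "secret" is a constant baked into every unit's
# firmware, so anyone who extracts it (or reverse-engineers the vendor's
# client tool) can answer any challenge on any device.
SHARED_SECRET = b"hunter2"  # assumed static string, identical on every device


def device_challenge() -> bytes:
    """Device side: issue a random challenge."""
    return os.urandom(8).hex().encode()


def compute_response(challenge: bytes) -> str:
    """Both sides derive the response from the challenge plus the static secret."""
    return hashlib.md5(challenge + SHARED_SECRET).hexdigest()


def device_check(challenge: bytes, response: str) -> bool:
    """Device side: grant root access if the response matches."""
    return response == compute_response(challenge)


# Anyone who knows SHARED_SECRET (the vendor, a disgruntled employee, or an
# attacker who pulled it out of a firmware dump) gets in every time.
challenge = device_challenge()
print(device_check(challenge, compute_response(challenge)))  # True
```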
Isolated Case?
Any manufacturer could be malicious or simply compromised. So unless you can inspect the whole stack down to the device level, who knows what is lurking in your system? And the backdoor doesn’t even have to come from the manufacturer. It was widely reported a few years ago that the NSA would intercept new PC hardware in transit, install spyware on it (literally, spyware), and put it back en route to its destination.
In another similar case, network hardware maker Juniper reportedly had a backdoor in many of their firewall and router products. Although the exact source of it is not certain, many reports claim it was also NSA-related. Sound like a conspiracy theory? Maybe, but the same thing happened to RSA Security between 2004 and 2013.
It is not hard to conceive of other governments doing these sorts of things either covertly or by pressuring companies to cooperate.
Plan Forward?
So what can we do? If you are paranoid enough, you are going to wind up building everything from discretes and writing all your own software. Obviously that’s not practical. As the technology leaders of society, we must continue to argue that adding (company or government) backdoor access to consumer electronics carries far more risk to society than it offers protection.
When a company builds in a backdoor, you might decide you trust the company not to use it. I doubt you assign that much trust, but let’s play this out. What if a disgruntled employee who knows about it decides to sell the login information on the side for a profit?
You may trust your government to use the aforementioned spyware (installed during shipping) in a responsible way limited to chasing down bad guys. But it’s likely that at some point the vulnerability will become known and black-hat tools will spring to life to take advantage. There is no technological solution that will let the FBI in but will keep the mafia out when they both know the same password. You can never patch every piece of hardware in the field and we’re talking about a vulnerability specifically designed to grant easy access.
Just as important as stopping these backdoors is decriminalizing the security research that discovers them. If white-hat security researchers can find the backdoors, you can bet the black-hats will be able to. Right now, responsible disclosure carries with it the risk of being charged with a crime. That is an incentive for our smartest people to avoid looking for (or at least reporting) the holes in our digital defenses.
While we work through those societal issues, the best way to protect your own systems is to think like a hacker (the bad kind of hacker), limit access to what’s absolutely necessary, and monitor things where you can. Is this foolproof? No, but it is just like a lock on your car. Locking your car doesn’t make it impossible to steal, just harder. You are going to have to assume that someone is going to get in somehow. The best thing to do is try to block or, at least, detect their access.
ring.cx sounds like a smart solution to the drawbacks you describe, doesn’t it?
https://blog.savoirfairelinux.com/en-ca/2015/internet-things-ring-connected-devices-iot/?noredirect=en_CA
Sometimes the backdoors are necessary. I put one in some years back on a device so that if the user forgot their password there was a way of resetting it remotely. We also had other levels: there was an admin password that would also work if you knew what it was, and that could be reset by someone who administered a number of these devices. In a situation where a number of these were installed and someone had reset the admin passwords and then forgotten the user passwords, we could remotely reset them without anyone having to hit the pencil reset switch.
I know people who do serious white hat hacking and we worked out some ways that the backdoor wouldn’t be discoverable. One simple thing that devices can do is simply refuse all access after some number of attempts. Don’t give out distinguishing messages such as “invalid password” vs. “unknown user”. Even a scheme that announces “you’ve reached the maximum number of tries, come back later” tells the hacker they’re on the right track. Our device simply says you have the wrong password. Try three times and then, even if you get it right, it says you didn’t.
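Roughly, the login path might look like this minimal sketch; the single hard-coded credential and the in-memory counter are stand-ins for whatever the real device stores, not a description of it.

```python
import hmac

MAX_ATTEMPTS = 3
REAL_PASSWORD = "correct horse"  # assumed stand-in for the stored credential
failed_attempts = 0


def try_login(password: str) -> str:
    """Return the same vague message for every failure mode."""
    global failed_attempts
    # Once the limit is reached, reject everything, even the right password,
    # and keep the message identical so a guesser can't tell they found it.
    if failed_attempts >= MAX_ATTEMPTS:
        return "wrong password"
    if hmac.compare_digest(password.encode(), REAL_PASSWORD.encode()):
        failed_attempts = 0
        return "ok"
    failed_attempts += 1
    return "wrong password"


# Three bad guesses, then even the correct password is refused:
for guess in ["admin", "12345", "password", "correct horse"]:
    print(try_login(guess))  # prints "wrong password" all four times
```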
https://en.wikipedia.org/wiki/Security_through_obscurity
You didn’t get the message. If a good guy can do it a bad guy can do it.
If that were true, then no security system would ever work. A system that requires some specific piece of information that only the good guys have, and has no other way of getting in, is about as secure as things get. The Wikipedia article is talking about a solution that is simply not visible. What we did is much more active: even if you stumble on the solution, you won’t get in if it wasn’t your first attempt.
Imagine that I want to keep burglars out of a building. I could surround the house with steel plate to keep it safe until I ‘open’ it with a plasma torch. What you’re saying is that since a good guy can use a torch to cut through it, then a bad guy can too, so it’s not safe.
I guess we’ll see as we approach another decade of these devices never being hacked into.
A similar discussion cropped up when people started talking about autonomous cars. It’s inevitable that anything so dependent on communication with the outside is going to be hacked. The conversation came around to the fact that Jeeps can be disabled by hackers as well. It raises the question of why you’d even put external access into something that doesn’t need it. Why would a camera require a remote login in the first place? Simply link it to whatever is reading from it, then the external interface closes until you hit the pencil switch.
I think the point is that a universal back door on all devices is a huge risk since one device being broken implies you can automate breaking the rest.
You mentioned:
If we’re talking about hardware, then a physical security reset (hold buttons while reboot, etc.) can offer similar functionality with greater security than a backdoor installed to allow remote access reset. As long as there is physical access (so not things that are installed on industrial level systems like pipelines, cell towers, etc) having hardware reset saves the user when they’ve lost their own access, but prevents automated remote exploit of a backdoor.
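As a concrete sketch of that pattern, something like the boot-time check below could do it; the button-reading and credential-wiping helpers here are hypothetical placeholders for board-specific code.

```python
import time

HOLD_SECONDS = 5  # how long the recessed button must be held during boot


def reset_button_pressed() -> bool:
    """Hypothetical stand-in for a board-specific GPIO read, e.g. a pin
    pulled low while the recessed 'pencil' button is held down."""
    return False


def restore_default_credentials() -> None:
    """Hypothetical: wipe only the stored credentials, keep the rest of the config."""
    print("credentials restored to factory defaults")


def maybe_factory_reset() -> None:
    """Run once, early in boot. Only someone physically holding the button
    for HOLD_SECONDS gets the reset; there is nothing to reach remotely."""
    start = time.monotonic()
    while reset_button_pressed():
        if time.monotonic() - start >= HOLD_SECONDS:
            restore_default_credentials()
            return
        time.sleep(0.05)


maybe_factory_reset()
```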
What you say is true. Just remember that social hacking/social engineering is often easier than the virtual approach. There is also this: https://xkcd.com/538/
I’m totally onboard with access to critical guts requiring physical access. After all, if you can be in the room and just steal it, you might as well be allowed to enter passwords. The pencil switch is how we told the users to reset things if necessary.
“I guess we’ll see as we approach another decade of these devices never being hacked into.”
Where is one of these things? I give it 90 minutes once you post an IP here…
I had a printserver that had no backdoor or reset. Forget the password? It’s a brick. End of story.
Guess what – it ended up a brick :)
Excessive for something so utterly benign. Why no physical reset button, I’ll never know.
I would prefer to have to hit a pencil reset switch, if I forgot my password, over an insecure, bugged device. So the backdoor is not necessary; it exists only for convenience.
@Howard, did you disclose to your customers that you had credentials to the device? Was there a way for the customer to disable your access?
I’m ok with your backdoor if it eases your maintenance of the device, but only if it’s disclosed, and there’s a way for the customer to disable it, and your backdoor is not accessible to every creep on the internet. And that’s assuming the device still works as designed if you do not have access, and as others have mentioned, a pencil button would restore config/credentials to a default yet secure state. Knowing how you protect that backdoor access is gravy, but ultimately wouldn’t affect me as I would have disabled it on initial setup.
The threat you’re not acknowledging here is the insider threat. If you or your coworker were about to leave and wanted to wreak havoc on your way out, the opportunity is there with a backdoor. If it required a pencil button reset, I doubt Joe would be calling a bunch of customers on his way out asking them to reset their devices just so he could mess with them.
“Nobody expects the Spanish inquisition!”
Come on then, name the device :)
If you have one password for every device then it can be easily brute forced even if there is a lockout after some number of attempts. You can simply find all of those devices on the internet using Shodan and then try three (or even one) passwords per device. When you run out of devices and still don’t have the password, just wait a bit to make sure the lockout has ended and then retry until you succeed.
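Back-of-the-envelope numbers make the point; the fleet size, attempt limit, and lockout window below are all assumptions for illustration.

```python
# Rough arithmetic, not a tool: with one password shared across a fleet,
# per-device lockouts barely slow down a distributed guesser.
devices = 10_000         # assumed number of exposed units (a Shodan-style census)
guesses_per_window = 3   # attempts each device allows before locking out
window_minutes = 15      # assumed lockout duration

windows_per_day = 24 * 60 // window_minutes
guesses_per_day = devices * guesses_per_window * windows_per_day
print(f"{guesses_per_day:,} guesses per day against the one shared password")
# About 2.9 million a day: the lockout protects an individual unit,
# not the secret that every unit has in common.
```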
I love that we live in an era where I can’t tell if you’re using Shodan as shorthand for clever bots and filtering, or if there’s literally a hacking tool named after the antagonist from System Shock.
What a time to be alive.
Indeed, times are beautiful, because such a tool exists. I was talking about shodan.io. It’s a search engine for Internet-connected devices.
Can you trust your compiler?
Here is a paper written by Ken Thompson on how to bake malware into the binary code of a compiler:
http://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf
Yes I can, https://en.wikipedia.org/wiki/CompCert
The solution to this, as far as I know, became diverse double-compiling. You don’t trust one compiler, but you trust that it would take far too much effort to compromise ALL compilers, especially if you had to do so with all of them in the same way. This is one reason why it’s important to have competing solutions for common tools.
https://www.dwheeler.com/trusting-trust/
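Roughly, the check Wheeler describes at that link can be sketched as below. The build.sh wrapper is hypothetical, standing in for whatever actually builds the compiler from source, and the comparison only works if the build is reproducible bit-for-bit.

```python
import hashlib
import subprocess


def build(compiler: str, source_dir: str, output: str) -> None:
    """Build the compiler-under-test's source tree with the given compiler.
    build.sh is a hypothetical wrapper around the real configure/make steps."""
    subprocess.run(
        ["./build.sh", f"CC={compiler}", f"SRC={source_dir}", f"OUT={output}"],
        check=True,
    )


def digest(path: str) -> str:
    """SHA-256 of a file, used for the bit-for-bit comparison."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def ddc_check(source_dir: str, suspect_binary: str, trusted_cc: str) -> bool:
    """Rebuild the suspect compiler from its own source via an independent,
    trusted compiler, self-host it, then compare against what we were shipped."""
    build(trusted_cc, source_dir, "stage1")   # source built by the trusted compiler
    build("./stage1", source_dir, "stage2")   # source built by that result
    return digest("stage2") == digest(suspect_binary)


# A Thompson-style trojan lives only in the suspect binary, not in the source,
# so the regenerated stage2 won't match unless the independent compiler was
# subverted in exactly the same way (assuming deterministic, reproducible builds).
```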
Wipe the foam from your mouth and please take the time to be articulate.
All unmanned and some manned security has exploitable weaknesses; that’s a given. Someone put it together, and someone else will take it apart faster.
What we need are smart doors. If something looks fishy, the device keeps the suspect busy while contacting the owner and asking permission to shutdown or contact the authorities. A lot of these problems are simply because the black hats can operate completely without discovery. What if they were being watched and simultaneously hacked?
They would just do a denial of service on the monitoring system by triggering all the alerts constantly, using some poor chap’s compromised computer.
Too complex. Too large an attack surface. Even easier to fool. Too hard to implement.
You, sir, are living in a fool’s paradise, hoping that a machine could somehow magically know its owner like a mother instantly knows something is up with her child. This is a level of recognition that doesn’t actually exist even in life. (That’s why doctors exist.)
Jumper reset ftw.
Gotta love a simple, elegant solution.
To play the devil’s advocate, isn’t the lock on your car strong enough?
Also, what is up with the rash of terrible spelling and grammar in the comments now? There have been errors here and there in just about every HaD article for a long time now, but many of the comments on this article are absolutely atrocious, whereas they previously tended to be pretty sensible.
People (like me) with cell phones.
Tim Lloyd, Omega Engineering. He blew away just about everything the company had in their computer system with a pretty simple Netware hack. He was aided by the company not having a decent backup plan with tapes stored in more than one location. Lloyd was the sole person responsible for the backups.
He failed to clean up his tracks as he was testing his hack, which is how he got caught. When the authorities got onto him, he erased the backup tape he’d stolen. If he had been smarter, he wouldn’t have done that and likely would have received a lesser sentence by turning the tape over intact so Omega could have gotten up and running again. If he’d been smarter than that, he’d have figured out how to clean up the digital evidence of his testing.