We all know that what we mean by hacker around here and what the world at large thinks of as a hacker are often two different things. But as our systems get more and more connected to each other and the public Internet, you can’t afford to ignore the other hackers — the black-hats and the criminals. Even if you think your data isn’t valuable, sometimes your computing resources are, as evidenced by the recent attack launched from unprotected cameras connected to the Internet.
As [Elliot Williams] reported earlier, Trustwave (a cybersecurity company) recently announced they had found a backdoor in some Chinese voice over IP gateways. Apparently, the manufacturer left itself an undocumented root password on the device and, to make matters worse, used a proprietary challenge/response system for passwords that is insufficiently secure. Our point isn't really about this particular device, but if you are interested in the details of the algorithm, there is a tool on GitHub, created by [JacobMisirian] using the Trustwave data. Our interest is in the practice of leaving intentional backdoors in products. Once discovered, a backdoor like this can be used by anyone, not just the company that put it there.
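To see why a fixed, firmware-baked secret defeats the purpose of a challenge/response login, consider this Python sketch. Everything here is invented for illustration (the secret, the hash, the construction); it is not the algorithm Trustwave analyzed:

```python
import hashlib

# Invented example, NOT the scheme Trustwave analyzed: a challenge/response
# login whose "secret" is the same fixed string baked into every unit shipped.
FIXED_SECRET = b"vendor-backdoor"  # identical in every device's firmware

def expected_response(challenge: bytes) -> str:
    # What the device computes to validate a login attempt.
    return hashlib.md5(challenge + FIXED_SECRET).hexdigest()

def attacker_response(challenge: bytes) -> str:
    # Anyone who dumps one unit's firmware recovers FIXED_SECRET and can
    # answer any challenge from any device, forever.
    return hashlib.md5(challenge + FIXED_SECRET).hexdigest()

challenge = b"A1B2C3D4"
assert attacker_response(challenge) == expected_response(challenge)
```

Because nothing in the response depends on a per-device or per-owner secret, reverse engineering a single unit compromises the entire product line, which is exactly why a tool like the one on GitHub can exist at all.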
Any manufacturer could be malicious or simply compromised. So unless you can inspect the whole stack down to the device level, who knows what is lurking in your system? And the backdoor doesn't even have to come from the manufacturer. It was widely reported a few years ago that the NSA would intercept new PC hardware in transit, install spyware on it (literally, software for spying), and send it on to its destination.
In a similar case, network hardware maker Juniper reportedly had a backdoor in many of its firewall and router products. Although its exact source is uncertain, many reports claim it, too, was NSA-related. Sound like a conspiracy theory? Maybe, but the same thing happened to RSA Security between 2004 and 2013.
It is not hard to conceive of other governments doing these sorts of things either covertly or by pressuring companies to cooperate.
So what can we do? If you are paranoid enough, you will wind up building everything from discretes and writing all your own software. Obviously that's not practical. As the technology leaders of society, we must keep arguing that adding backdoor access (for a company or a government) to consumer electronics poses far more risk to society than it offers protection.
When a company builds in a backdoor, you might decide you trust the company not to use it. I doubt you assign it that much trust, but let's play this out. What if a disgruntled employee who knows about the backdoor decides to sell the login information on the side for a profit?
You may trust your government to use the aforementioned spyware (installed during shipping) responsibly, limited to chasing down bad guys. But at some point the vulnerability will likely become known, and black-hat tools will spring up to take advantage of it. There is no technological solution that will let the FBI in but keep the mafia out when they both know the same password. You can never patch every piece of hardware in the field, and we're talking about a vulnerability specifically designed to grant easy access.
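The "same password" problem is easy to see in code. This is a hypothetical sketch (the master string and the function are invented here, not taken from any real product):

```python
import hashlib

# Hypothetical illustration: a login check with a built-in "master" password
# intended only for authorized access. The code has no way to tell a federal
# agent from anyone else who learns the string.
MASTER_PASSWORD = "golden-key"  # invented for this sketch

def login(password: str, stored_hash: str) -> bool:
    if password == MASTER_PASSWORD:  # backdoor path: identity-blind
        return True
    # Normal path: check against the owner's password hash.
    return hashlib.sha256(password.encode()).hexdigest() == stored_hash

owner_hash = hashlib.sha256(b"hunter2").hexdigest()
assert login("hunter2", owner_hash)     # the legitimate owner gets in
assert login("golden-key", owner_hash)  # so does anyone who knows the string
```

The `if` statement cannot ask who is typing; once the string leaks, the backdoor belongs to everyone, and you can't revoke it on hardware already in the field.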
Just as important as stopping these backdoors is decriminalizing the security research that discovers them. If white-hat security researchers can find the backdoors, you can bet the black-hats can too. Right now, responsible disclosure carries the risk of being charged with a crime. That is an incentive for our smartest people to avoid looking for (or at least reporting) the holes in our digital defenses.
While we work through those societal issues, the best way to protect your own systems is to think like a hacker (the bad kind), limit access to what's absolutely necessary, and monitor what you can. Is this foolproof? No, but neither is the lock on your car. Locking your car doesn't make it impossible to steal, just harder. You have to assume someone will get in somehow; the best you can do is try to block their access or, at the very least, detect it.
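As one concrete take on "monitor what you can", here is a minimal Python sketch that probes your own machine for listening TCP ports and flags anything outside an allow-list. The expected-port set, host, and range are assumptions you would tailor to your own network:

```python
import socket

# Ports you expect to be open on this host; an assumption for this sketch.
EXPECTED = {22, 80, 443}

def open_ports(host: str = "127.0.0.1", ports=range(1, 1025)) -> set:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            # connect_ex returns 0 when the connection succeeds.
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

for port in sorted(open_ports() - EXPECTED):
    print(f"Unexpected listener on port {port}")
```

Run it from cron and diff the output over time: a port that appears out of nowhere is exactly the kind of access you want to detect, even if you couldn't block it.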