Shut the Backdoor! More IoT Cybersecurity Problems

We all know that what we mean by hacker around here and what the world at large thinks of as a hacker are often two different things. But as our systems get more and more connected to each other and the public Internet, you can’t afford to ignore the other hackers — the black-hats and the criminals. Even if you think your data isn’t valuable, sometimes your computing resources are, as evidenced by the recent attack launched from unprotected cameras connected to the Internet.

As [Elliot Williams] reported earlier, Trustwave (a cybersecurity company) recently announced they had found a backdoor in some Chinese voice-over-IP gateways. Apparently, the manufacturer left itself an undocumented root password on the device and, to make things worse, used a proprietary challenge/response system for passwords that is insufficiently secure. Our point isn’t really about this particular device, but if you are interested in the details of the algorithm, there is a tool on GitHub, created by [JacobMisirian] using the Trustwave data. Our interest is in the practice of leaving intentional backdoors in products. A backdoor like this, once discovered, can be used by anyone, not just the company that put it there.

Isolated Case?

Any manufacturer could be malicious or simply compromised. So unless you can inspect the whole stack down to the device level, who knows what is lurking in your system? And the backdoor doesn’t even have to come from the manufacturer. It was widely reported a few years ago that the NSA would intercept new PC hardware in transit, install spyware on it (literal spyware), and send it on to its destination.

In a similar case, network hardware maker Juniper reportedly had a backdoor in many of their firewall and router products. Although its exact source is not certain, many reports claim it was also NSA-related. Sound like a conspiracy theory? Maybe, but the same thing happened to RSA Security from 2004 to 2013.

It is not hard to conceive of other governments doing these sorts of things either covertly or by pressuring companies to cooperate.

Plan Forward?

So what can we do? If you are paranoid enough, you are going to wind up building everything from discretes and writing all your own software. Obviously that’s not practical. As the technology leaders of society, we must continue to argue that adding (company or government) backdoor access to consumer electronics carries far more risk to society than it provides protection.

When a company builds in a backdoor, you might decide you trust the company not to use it. I doubt you assign that much trust, but let’s play this out: what if a disgruntled employee who knows about it decides to sell the login information on the side for a profit?

You may trust your government to use the aforementioned spyware (installed during shipping) in a responsible way limited to chasing down bad guys. But it’s likely that at some point the vulnerability will become known and black-hat tools will spring to life to take advantage. There is no technological solution that will let the FBI in but will keep the mafia out when they both know the same password. You can never patch every piece of hardware in the field and we’re talking about a vulnerability specifically designed to grant easy access.

Just as important as stopping these backdoors is decriminalizing the security research that discovers them. If white-hat security researchers can find the backdoors, you can bet the black-hats will be able to. Right now, responsible disclosure carries with it the risk of being charged with a crime. That is an incentive for our smartest people to avoid looking for (or at least reporting) the holes in our digital defenses.

While we work through those societal issues, the best way to protect your own systems is to think like a hacker (the bad kind of hacker), limit access to what’s absolutely necessary, and monitor things where you can. Is this foolproof? No, but it is just like a lock on your car. Locking your car doesn’t make it impossible to steal, just harder. You are going to have to assume that someone is going to get in somehow. The best thing to do is try to block or, at least, detect their access.

33 thoughts on “Shut the Backdoor! More IoT Cybersecurity Problems”

    1. What, no one downloaded and read Vault 7? I think the entire lot including the unreleased is about 100k pages.
      Every intelligence agency in the world uses the same tricks. The majority of those tricks filter into the real world.
      “The Shadow Brokers who previously stole and leaked a portion of the NSA hacking tools and exploits is back with a Bang!”
      I may be wrong, but weren’t there like three separate rootkits discovered three to five years ago that no one knew what they were designed for because they hadn’t been activated at the time? Then those stories dropped off the radar.
      There be some bad mojo out there in the dark.

  1. Sometimes the backdoors are necessary. I put one in some years back on a device so that if the user forgot their password there was a way of resetting it remotely. We also had other levels: there was an admin password that would also work if you knew what it was, and that could be reset by someone who administered a number of these devices. In a situation where a number of these were installed and someone reset the admin passwords and then forgot the user passwords, we could remotely reset them without anyone having to hit the pencil reset switch.

    I know people who do serious white hat hacking and we worked out some ways that the backdoor wouldn’t be discoverable. One simple thing that devices can do is to simply refuse all access after some number of attempts. Don’t give out distinguishing messages such as “invalid password” vs “unknown user”. Even the scheme of announcing that “you’ve reached the maximum number of tries, come back later” tells the hacker that they’re on the right track. Our device simply says you have the wrong password. Try three times and then even if you get it right, it says you didn’t.
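    That scheme can be sketched in a few lines of Python (the class and names here are hypothetical illustrations; the device described above presumably does this in firmware):

```python
import hmac

MAX_ATTEMPTS = 3

class LoginGuard:
    """One generic error for every failure; after too many tries,
    even the correct password is rejected."""

    def __init__(self, users):
        self.users = users        # username -> password
        self.failures = {}        # username -> failed-attempt count

    def login(self, user, password):
        count = self.failures.get(user, 0)
        stored = self.users.get(user, "")
        # Unknown user and bad password share one code path and one
        # message, so a probe cannot distinguish the two cases.
        ok = user in self.users and hmac.compare_digest(stored, password)
        if count >= MAX_ATTEMPTS or not ok:
            self.failures[user] = count + 1
            return "wrong password"   # never "unknown user" or "locked out"
        return "ok"
```

    The lockout is silent: a guesser who stumbles on the right password after three failures still sees “wrong password”, which is exactly the behavior described above.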

      1. If that were true, then no security system would ever work. A system that requires some specific piece of information that only the good guys have, and has no other way of getting in, is about as secure as things get. The Wikipedia article is talking about a solution that is simply not visible. What we did is much more active: even if you stumble on the solution, you won’t get any joy if it wasn’t your first attempt.

        Imagine that I want to keep burglars out of a building. I could surround the building with steel plate to keep it safe until I ‘open’ it with a plasma torch. What you’re saying is that since a good guy can use a torch to cut through it, then a bad guy can too, so it’s not safe.

        I guess we’ll see as we approach another decade of these devices never being hacked into.

        A similar discussion cropped up when people started talking about autonomous cars. It’s inevitable that anything so dependent on communication with the outside is going to be hacked. The conversation came around to the fact that Jeeps can be disabled by hackers as well. It raises the question of why you’d even put external access into something that doesn’t need it. Why would a camera require a remote login in the first place? Simply link it to whatever is reading from it, then the external interface closes until you hit the pencil switch.

        1. I think the point is that a universal back door on all devices is a huge risk since one device being broken implies you can automate breaking the rest.

          You mentioned:

          A system that requires some specific piece of information that only the good guys have, and has no other way of getting in is about as secure as things get.

          If we’re talking about hardware, then a physical security reset (hold buttons while reboot, etc.) can offer similar functionality with greater security than a backdoor installed to allow remote access reset. As long as there is physical access (so not things that are installed on industrial level systems like pipelines, cell towers, etc) having hardware reset saves the user when they’ve lost their own access, but prevents automated remote exploit of a backdoor.
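          The hold-the-button-at-boot pattern looks something like this (a minimal sketch; the function names are hypothetical, and real firmware would sample an actual GPIO pin rather than call a function):

```python
import time

# Hypothetical factory defaults restored by the physical reset.
FACTORY_DEFAULTS = {"password": "changeme", "force_password_change": True}

def check_reset_at_boot(read_button, save_config, hold_seconds=5.0):
    """Restore factory credentials only if the physical reset button
    is held down through early boot. There is no remote code path to
    this: read_button stands in for sampling a real pin."""
    start = time.monotonic()
    while read_button():
        if time.monotonic() - start >= hold_seconds:
            save_config(dict(FACTORY_DEFAULTS))
            return True    # config was reset
        time.sleep(0.05)
    return False           # button released or never pressed: normal boot
```

          Requiring a multi-second hold guards against an accidental tap wiping the configuration, and because the check only runs at boot with the button physically pressed, nothing on the network can trigger it.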

          1. I’m totally onboard with access to critical guts requiring physical access. After all, if you can be in the room and just steal it, you might as well be allowed to enter passwords. The pencil switch is how we told the users to reset things if necessary.

        2. “I guess we’ll see as we approach another decade of these devices never being hacked into.”

          Where is one of these things? I give it 90 minutes once you post an IP here…

    1. I had a printserver that had no backdoor or reset. Forget the password? It’s a brick. End of story.

      Guess what – it ended up a brick :)

      Excessive for something so utterly benign. Why there was no physical reset button, I’ll never know.

    2. I would prefer to have to hit a pencil reset switch, if I forgot my password, over an insecure, bugged device. So the backdoor is not necessary; it is there for convenience.

    3. @Howard, did you disclose to your customers that you had credentials to the device? Was there a way for the customer to disable your access?

      I’m ok with your backdoor if it eases your maintenance of the device, but only if it’s disclosed, and there’s a way for the customer to disable it, and your backdoor is not accessible to every creep on the internet. And that’s assuming the device still works as designed if you do not have access, and as others have mentioned, a pencil button would restore config/credentials to a default yet secure state. Knowing how you protect that backdoor access is gravy, but ultimately wouldn’t affect me as I would have disabled it on initial setup.

      The threat you’re not acknowledging here is the insider threat. If you or your coworker were about to leave and wanted to wreak havoc on your way out, the opportunity is there with a backdoor. If it required a pencil button reset, I doubt Joe would be calling a bunch of customers on his way out asking them to reset their devices just so he could mess with them.

      “Nobody expects the Spanish inquisition!”

    4. If every device shares the same password, it can easily be brute-forced even if there is a lockout after some number of attempts. You can simply find all of those devices on the internet using Shodan and then try three (or even one) passwords per device. When you run out of devices and still don’t have the password, just wait a bit to make sure the lockout has ended and then retry until you succeed.
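       The arithmetic is what makes this work: a per-device lockout caps guesses on any one device, but a shared password lets the attacker pool the attempt budget of the whole fleet (the numbers below are purely illustrative):

```python
def shared_password_guesses(devices, attempts_before_lockout):
    """With one password common to every device, each device contributes
    its own pre-lockout attempt budget toward cracking that single
    password, so the total guess budget scales with fleet size."""
    return devices * attempts_before_lockout

# 100,000 exposed devices found by scanning, 3 tries each per lockout window:
budget = shared_password_guesses(100_000, 3)   # 300,000 guesses per window
```

       Give every device a unique password and that budget collapses back to three guesses per password, which is why the lockout only works when credentials aren’t shared.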

      1. I love that we live in an era where I can’t tell if you’re using Shodan as shorthand for clever bots and filtering, or if there’s literally a hacking tool named after the antagonist from System Shock.

        What a time to be alive.

    1. It is not possible to make a LASTING compromise between technology and freedom, because technology is by far the more powerful social force and continually encroaches on freedom through REPEATED compromises.
      “Technology Is A More Powerful Social Force Than The Aspiration For Freedom”, item 125
      Theodore John Kaczynski, Ph.D., also known as the Unabomber

      1. The solution to this, as far as I know, became dynamic cross-compilation. You don’t trust one compiler, but you trust that it would take far too much effort to compromise ALL compilers, especially if you had to do so with all in the same way. This is one reason why it’s important to have competing solutions for common tools.

  2. What we need are smart doors. If something looks fishy, the device keeps the suspect busy while contacting the owner and asking permission to shutdown or contact the authorities. A lot of these problems are simply because the black hats can operate completely without discovery. What if they were being watched and simultaneously hacked?

    1. People just never get it.
      A number of issues are addressed here that apply to various aspects of the general topic.
      “In a famous apocryphal story, Sutton was asked by reporter Mitch Ohnstad why he robbed banks. According to Ohnstad, he replied, “Because that’s where the money is.” The quote evolved into Sutton’s law, which is often invoked to medical students as a metaphor for focusing a workup on the most likely diagnosis, rather than wasting time and money investigating every conceivable possibility.

      In his autobiography, Sutton denied originating the pithy rejoinder:

      The irony of using a bank robber’s maxim as an instrument for teaching medicine is compounded, I will now confess, by the fact that I never said it. The credit belongs to some enterprising reporter who apparently felt a need to fill out his copy. I can’t even remember where I first read it. It just seemed to appear one day, and then it was everywhere.

      If anybody had asked me, I’d have probably said it. That’s what almost anybody would say … it couldn’t be more obvious.

      Or could it?

      Why did I rob banks? Because I enjoyed it. I loved it. I was more alive when I was inside a bank, robbing it, than at any other time in my life. I enjoyed everything about it so much that one or two weeks later I’d be out looking for the next job. But to me the money was the chips, that’s all.[1]

      The Redlands Daily Facts published the earliest documented example of Sutton’s law on March 15, 1952 in Redlands, California.[9]

      A corollary, the “Willie Sutton rule,” used in management accounting, stipulates that activity-based costing (in which activities are prioritized by necessity, and budgeted accordingly) should be applied where the highest costs occur, because that is where the biggest savings can be found.”

    2. Too complex. Too large an attack surface. Even easier to fool. Too hard to implement.

      You, sir, are living in a fool’s paradise, hoping that a machine could somehow magically know its owner like a mother instantly knows something is up with her child. This is a level of recognition that doesn’t actually exist even in life. (That’s why doctors exist.)

  3. To play the devil’s advocate, isn’t the lock on your car strong enough?

    Also, what is up with the rash of terrible spelling and grammar in the comments now? There have been errors here and there in just about every HaD article for a long time now, but many of the comments on this article are absolutely atrocious, whereas they previously tended to be pretty sensible.

  4. Tim Lloyd, Omega Engineering. He blew away just about everything the company had in their computer system with a pretty simple Netware hack. He was aided by the company not having a decent backup plan with tapes stored in more than one location. Lloyd was the sole person responsible for the backups.

    He failed to clean up his tracks as he was testing his hack, which is how he got caught. When the authorities got onto him, he erased the backup tape he’d stolen. If he had been smarter, he wouldn’t have done that and likely would have received a lesser sentence by turning the tape over intact so Omega could have been up and running again. If he’d been smarter than that, he’d have figured out how to clean up the digital evidence of his testing.
