Bad RSA Library Leaves Millions Of Keys Vulnerable

So, erm… good news everyone! A vulnerability has been found in a software library responsible for generating RSA key pairs used in hardware chips manufactured by Infineon Technologies AG. The vulnerability, dubbed ROCA, allows an attacker, via a Coppersmith’s attack, to compute the private key starting with nothing more than the public key, which pretty much defeats the purpose of asymmetric encryption altogether.

Affected hardware includes cryptographic smart cards, security tokens, and other secure hardware chips produced by Infineon Technologies AG. The vulnerable library is also integrated into authentication, signature, and encryption tokens from other vendors, as well as into chips used for Trusted Boot of operating systems. Major vendors including Microsoft, Google, HP, Lenovo, and Fujitsu have already released software updates and guidelines for mitigation.

The researchers found and analysed vulnerable keys in various domains including electronic citizen documents (750,000 Estonian identity cards), authentication tokens, trusted boot devices, software package signing, TLS/HTTPS keys, and PGP. The currently confirmed count of vulnerable keys is about 760,000, but the real number could be as much as two to three orders of magnitude higher.
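
For those who want to test their own keys, the researchers describe a fast fingerprint check that needs only the public modulus. Below is a minimal, illustrative Python sketch of the underlying idea (our reading of the published detection approach, not the official roca-detect tool): moduli produced by the flawed library leave residues modulo small primes that always land inside the subgroup generated by 65537.

```python
# Illustrative sketch only: primes from the flawed library have the form
# k*M + (65537^a mod M), so the modulus N = p*q leaves residues modulo small
# primes that always fall inside the subgroup generated by 65537. The prime
# list here is shortened for readability; real detectors use a longer one.
SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97, 103, 107, 109, 127]

def in_65537_subgroup(n, p):
    """True if n mod p is some power of 65537 modulo p."""
    target, x = n % p, 1
    for _ in range(p):            # the subgroup order divides p-1
        if x == target:
            return True
        x = (x * 65537) % p
    return False

def looks_like_roca(modulus):
    """A modulus that passes the subgroup test for every prime is a likely ROCA key."""
    return all(in_65537_subgroup(modulus, p) for p in SMALL_PRIMES)
```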

Devices dating back to at least 2012 are affected, despite being NIST FIPS 140-2 and CC EAL 5+ certified. The vulnerable chips were not necessarily sold directly by Infineon Technologies AG, as they can be embedded inside devices from other manufacturers.

Continue reading “Bad RSA Library Leaves Millions Of Keys Vulnerable”

Aussies Propose Crackdown On Insecure IoT Devices

We’ve all seen the stories about IoT devices with laughably poor security, both within our community as fresh vulnerabilities are exposed and ridiculed, and more recently in the wider world as stories of easily compromised baby monitors have surfaced in mass media outlets. It’s a problem with its roots in IoT device manufacturers treating their products as appliances rather than as software, and in a drive to produce them at the lowest possible price.

The Australian government have announced that IoT security is now firmly in their sights, floating a possible certification scheme with a logo that manufacturers would be able to use if their products meet a set of requirements. Such basic security features as changeable, non-guessable, and non-default passwords are being mentioned, though we’re guessing that would also include a requirement not to expose ports to the wider Internet. Most importantly, it is said to include a requirement for software updates to fix known vulnerabilities. It is reported that they are also in talks with other countries to harmonize some of these standards internationally.

It is difficult to see how any government could enforce such a scheme by technical means such as disallowing Internet connection to non-compliant devices, and if that were what was being proposed it would certainly cause us some significant worry. It’s therefore likely that this will be a consumer certification scheme similar to, for example, the safety standards for toys, administered as devices are imported and through enforcement of trading standards legislation. The tone in which it’s being sold to the public is one of “think of the children”, in terms of compromised baby monitors, but as long-time followers of Hackaday will know, that’s only a small part of the wider problem.

Thanks [Bill Smith] for the tip.

Baby monitor picture: Binatoneglobal [CC BY-SA 3.0].

Encryption For The Most Meager Of Devices

It seems that new stories of insecure-by-design IoT devices surface weekly, as the uneasy boundary between the appliance and the Internet-connected computer is explored. Manufacturers like shifting physical items rather than software patches, and firmware developers may not always be from the frontline of Internet security.

An interesting aside on the security of IoT traffic comes from [boz], who has taken a look at encryption of very low data rate streams from underpowered devices. Imagine perhaps that you have an Internet-connected sensor which supplies only a few readings a day that you would like to keep private. Given that your sensor has to run on tiny power resources, a super-powerful processor is out of the question, so how do you secure your data? Simple encryption schemes are too easily broken.

He makes the argument for encryption from a rather unexpected source: a one-time pad. We imagine a one-time pad as a book with pages of numbers, perhaps as used by spies in Cold-War-era East Berlin or something. Surely storing one of those would be a major undertaking! In fact a one-time pad is simply a sequence of random keys that are stepped through, one per message, and if your messages amount to only a few bytes a day then you need no more than a few K of pad data to securely encrypt them for years. Given that even pretty meager modern microcontrollers have significant amounts of flash at their disposal, pad storage for sensor data of this type is no longer a hurdle.
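
As a concrete illustration (our own sketch, not [boz]’s code), the scheme boils down to XORing each message with the next unused slice of a pre-shared random pad, and never touching those pad bytes again:

```python
import os

# In practice the pad is generated once, stored in the MCU's spare flash, and a
# copy kept by the receiver; os.urandom() stands in for that provisioning step here.
PAD = os.urandom(4096)
offset = 0            # persisted between messages so no pad byte is ever reused

def encrypt(payload):
    """XOR the payload with the next unused pad bytes; return (offset, ciphertext)."""
    global offset
    key = PAD[offset:offset + len(payload)]
    assert len(key) == len(payload), "pad exhausted - provision a new one"
    start, offset = offset, offset + len(payload)
    return start, bytes(p ^ k for p, k in zip(payload, key))

def decrypt(start, ciphertext):
    """Decryption is the same XOR against the same pad bytes."""
    key = PAD[start:start + len(ciphertext)]
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# A daily sensor reading costs only as many pad bytes as it is long.
start, ct = encrypt(b"T=21.5C,RH=40%")
assert decrypt(start, ct) == b"T=21.5C,RH=40%"
```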

Where some controversy might creep in is the suggestion that a pad could be recycled when its last entry has been used. You don’t have to be a cryptologist to know that reusing a one-time pad weakens the integrity of the cypher, but he has a valid answer there too: if the repeat cycle is five years, your opponent must have serious dedication to capture all the packets, and at that point it’s worth asking yourself just how sensitive the sensor data in question really is.

Screwdriving

Screwdriving! It’s like wardriving, but instead of discovering WiFi networks, the aim is to discover Bluetooth Low Energy (BLE) devices of a special kind: adult toys. Yes, everything’s going to be connected, even vibrators. Welcome to the 21st century.
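
The “driving” part needs nothing exotic: any BLE-capable laptop can enumerate whatever is advertising nearby. As a quick illustrative sketch (our own, not [Alex Lomas]’s tooling), the third-party bleak library will do it in a handful of lines:

```python
import asyncio
from bleak import BleakScanner   # third-party BLE library: pip install bleak

async def main():
    # Passive scan: any advertising BLE device within range shows up, toys included.
    devices = await BleakScanner.discover(timeout=10.0)
    for d in devices:
        print(d.address, d.name)

asyncio.run(main())
```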

Security researcher [Alex Lomas] recently found that a lot of BLE-enabled adult toys are completely vulnerable to malicious attacks. In fact, they are basically wide open to anyone by design.

“Adult toys lend themselves to being great testbeds for IoT research: they’re BLE, they’re relatively cheap, they’re accessible and have companion apps for the full spectrum of testing.”

Yes… great test beds… Erm, anyway, [Alex Lomas] found that on every Bluetooth adult toy analysed there is either no PIN or password protection at all, or the PIN is static and generic (0000 / 1234). Manufacturers don’t want to go through the hassle, presumably because sex toys lack the displays that would enable classic Bluetooth pairing, with a random PIN and so on. While this might be a valid point, almost all electronic appliances have an “ON/OFF” button for input and some LED (or even vibration, in this case) that allows some form of output. It could be done, and it’s not like vibrators are the only minimalistic appliances out there in the IoT world.

Although BLE security is crippled by design (PDF), it is possible to add security on top of flawed protocols; the average web browser does it all the time. The communications don’t have to be clear text, where you can literally see “Vibrate:10” flying around in packets. Encryption could be implemented on top of the BLE link between the app and the device, for instance. Understandably, security in some devices is not absolutely critical. That being said, the security bar doesn’t have to be lowered to zero: it’s not safe for work or play.
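
To sketch what that application-layer approach could look like (purely our own illustration, not anything shipping in these devices), here is a hypothetical command path that seals every write with an authenticated cipher under a key shared out-of-band during first-time setup, using the PyCA cryptography package:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# Hypothetical pre-shared key, provisioned once by the companion app and stored on the device.
key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)

def seal_command(command: bytes) -> bytes:
    """App side: encrypt and authenticate a command before writing it to the BLE characteristic."""
    nonce = os.urandom(12)                       # unique 96-bit nonce per message
    return nonce + aead.encrypt(nonce, command, None)

def open_command(blob: bytes) -> bytes:
    """Device side: anything that doesn't authenticate under the shared key raises InvalidTag."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aead.decrypt(nonce, ciphertext, None)

# The radio now carries opaque ciphertext rather than a plaintext "Vibrate:10" packet.
assert open_command(seal_command(b"Vibrate:10")) == b"Vibrate:10"
```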

[via Arstechnica]

Your Hard Disk As An Accidental Microphone

We’re used to attaching peripherals to our computers when we need them to interact with the world around them. An Arduino Uno needs a shield to turn on the lights, for example. Just sometimes, though, there is the potential for unintended interaction between a computer and the real physical world which surrounds it, and it’s one of those moments that [Alfredo Ortega] has uncovered in his talk at the EKO Party conference in Buenos Aires. He demonstrates how a traditional spinning-rust computer hard disk interacts with vibration in its surroundings, and can either become a rudimentary microphone, or be compromised by sound at its resonant frequency (PDF).

It seems that you can measure the response time of the hard drive head during a read operation without requiring any privilege escalation. This timing varies with vibration, so it can be used to reconstruct the sound the drive is exposed to. Thus it becomes a microphone, albeit not a very good one, with a profoundly bass-heavy response. He goes on to investigate the effect of sound on the drive, discovering that it has a resonant frequency at which the vibration causes it to become unreadable.
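
To make the timing idea concrete, here is a rough, Linux-only sketch of our own (not [Alfredo Ortega]’s tool): time repeated uncached reads from a file on the spinning disk and treat the latencies as a crude, very low bandwidth sample stream. The file path is a placeholder.

```python
import os
import time

TARGET = "some_large_file_on_the_spinning_disk.bin"   # hypothetical path, at least 1 MiB
BLOCK = 4096

def sample_latencies(n):
    """Collect n read latencies; dropping the page cache forces the head to do real work."""
    samples = []
    fd = os.open(TARGET, os.O_RDONLY)
    try:
        for i in range(n):
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)  # evict cached pages
            os.lseek(fd, (i * BLOCK) % (1 << 20), os.SEEK_SET)
            start = time.perf_counter()
            os.read(fd, BLOCK)
            samples.append(time.perf_counter() - start)         # varies with vibration
    finally:
        os.close(fd)
    return samples
```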

Sadly the talk itself appears not yet to be online, but given that previous years’ EKO talks are on YouTube, it is likely that when the dust has settled you will be able to see it in full. Meanwhile he’s released a video demonstration, which we’ve posted below the break.

Continue reading “Your Hard Disk As An Accidental Microphone”

Ask Hackaday: Security Questions And Questionable Securities

Your first school. Your mother’s maiden name. Your favorite color. These are the questions we’re so used to answering when we’ve forgotten a password and need to get back into an account. They’re not passwords, yet in many cases they have just as much power. Despite this, they’re often based on incredibly insecure information.

Sarah Palin’s Yahoo account is perhaps the best example of this. In September 2008, a Google search netted a birthdate, ZIP code, and where the politician met her spouse. This was enough to reset the account’s password and gain full access to the emails inside.

While we’re not all public figures with our life stories splashed across news articles online, these sorts of questions aren’t exactly difficult to answer. Birthdays are celebrated across social media, and the average online quiz would net plenty of other answers. The problem is that these questions offer the same control over an account that a password does, but the answers are not guarded in the same way a password is.

For this reason, I have always used complete gibberish when filling in security questions. Whenever I did forget a password, I was generally lucky enough to solve the problem through a recovery e-mail. Recently, however, my good luck ran out. It was a Thursday evening, and I logged on to check my forex trading account. I realised I hadn’t updated my phone number, which had recently changed.

Upon clicking my way into the account settings, I quickly found that this detail could only be changed by a phone call. I grabbed my phone and dialed, answering the usual name and date of birth questions. I was all set to complete this simple administrative task! I was so excited.

“Thanks Lewin, I’ll just need you to answer your security question.”

“Oh no.”

“The question is… Chutney butler?”

“Yes. Yes it is. Uh…”

“…would you like to guess?”

Needless to say, I didn’t get it.

I was beginning to sweat at this point. To their credit, the call center staffer was particularly helpful, highlighting a number of ways to recover access to the account. Mostly involving a stack of identification documents and a visit to the nearest office. If anything, it was a little reassuring that my account details required such effort to change. Perhaps the cellular carriers of the world could learn a thing or two.

In the end, I realised that I could change my security question with my regular password, and then change the phone number with the new security question. All’s well that ends well.

How Do You Deal With Security Questions?

I want to continue taking a high-security approach to my security questions. But as this anecdote shows, you do occasionally need to use them. With that in mind, we’d love to hear your best practices for security questions on accounts that you care about.

Do you store your answers in a similar way to your passwords, using high entropy for the best security? When you are forced to use preselected questions, do you answer honestly or make up nonsensical answers (and how do you remember what you answered from one account to the next)? When given the option to choose your own questions, what is your simple trick that ensures it all makes sense to you at a later date?
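
If you go the high-entropy route, one approach (purely our own illustration) is to treat each answer like a generated password and stash it in your password manager next to the real one. A tiny sketch using Python’s standard secrets module:

```python
import secrets
import string

def gibberish_answer(length=24):
    """Generate a random answer; store it in your password manager alongside the password."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# e.g. the answer to "What is your mother's maiden name?" becomes something like
# "q7VZk2rN0xPbfL9sWm3tUcJd", which is useless to anyone mining your social media.
print(gibberish_answer())
```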

We’d love to hear your best-practice solutions in the comments. While you ponder those questions, one mystery will remain, however — the answer to the question that nobody knows: Chutney butler?

OptionsBleed – Apache Bleeds In Uncommon Configuration

[Hanno Böck] recently uncovered a vulnerability in the Apache webserver, affecting Apache HTTP Server 2.2.x through 2.2.34 and 2.4.x through 2.4.27. This bug only affects Apache servers with a certain configuration in the .htaccess file. Dubbed Optionsbleed, the vulnerability is a use-after-free error in Apache HTTP that causes the webserver to reply with a corrupted Allow header in response to HTTP OPTIONS requests. This can leak pieces of arbitrary memory from the server process that may contain sensitive information. The leaked memory pieces change between requests, so an arbitrary number of memory chunks can be leaked from a vulnerable host.

Unlike the famous Heartbleed bug, Optionsbleed leaks only small chunks of memory and, more importantly, only affects a small number of hosts by default. Nevertheless, shared hosting environments that allow .htaccess file changes can be quite exposed to it, as a rogue .htaccess file from one user can potentially bleed info for the whole server. Scanning the Alexa Top 1 Million revealed 466 hosts with corrupted Allow headers, so it seems the impact is not huge so far.
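
For the curious, a corrupted Allow header is easy to spot from the client side. Here is a small illustrative probe of our own (not the researchers’ scanner, and the junk-detection heuristic is deliberately crude) that fires repeated OPTIONS requests and flags anything that doesn’t look like a plain method list:

```python
import http.client
import re

# Crude heuristic: a healthy Allow header is just a comma-separated list of method
# tokens like "GET,POST,OPTIONS,HEAD"; leaked memory tends to show up as other junk.
METHOD_LIST = re.compile(r"^[A-Z-]+(\s*,\s*[A-Z-]+)*$")

def probe(host, path="/", tries=20):
    """Return any Allow header values that look corrupted."""
    suspicious = []
    for _ in range(tries):
        conn = http.client.HTTPConnection(host, timeout=5)
        conn.request("OPTIONS", path)
        allow = conn.getresponse().getheader("Allow")
        conn.close()
        if allow and not METHOD_LIST.match(allow.strip()):
            suspicious.append(allow)
    return suspicious

# print(probe("vulnerable.example"))   # hypothetical host
```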

The bug appears if a webmaster tries to use the “Limit” directive with an invalid HTTP method. We decided to test this behaviour with a simple .htaccess file like this:

Continue reading “OptionsBleed – Apache Bleeds In Uncommon Configuration”