36C3: Open Source Is Insufficient To Solve Trust Problems In Hardware

With open source software, we’ve grown accustomed to a certain level of trust that whatever we run on our computers is actually what we expect it to be. Thanks to hashing and public key signatures at various points in the development and deployment cycle, it’s hard for a third party to modify source code or executables without us being able to easily spot it, even if it travels through untrustworthy channels.

Unfortunately, when it comes to open source hardware, the number of steps and parties involved that are out of our control before we have a final product — production, logistics, distribution, even the customer — makes it substantially more difficult to achieve the same peace of mind. To make things worse, to actually validate the hardware at the chip level, you’d ultimately have to destroy it.

In his talk at this year’s 36C3, [bunnie] gave a detailed look at several attack vectors we could face during manufacturing. Skipping the obvious ones like adding or substituting components, he focuses on highly ambitious and hard-to-detect modifications inside an IC’s package with wirebonded or through-silicon via (TSV) implants, down to modifying the netlist or mask of the integrated circuit itself. And these aren’t theoretical “what if” scenarios, but actual, feasible options — of course, some of them come with a certain price tag, but in the end, with the right motivation, money is only a detail.

Continue reading “36C3: Open Source Is Insufficient To Solve Trust Problems In Hardware”

Amazon Ring: Neighbors Leaking Data On Neighbors

For a while now, a series of stories has been circulating about Amazon’s Ring doorbell, an Internet-connected camera and entry system that lets users monitor and even interact with visitors and delivery people at their doors. The adverts feature improbable encounters with would-be crooks foiled by the IoT-equipped homeowner, but the stories reveal a much darker side. From reports of unhindered law enforcement access to privately-held devices, through mass releases of compromised Ring account details, to attackers gaining access to children via compromised cameras, it’s fair to say that there’s much to be concerned about.

One cause for concern has been the location data exposed by the associated Amazon Neighbors crowd-sourced local crime paranoia app. For those of us who don’t live and breathe information security, there is an easy-to-understand Twitter breakdown of its vulnerabilities from [Elliot Alderson], starting with the app itself and proceeding from there into compromising Ring accounts by finding their passwords. We find that supposedly anonymized information in the app sits atop an API response containing full details, that there’s no defense against brute-forcing a Ring password, and that a tasty list of API and staging URLs is embedded in the app for all to see. Given all that, there’s little wonder the system has proven so vulnerable.
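As a rough illustration of the pattern at play (a sketch only: the endpoint and field names below are invented, not Ring’s actual API), the core problem is an app that blurs locations on screen while the JSON underneath still hands precise coordinates to anyone who asks:

# Hypothetical sketch of the "anonymized on screen, precise in the API" pattern.
# The URL and field names are placeholders, not Ring's real endpoints.
import requests

resp = requests.get(
    "https://neighbors.example.invalid/v1/alerts",
    params={"lat": 40.7128, "lon": -74.0060, "radius_km": 5},
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)

for alert in resp.json().get("alerts", []):
    # The app only renders a coarse, offset area...
    print("Shown in the app:", alert["approximate_area"])
    # ...but the raw response still carries exact device coordinates.
    print("In the raw JSON:", alert["latitude"], alert["longitude"])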

As traditional appliance makers have struggled to bring Internet connectivity to their products, there have been a few stories of woeful security baked into millions of homes. A defense could be made that a company with roots outside the Internet can be forgiven for such a gaffe, but in the case of Amazon, whose history has tracked that of mass Web adoption and whose infrastructure lies behind so many of the services we trust, this level of lax security is unforgivable. Hackaday readers will be aware of the security issues behind so-called “smart” devices, but to the vast majority of customers they are simply technological wonders that are finally delivering a Jetsons-style future. If some good comes of these Ring stories, it might be that those consumers finally begin to wake up to IoT security, and use their new-found knowledge to demand better.

Header image: Ring [CC BY-SA 4.0]

Kerry Scharfglass Secures Your IoT Things

We’ve all seen the IoT device security trainwrecks: those gadgets that fail so spectacularly that the comment section lights up with calls of “were they even thinking about the most basic security?” No, they probably weren’t. Are you?

Hackaday contributor and all-around good guy Kerry Scharfglass thinks about basic security for a living, and his talk is pitched at the newcomer to device security. (Embedded below.) Of course, “security” isn’t a one-size-fits-all proposition; you need to think about which threats you’re worried about, which you can ignore, and defend against what matters. But if you’ve never worked through such an exercise, you’re in for a treat here. You need to think like a maker, think like a breaker, and, surprisingly, think like an accountant in defining what constitutes acceptable risk. Continue reading “Kerry Scharfglass Secures Your IoT Things”

This Week In Security: Unicode, Truecrypt, And NPM Vulnerabilities

Unicode, the wonderful extension to ASCII that gives us gems like “✈”, “⌨”, and “☕”, has had some unexpected security ramifications. The most common problems with Unicode are visual security issues, like confusion between similar-looking characters. For example, the English “M” (U+004D) is indistinguishable from the Cyrillic “М” (U+041C). Can you tell the difference between IBM.com and IBМ.com?
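If you want to convince yourself that those two domains really are different strings, a couple of lines of Python (our addition here, not part of the original research) make the point by comparing code points:

# "IBM.com" in plain Latin letters versus the same name with a Cyrillic Em.
latin = "IBM.com"
mixed = "IB\u041C.com"  # U+041C CYRILLIC CAPITAL LETTER EM

print(latin == mixed)                    # False, despite looking identical
print([hex(ord(c)) for c in latin[:3]])  # ['0x49', '0x42', '0x4d']
print([hex(ord(c)) for c in mixed[:3]])  # ['0x49', '0x42', '0x41c']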

This bug, discovered by [John Gracey], turns the common problem on its head. Properly referred to as a case mapping collision, it’s the story of different Unicode characters getting mapped to the same uppercase or lowercase equivalent.

'ß'.toUpperCase() === 'SS'.toUpperCase() // true
// Note the Turkish dotless i
'John@Gıthub.com'.toUpperCase() === 'John@Github.com'.toUpperCase() // true

GitHub stores all email addresses in their lowercase form. When a user requests a password reset, GitHub’s logic worked like this: take the email address that requested the reset, convert it to lower case, and look up the account that uses the converted address. That by itself wouldn’t be a problem, but the reset is then sent to the email address as it was submitted, not the one on file. In retrospect this is an obvious flaw, but without Unicode and the possibility of a case mapping collision, it would be a perfectly safe practice.
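A minimal sketch of that flawed flow in Python (the account store and helpers below are made up for illustration, not GitHub’s actual code) shows how the looked-up address and the mailed-to address can diverge. Here the attacker uses U+212A, the Kelvin sign, which lowercases to an ordinary ASCII “k”:

# Hypothetical sketch of the vulnerable reset flow; not GitHub's real code.
accounts = {"victim@kit.example": "victim-account"}  # emails stored lowercase

def send_reset_email(to: str, account: str) -> None:
    print(f"Reset link for {account} mailed to {to!r}")

def request_password_reset(submitted: str) -> None:
    account = accounts.get(submitted.lower())  # look up the case-mapped address...
    if account:
        send_reset_email(to=submitted, account=account)  # ...but mail the raw input

# "\u212Ait.example" is a different domain an attacker could control,
# yet it lowercases to the victim's domain:
request_password_reset("victim@\u212Ait.example")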

This flaw seems to have been fixed quite some time ago, but was only recently disclosed. It’s also a novel class of Unicode problem that we haven’t covered before. Interestingly, my research turned up an almost identical problem at Spotify, back in 2013.
Continue reading “This Week In Security: Unicode, Truecrypt, And NPM Vulnerabilities”

This Week In Security: VPNs, Patch Tuesday, And Plundervolt

An issue in the way Unix-like systems handle virtual private network traffic was disclosed recently, where an attacker could potentially hijack a TCP stream, even though that stream is inside the VPN. The attack affects OpenVPN, WireGuard, and even IPsec VPNs. How was this possible? Unix systems support all manner of different network scenarios, and oftentimes a misconfiguration can lead to problems. Here, packets sent to the VPN’s IP address are processed and responded to, even though they are coming in over a different interface.
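The first thing an attacker on the same local network needs is the client’s virtual tunnel address, and this response behavior is what leaks it. Here is a rough scapy sketch of that probing step (the MAC address, interface name, port, and subnet are placeholders; the real attack infers hits from the size and timing of the encrypted replies rather than expecting a plaintext answer):

# Hypothetical sketch of probing for a VPN client's virtual IP.
# Addresses, interface, and subnet are placeholders, not from the disclosure.
from scapy.all import Ether, IP, TCP, srp1

VICTIM_MAC = "aa:bb:cc:dd:ee:ff"  # victim's physical NIC on the shared network
IFACE = "wlan0"

def probe(candidate_ip: str) -> bool:
    # Send a TCP SYN for the candidate tunnel address to the victim's real NIC.
    pkt = Ether(dst=VICTIM_MAC) / IP(dst=candidate_ip) / TCP(dport=443, flags="S")
    reply = srp1(pkt, iface=IFACE, timeout=1, verbose=0)
    # A vulnerable host answers for its tunnel address even though the probe
    # arrived on the wrong interface.
    return reply is not None

for last_octet in range(1, 255):
    candidate = f"10.8.0.{last_octet}"  # a common OpenVPN virtual subnet
    if probe(candidate):
        print("Likely virtual IP:", candidate)
        break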

The attack initially sounds implausible, as an attacker has to know the virtual IP address of the VPN client, the remote IP address of an active TCP connection, and the sequence and ACK numbers of that connection. That’s a lot of information, but an attacker can figure it out one piece at a time, which makes this a plausible attack. Continue reading “This Week In Security: VPNs, Patch Tuesday, And Plundervolt”

Generating Random Numbers With A Fish Tank

While working towards his Computing and Information Systems degree at the University of London, [Jason Fenech] submitted an interesting proposal for generating random numbers using nothing more exotic than an aquarium and a sufficiently high-resolution camera. Not only does his BubbleRNG make a rather relaxing sound while in operation, but according to tools such as ENT, NIST-STS, and DieHard, it appears to be a source of true randomness.

If you want to build your own BubbleRNG, all you need is a tank of water and some air pumps to generate the bubbles. A webcam looking down on the surface of the water captures the chaos that ensues when the columns of bubbles generated by each pump collide. In the video after the break, [Jason] uses two pumps, but considering they’re cheaper than lava lamps, we’d probably chuck a few more into the mix. To be on the safe side, he mentions that the placement and number of pumps should be arbitrary and not repeated in subsequent installations.

To turn this tiny maelstrom into a source of random numbers, OpenCV is first used to identify the bubbles in the video stream that are between a user-supplied minimum and maximum radius. The software then captures the X and Y coordinates of each bubble, and the resulting values are shuffled around and XOR’d until a stream of random numbers comes out the other end. What you do with this cheap source of infinite improbability is, of course, up to you.
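A minimal sketch of that pipeline in Python and OpenCV might look like the following; the detection parameters are guesses, and where [Jason] shuffles and XORs we simply hash the pool, so treat it as an outline rather than his implementation:

# Rough outline of the BubbleRNG idea: detect bubbles, harvest coordinates, mix.
# Parameter values are guesses, not taken from the original project.
import cv2
import hashlib
import numpy as np

cap = cv2.VideoCapture(0)          # webcam looking down at the water surface
pool = bytearray()

while len(pool) < 1024:            # gather 1 KiB of raw coordinate material
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
                               param1=100, param2=30, minRadius=3, maxRadius=30)
    if circles is None:
        continue
    for x, y, _r in np.round(circles[0]).astype(int):
        pool.append((x ^ y) & 0xFF)  # fold each bubble's position into the pool

cap.release()
random_bytes = hashlib.sha256(bytes(pool)).digest()  # whiten the collected bits
print(random_bytes.hex())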

While this project has been floating around (no pun intended) the Internet for a few years now, it seems to have gone largely overlooked, and was only just brought to our attention thanks to a tip from one of our illustrious readers. An excellent reminder that if you see something interesting out there, we’d love to hear about it.

Continue reading “Generating Random Numbers With A Fish Tank”

This Week In Security: Tegra Bootjacking, Leaking SSH, And StrandHogg

CVE-2019-5700 is a vulnerability in the Nvidia Tegra bootloader, discovered by [Ryan Grachek], and breaking first here at Hackaday. To understand the vulnerability, one first has to understand a bit about the Tegra boot process. When the device is powered on, an irom firmware loads the next stage of the boot process from the device’s flash memory and validates the signature on that binary. As an aside, we’ve covered a similar vulnerability in that irom code, called selfblow.

On Tegra T4 devices, irom loads a single bootloader.bin, which in turn boots the system image. The K1 boot stack uses an additional bootloader stage, nvtboot, which loads the secure OS kernel before handing control to bootloader.bin. Later devices add further stages, but that isn’t important for understanding this. The vulnerability uses an Android boot image, and the magic happens in the header. Part of this boot image is an optional second stage bootloader, which is very rarely used in practice. The header of this boot image specifies the size in bytes of each element, as well as the memory location to load that element to. What [Ryan] realized is that while it’s usually ignored, the information about the second stage bootloader is honored by the official Nvidia bootloader.bin, but neither the size nor the memory location is sanity checked. The images are copied to their final positions before the cryptographic verification happens. As a result, an Android image can overwrite the running bootloader code (see the header-parsing sketch below). Continue reading “This Week In Security: Tegra Bootjacking, Leaking SSH, And StrandHogg”
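To see where those fields live, here is a short Python sketch that reads the relevant part of a standard Android boot image header (v0 layout); the file name is a placeholder, and the two printed values are exactly what a crafted image gets to choose:

# Sketch: pull the second-stage size and load address out of an Android boot
# image header (v0 layout: magic, then kernel/ramdisk/second/tags/page fields).
import struct

with open("boot.img", "rb") as f:   # placeholder path
    header = f.read(8 + 8 * 4)

magic = header[:8]                  # should be b"ANDROID!"
(kernel_size, kernel_addr,
 ramdisk_size, ramdisk_addr,
 second_size, second_addr,
 tags_addr, page_size) = struct.unpack("<8I", header[8:])

print("magic:", magic)
# A crafted image controls both values below; the vulnerable bootloader copies
# second_size bytes to second_addr before any signature check, so pointing
# second_addr at the bootloader's own code overwrites it.
print(f"second stage: {second_size:#x} bytes -> load address {second_addr:#x}")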