If you have more than a few bank cards, door-entry keycodes, or other small numeric passwords to remember, it eventually gets to be a hassle. The worst, for me, is a bank card for a business account that I use once in a blue moon. I probably used it eight times in five years, and then they gave me a new card with a new PIN. Sigh.
How would a normal person cope with a proliferation of PINs? They'd write down the numbers on a piece of paper and keep it in their wallet. We all know how that ends, right? A lost wallet and multiple empty bank accounts. How would a hacker handle it? Write each number down on the card itself, but encrypted, naturally, with the only provably unbreakable encryption scheme out there: the one-time pad (OTP).
The OTP is an odd duck among encryption methods: it's meant to be decrypted in your head, yet as long as the secret key remains safe, it's rock solid. If you've ever tried to code up the S-boxes and all the adding, shifting, and mixing that goes on inside a normal encryption method, OTPs are refreshingly simple. The tradeoff is a "long" key, but an OTP is absolutely perfect for encrypting your PINs.
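To make that concrete, here's a minimal sketch in Python (the function names are ours) of a digit-wise OTP on a four-digit PIN: encrypting adds each key digit to the matching PIN digit modulo 10, and decrypting subtracts it back out.

```python
import secrets

def make_key(n_digits=4):
    """One truly random key digit per PIN digit."""
    return [secrets.randbelow(10) for _ in range(n_digits)]

def encrypt(pin, key):
    """Add key digits to PIN digits modulo 10; no carries."""
    return [(p + k) % 10 for p, k in zip(pin, key)]

def decrypt(cipher, key):
    """Subtract the key digits modulo 10 to recover the PIN."""
    return [(c - k) % 10 for c, k in zip(cipher, key)]

pin = [4, 2, 7, 1]          # the PIN the bank gave you
key = make_key()            # the secret you keep in your head
cipher = encrypt(pin, key)  # the number you write on the card
assert decrypt(cipher, key) == pin
```

Because there are no carries, the subtraction is easy to do in your head while you're standing at the ATM.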
The first part of this article may look like the friendly "life-hack" pablum you'll get elsewhere, but don't despair: it's also a back-door introduction to the OTP. The second half dives into the one-time pad with some deep crypto intuition, some friendly math, and hopefully a convincing argument that writing down your encrypted PINs is the right thing to do. Along the way, I list the three things you can do wrong when implementing an OTP. (And none of them will shock you!) But in the end, my PIN encryption solution will break one of the three and nonetheless remain sound. Curious yet? Read on.
Here’s a puzzler for you: how do you securely send data from one airgapped computer to another? Sending it over a network is right out, because that’s the entire point of an airgap. A sneakernet is inherently insecure, and you shouldn’t overestimate the security of a station wagon filled with tapes. For his Hackaday Prize entry, [Nick Sayer] has a possible solution. It’s the Sankara Stones from Indiana Jones and the Temple of Doom, or a USB card reader that requires two cards. Either way, it’s an interesting experiment in physical security for data.
The idea behind the Orthrus, a secure RAID USB storage device, is to pair two SD cards. With both cards inserted, you can read and write to this RAID drive without restriction. With only one, the data is irretrievable, so the cards are safe in transit if shipped separately.
The design for this device is based around the ATxmega32A4U. It's pretty much what you would expect from an AVR, but this one has a built-in full-speed USB interface and hardware AES support. The USB is great for presenting two SD cards as a single drive, and the AES engine is used to encrypt the data with a key that is stored in a key storage block on each card.
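[Nick]'s exact key-derivation scheme is his own, but the core idea, that either card alone tells you nothing, can be sketched with a simple XOR split of the volume key (a hypothetical illustration, not the Orthrus firmware):

```python
import os

def split_key(key: bytes):
    """Split a key into two shares; each share alone is indistinguishable from noise."""
    share_a = os.urandom(len(key))                        # random pad, stored on card A
    share_b = bytes(k ^ a for k, a in zip(key, share_a))  # key XOR pad, stored on card B
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the shares back together to recover the key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

volume_key = os.urandom(16)         # 128-bit AES volume key
a, b = split_key(volume_key)
assert combine(a, b) == volume_key  # both cards present: key recovered
```

Because share A is uniformly random and share B is the key XORed with it, an attacker holding a single card has, information-theoretically, nothing.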
For the intended use case, it’s a good design. You can only get the data off of these SD cards if you have both of them. However, [Nick] is well aware of Schneier’s Law — anyone can design a cryptosystem that they themselves can’t break. That’s why he’s looking for volunteers to crack the Orthrus. It’s an interesting challenge, and one we’d love to see broken.
This may not seem like a big deal, but imagine a scenario where an attacker intentionally builds invisible defects into thousands of cars without the manufacturer even knowing. Just about everything in a car these days is built using robotic arms. The chassis could be welded too weak, or the engine could be machined with flaws that make it fail long before its expected lifespan. Even your brake discs could have manufacturing defects introduced by a hacker, causing them to shatter under heavy braking. The Forward-looking Threat Research (FTR) team decided to check the feasibility of such attacks, and what they found was shocking. Tests were performed in a laboratory on a real working industrial robot, and they came up with five different attack methods.
Attack 1: Altering the Controller’s Parameters
The attacker alters the control system so the robot moves unexpectedly or inaccurately, at the attacker’s will.
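As a toy illustration (ours, not the FTR team's attack code), here's how quietly scaling a single loop gain can leave a robot joint far short of its commanded position while the program itself looks untouched:

```python
def run_joint(gain, target=90.0, steps=50):
    """Crude proportional position loop for one robot joint (toy model)."""
    angle = 0.0
    for _ in range(steps):
        angle += gain * (target - angle)  # close a fraction of the remaining error
    return angle

print(run_joint(gain=0.2))    # healthy controller: settles at ~90.0 degrees
print(run_joint(gain=0.002))  # tampered gain: ends up near 8.6 degrees, far off target
```

On a welding or machining line, an error like that is exactly the kind of invisible defect described above.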
Why are these robots even connected? As automated factories become more complex, maintaining all of the systems becomes a much larger task. The industry is moving toward more connectivity to monitor the performance of every machine on the factory floor, tracking service lifetimes and alerting when preventive maintenance is necessary. This sounds great for its intended use, but as with all connected devices, that connectivity introduces vulnerabilities. It becomes especially concerning when you consider that equipment in service often doesn't get crucial security updates, for any number of reasons (ignorance, constant use, etc.).
For the rest of the attack vectors and more detailed info, you should refer to the report (PDF), which is quite an interesting read. The video below also gives some insight into how these types of attacks might affect the manufacturing process.
Betteridge’s Law of Headlines states, “Any headline that ends in a question mark can be answered by the word no.” This law remains unassailable. However, recent claims have called into question a black box hidden deep inside every Intel chipset produced in the last decade.
Yesterday, on the Semiaccurate blog, [Charlie Demerjian] announced a remote exploit for the Intel Management Engine (ME). This exploit covers every Intel platform with Active Management Technology (AMT) shipped since 2008. This is a small percentage of all systems running Intel chipsets, and even then the remote exploit will only work if AMT is enabled. [Demerjian] also announced the existence of a local exploit.
Intel’s ME and AMT Explained
In 2005, Intel began including Active Management Technology in Ethernet controllers. This system is effectively a firewall and a tool used for provisioning laptops and desktops in a corporate environment. In 2008, a new coprocessor, the Management Engine, was added. The ME is a processor with complete access to all of a computer's memory, network connections, and every peripheral connected to the machine. It runs even when the computer is hibernating and can intercept TCP/IP traffic. The ME can be used to boot a computer over a network, install a new OS, and disable a PC if it fails to check in with a server at some predetermined interval. From a security standpoint, if you own the Management Engine, you own the computer and all the data within it.
The Management Engine and Active Management Technology have become a focus for security researchers. The researcher who finds an exploit allowing an attacker access to the ME will become the greatest researcher of the decade, and when that exploit is discovered, a billion dollars in Intel stock will evaporate. Fortunately, or unfortunately, depending on how you look at it, the Management Engine is a closely guarded secret: it's based on a strange architecture, and the on-chip ROM for the ME is a black box. Nothing short of corporate espionage or looking at the pattern of bits in the silicon will tell you anything. Intel's Management Engine and Active Management Technology are secure through obscurity, yes, but so far they have stayed secure for a decade while being a target for the best researchers on the planet.
In yesterday’s blog post, [Demerjian] reported the existence of two exploits. The first is a remotely exploitable security hole in the ME firmware. This exploit affects every Intel chipset made in the last ten years with Active Management Technology on board and enabled. It is important to note this remote exploit only affects a small percentage of total systems.
The second exploit reported by the Semiaccurate blog is a local exploit that does not require AMT to be active, but does require Intel's Local Manageability Service (LMS) to be running. From the few details [Demerjian] shared, the local exploit also affects a decade's worth of Intel chipsets, but not remotely. This is simply another way that physical access equals root access: an evil maid scenario.
Should You Worry?
The biggest network security threat today would be a remote code execution exploit for Intel's Management Engine. Every computer with an Intel chipset produced in the last decade would be vulnerable to such an exploit, and RCE would give an attacker full control over every aspect of a system. If you want a metaphor: we are dinosaurs, and an Intel ME exploit is an asteroid hurtling toward the Yucatán peninsula.
However, [Demerjian] gives no details of the exploit (rightly so), and Intel has released an advisory stating, “This vulnerability does not exist on Intel-based consumer PCs.” According to Intel, this exploit will only affect Intel systems that ship with AMT, and have AMT enabled. The local exploit only works if a system is running Intel’s LMS.
This exploit — no matter what it may be, as there is no proof of concept yet — only works if you’re using Intel’s Management Engine and Active Management Technology as intended. That is, if an IT guru can reinstall Windows on your laptop remotely, this exploit applies to you. If you’ve never heard of this capability, you’re probably fine.
Still, with an exploit of such magnitude, it's wise to check for patches for your system. If your system does not have Active Management Technology, you're fine. If your system has AMT but you've never turned it on, you're fine. If you're not running LMS, you're fine. Intel's ME can also be neutralized if you're using a sufficiently old chipset. This isn't the end of the world, but it does give the security experts who have been panning Intel's technology for the last few years the opportunity to say, 'told ya so.'
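If you want to check a machine yourself, provisioned AMT listens on a handful of well-known TCP ports. Here's a quick Python probe; the address is a placeholder, and since the ME answers these ports below the operating system, scan from another box on the network rather than from the suspect machine itself:

```python
import socket

# TCP ports Intel documents for AMT when it is provisioned:
# 16992 (HTTP), 16993 (HTTPS), 16994/16995 (redirection), 623/664 (management).
AMT_PORTS = [623, 664, 16992, 16993, 16994, 16995]

def check_amt(host):
    """Report which AMT-associated TCP ports accept a connection on `host`."""
    for port in AMT_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            if s.connect_ex((host, port)) == 0:
                print(f"{host}:{port} open -- AMT may be provisioned")

check_amt("192.168.1.100")  # placeholder address; scan your own machines only
```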
[Yingtao Zeng], [Qing Yang], and [Jun Li], a.k.a. the [UnicornTeam], developed the cheapest way so far to hack a passive keyless entry system, as found on some cars: around $22 in parts, give or take a buck. But that's not all: they managed to increase the previously known effective range of this type of attack from 100 m to around 320 m. They gave a talk at HITB Amsterdam a couple of weeks ago and showed their results.
The attack, in its essence, is not new; it's basically just a range extender for the keyfob. One radio stays near the car, the other near the car key, and the two radios relay the signals coming from the car to the keyfob and vice-versa. This version of the hack stands out in that the [UnicornTeam] reverse engineered and decoded the keyless entry system signals, produced by NXP, so they can send the decoded signals over any channel of their choice. The only constraint, from what we could tell, is the transmission timeout: it all has to happen within 27 ms. You could almost pull this off over the Internet instead of radio.
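Some back-of-the-envelope math shows why: at the speed of light, a 27 ms round-trip budget allows a relay link thousands of kilometers long, so it's the latency of the relaying hardware, not radio propagation, that keeps the attack to a few hundred meters.

```python
C = 299_792_458  # speed of light in m/s

def max_relay_distance_km(timeout_s, processing_s=0.0):
    """One-way distance a relay link could span before the reply arrives too late.
    The signal must make a round trip, so the time budget is divided by two."""
    return (timeout_s - processing_s) * C / 2 / 1000

print(f"{max_relay_distance_km(27e-3):.0f} km")  # prints ~4047 km for a 27 ms budget
```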
A suggested fix from the researchers is to decrease this 27 ms timeout. If it is short enough, at least the distance for these types of attacks is reduced. Even if that could eventually mitigate or reduce the impact of an attack on new cars, old cars are still at risk. We suggest that the passive keyless system is broken from the get-go: allowing the keyfob to open and start your car without any user interaction is asking for it. Are car drivers really so lazy that they can’t press a button to unlock their car? Anyway, if you’re stuck with one of these systems, it looks like the only sure fallback is the tinfoil hat. For the keyfob, of course.
[Wikileaks] has just published the CIA's engineering notes for the "Weeping Angel" Samsung TV exploit. This dump includes information for field agents on how to exploit Samsung's F-series TVs, turning them into remotely controlled spy microphones that can send audio back to HQ.
An attacker needs physical access to exploit the smart TV, because they need to insert a USB drive and press keys on the remote to update the firmware, so this isn't something you're likely to suffer personally. The exploit works by pretending to turn off the TV when the user puts it into standby. In reality, it sits there recording all the audio it can, then sends it back to the attacker once it comes out of "fake off mode".
It is still unclear if this type of vulnerability could be fully patched without a product recall, although firmware version 1118+ eliminates the USB installation method.
The hack comes along with a few bugs that most people probably wouldn’t notice, but we are willing to bet that your average Hackaday reader would. For instance, a blue LED stays on during “fake off mode” and the Samsung and SmartHub logos don’t appear when you turn the TV back on. The leaked document is from 2014, though, so maybe they’ve “fixed” them by now.
Do you own a Samsung F-series TV? If you do, we wouldn’t worry too much about it unless you are tailed by spies on a regular basis. Don’t trust the TV repairman!
We are all used to malicious software: computers and operating systems compromised by viruses, worms, or Trojans. It has become a fact of life, and a whole industry of virus-checking software exists to help users defend against it.
Underlying our concerns about malicious software is an assumption that the hardware is inviolate: the computer itself cannot be inherently compromised. It's a false assumption though, as it is perfectly possible for a processor or other integrated circuit to have a malicious function included in its fabrication. You might think that such functions would not be included by a reputable chip manufacturer, and you'd be right. Unfortunately, because the high cost of chip fabrication makes the semiconductor industry a web of third-party fabrication houses, there are many opportunities for extra components to be inserted before the chips are manufactured. University of Michigan researchers have produced a paper on the subject (PDF) detailing a particularly clever attack on a processor that minimizes the number of components required through clever use of a FET gate in a capacitive charge pump.
On-chip backdoors have to be physically stealthy, difficult to trigger accidentally, and easy to trigger by those in the know. Their designers will find a line that changes logic state only rarely and attach a counter to it, such that toggling the line a number of times that would never happen by accident triggers the exploit. In the past these counters have been built from traditional logic circuitry: an effective approach, but one that leaves a significant footprint of extra components on the chip for which space must be found, and which can become obvious when the chip is inspected under a microscope.
The University of Michigan backdoor is not a counter but an analog charge pump. Every time its input is toggled, a small amount of charge is stored on the capacitor formed by the gate of a transistor, and eventually its voltage reaches a logic level at which an attack circuit is triggered. They attached it to the divide-by-zero flag line of an OR1200 open-source processor, from which they could easily trigger it by repeatedly dividing by zero. The beauty of this circuit is both that it uses very few components, so it can hide more easily, and that the charge leaks away over time, so it cannot persist in a state where it is likely to be triggered accidentally.
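As a rough model (the component values are ours, not from the paper), the trigger behaves like a leaky integrator: each toggle injects a fixed sliver of charge, leakage drains it between toggles, and only a rapid burst of toggles can push the gate voltage past the threshold.

```python
import math

def simulate(toggles, dt=1e-6, tau=5e-3, dv=0.05, vdd=1.0, vth=0.6):
    """Leaky-integrator model: each toggle adds dv to the gate voltage (capped
    at vdd), which then decays with time constant tau over the gap dt between
    toggles. Returns the toggle count at which the trigger fires, or None."""
    v = 0.0
    for t in range(toggles):
        v = min(vdd, v + dv)      # charge injected by one toggle
        v *= math.exp(-dt / tau)  # leakage before the next toggle
        if v >= vth:
            return t + 1
    return None

print(simulate(100))           # rapid toggling: fires after 13 toggles
print(simulate(100, dt=0.05))  # slow, sporadic toggling: charge leaks away, never fires
```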
The best hardware hacks are those that are simple, novel, and push a device into doing something it would not otherwise have done. This one has all that, for which we take our hats off to the Michigan team.