Most readers will be familiar with the ESP32, Espressif’s dual-core processor with integrated WiFi and Bluetooth. Few of us, though, will have explored all of its features, including its built-in encryption facilities and secure boot capability. With these, a developer can protect their code and keep their devices secure.
That sense of security may now be illusory, though, thanks to [LimitedResults], who has developed a series of attacks on the chip that compromise its crypto core, secure boot, and flash encryption. This opens the door to both arbitrary code execution and firmware extraction on locked-down ESP32 devices.
To achieve all this he used a glitching technique on the device’s power supply, inserting a carefully timed glitch in the rail to coincide with a particular instruction being executed. For those of us who are not experts in this technique, he provides a basic primer along with a description of his home-made glitcher, built around a CMOS switch chip.
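The timing loop at the heart of such a glitcher is conceptually very simple, even if dialing in the parameters is not. Below is a purely illustrative sketch in C, not [LimitedResults]’s actual tool; the GPIO and delay helpers, pin assignments, and timing numbers are hypothetical placeholders.

```c
/* Illustrative voltage-glitcher timing loop (sketch only).
 * Assumes a hypothetical control MCU whose gpio_write() drives the select
 * pin of a CMOS analog switch, momentarily connecting the target's core
 * VDD to a lower "glitch" rail.  gpio_read(), gpio_write() and delay_ns()
 * are placeholder helpers, not a real vendor API. */

#include <stdbool.h>
#include <stdint.h>

#define PIN_TRIGGER 0   /* input: target reset released / boot started         */
#define PIN_GLITCH  1   /* output: CMOS switch select (normal vs. glitch rail) */

extern bool gpio_read(int pin);              /* hypothetical */
extern void gpio_write(int pin, bool level); /* hypothetical */
extern void delay_ns(uint32_t ns);           /* hypothetical busy-wait */

/* Drop the rail once, at a given offset after the trigger, for a given width. */
static void fire_glitch(uint32_t offset_ns, uint32_t width_ns)
{
    while (!gpio_read(PIN_TRIGGER)) { }  /* wait for the target to start booting */
    delay_ns(offset_ns);                 /* offset into the code being attacked  */
    gpio_write(PIN_GLITCH, true);        /* switch VDD to the lower glitch rail  */
    delay_ns(width_ns);                  /* wide enough to corrupt an instruction, */
    gpio_write(PIN_GLITCH, false);       /* narrow enough not to cause a reset     */
}

int main(void)
{
    /* In practice the offset and width are swept, often for hours, until the
     * target misbehaves in a useful way instead of simply crashing. */
    for (uint32_t offset_ns = 100000; offset_ns < 200000; offset_ns += 50)
        fire_glitch(offset_ns, 300);
    return 0;
}
```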
It appears that there is no solution to this attack short of new silicon. It should, however, be borne in mind that it depends upon a specialist attacker with a well-equipped bench, and is thus only likely to be a significant headache for manufacturers. But it undermines a key feature of a major line of microcontrollers, and as such it remains a significant piece of work.
“To achieve all this he used a glitching technique on the device’s power supply, inserting a carefully timed glitch in the rail to coincide with a particular instruction being executed.”
Fine for hardware under the hacker’s control. Remote glitching may be harder.
Physical security is so important. If you can lay hands on it, it’s yours.
Thanks for posting an update :-) Looking forward to comments too,
Cheers
With physical access, talent, determination, and money, most “security” goes out the window.
pick any 2 out of the 4, unfortunately
Skimming LimitedResults’ pages quickly, it’s not standing out to me which particular instruction it is. Just wondering: if you rewrote the affected libs and routines with paranoia code that avoids using it, would the attack be blunted? It would make everything slower, I assume, having to use 10 instructions or so to get around that one.
That’s the beauty of HW glitch pwning. You really don’t know which instruction you’re “disturbing” with the power glitch (or clock glitch, or light glitch on a decapsulated chip) and you don’t even care to know. You only know that an instruction part of the boot ROM code is being partially executed or altered. The exact content of the boot code is unknown and this code is probably on masked ROM, so all ESP32 are affected until this is fixed in silicon (if ever fixed). Rewriting the code is quite useless because any implementation will fail once the right timing for a glitch is found again. Instead the silicon needs to be modified to prevent a power glitch, and then to prevent a clock glitch, then to shield the die against a physical attack, then to save the keys on battery backuped RAM, etc. I doubt it make any sense for EspressIf to invest so much money down that road (making the ESP32 more complex and expensive). Even for companies that use the ESP32 in their products the impact of reading/modifying/cloning their firmware on the ESP32 is limited, at least for those products that rely on a Cloud back-end to work. If the HW and firmware are cloned, the back-end can be modified to detect and reject clones.
It’s not necessarily about clones… Key extraction provides for multiple angles of attack.
If you have universal keys on a physical device you’re selling, you’re already screwed. If it only takes one being hacked for them all to be hacked, you’re just asking for trouble.
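One common way out of that situation is key diversification at provisioning time, so each unit carries a key derived from its own ID and dumping one device reveals nothing about the rest. Here’s a minimal sketch using mbedtls’ HMAC; the 32-byte key sizes and the surrounding provisioning workflow are assumptions.

```c
/* Per-device key diversification sketch (provisioning side), assuming
 * mbedtls is available.  The master key stays on the provisioning
 * server/HSM; each device is given only HMAC(master_key, device_id). */

#include <stddef.h>
#include <stdint.h>
#include "mbedtls/md.h"

int derive_device_key(const uint8_t master_key[32],
                      const uint8_t *device_id, size_t id_len,
                      uint8_t device_key[32])
{
    const mbedtls_md_info_t *sha256 =
        mbedtls_md_info_from_type(MBEDTLS_MD_SHA256);
    if (sha256 == NULL)
        return -1;

    /* device_key = HMAC-SHA256(master_key, device_id) */
    return mbedtls_md_hmac(sha256,
                           master_key, 32,
                           device_id, id_len,
                           device_key);
}
```

The back-end can recompute the same per-device key from the master key and the device ID, so no fleet-wide secret ever needs to sit in every unit’s flash.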
Thanks for the explanation. Yes, I was assuming from the article’s wording that the instruction was known. So, right: it just has to be a reproducible type of glitch that catches the CPU in that state at that time, each time, and does the thing which allows pwnage.
So now I’m wondering: if you made your widgets with enough capacitance between power planes on the PCB to ride out external glitches, and maybe epoxied the chip down in such a fashion that it would be super, super hard to get it off without destroying it, would that mitigate things?
You’re right, a blob of epoxy is probably the most cost-effective way to mitigate this until Espressif eventually takes action. Of course it will not stop a motivated opponent … worst case, the ESP32 can still be exposed from its underside by drilling through the PCB.
Was there a need to remove the caps at all since eventually the CPU VDD was severed from the system VDD? And this exploit wouldn’t be possible if the CPU VDD also had a Brownout Detector like the RTC VDD (assuming the programmers of the target ESP32 enabled the use of said BOD).
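On the brownout-detector point: in ESP-IDF, application firmware can at least see that a brownout reset happened, via esp_reset_reason(). That runs far too late to protect the boot ROM against this attack, but as a rough sketch of reacting to a suspicious reset (the key-wiping step is a placeholder for whatever your application would actually do):

```c
/* App-level reaction to a brownout reset on the ESP32 (ESP-IDF).
 * This executes long after the boot ROM, so it cannot stop the attack
 * described above; it only shows that the brownout detector's verdict
 * is visible to firmware. */

#include "esp_log.h"
#include "esp_system.h"

static const char *TAG = "bod";

void check_brownout_on_boot(void)
{
    if (esp_reset_reason() == ESP_RST_BROWNOUT) {
        ESP_LOGW(TAG, "last reset was a brownout -- treating state as suspect");
        /* e.g. drop cached session keys, force re-authentication, etc. */
    }
}
```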
“however, it should be borne in mind that it’s something that depends upon a specialist hacker with a well-equipped bench”
Oh, this carefully avoids the question of why someone would be willing to work so hard to ‘hack into’ my wireless weather station in the back yard?
What applications are these ESP32 chips being used in where this is really a problem? Banking? Web hosting? Medical device control? Missile control systems?
As previous commenter noted, once an attacker has physical access to a device most security measures fail.
I can think of at least two cases where this kind of attack could matter.
you build a fancy doodad, thousands of hours of work…someone clones your PCB using visual inspection and then clones your firmware, and now they’re undercutting your sales.
the other is, you build a million doodads, or a billion, and someone rips the firmware and now they know your default password, or they find the hidden vulnerability, and all billion of them are now vulnerable.
On the other side of the coin, someone makes a billion doodads, decides to never make an update for it, it’s broken, they pull the plug and move on. Leaving you with a shitty doodad that will end up as landfill.
At least then, hackers (and perhaps later the general public) can pick up where the manufacturer left off.
I hear ya, bit of a two-edged sword though: if it’s too insecure there won’t be billions of doodads to hack for cheap because manufacturers won’t use it; if it’s too secure, they may cost more and won’t be as hackable.
It’s funny how poor security seems to wind up exposing evil or preventing waste just as often as it actually breaches innocent people’s data
Well, suppose someone grabs the seemingly harmless IoT device that was connected to your network, does this hack, and extracts not only your SSID and password, but credentials for any servers or services the device had access to.
Revoke the keys when the device times out
What microscope is that?
+1 ???
That is a 1080p HDMI camera with a ?0x zoom lens and a ring light.
They start looking like this before getting fancy:
https://www.aliexpress.com/item/4000104702543.html
So what? How is this even an issue?
For a hobbyist it isn’t. For a manufacturer it probably isn’t either, because your firmware is probably not that valuable, and you’re probably using public keys if security matters anyway.
If you have some super trade secret tech on there you might care, and the hack is definitely academically interesting.
Very interesting stuff, and certainly a consideration for many doing commercial development.
But I dunno, I’d be flattered if anyone considered one of my little ESP projects worthy of such efforts. For a steak dinner, I’d probably hand over the commented source code and a programmed device.
Agreeing with Elliott Williams – the key is that you program IoT devices like these to do the absolute minimum required to do a simple job, to minimise harm if they fail or get hacked.
You know, I know this steak doesn’t exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious.
Glitching was, and probably still is, the major attack vector for the cards holding the keys for encrypted satellite TV. You can’t really make something that’s totally glitch-proof. Even destructive techniques that destroy the chip or erase the keys can eventually be bypassed. On the other hand, it is not probable that someone can come up with a pocket gadget that will do something nastier than a crash. The solution here is to think very carefully about what you trust to these devices: it can be your info, it can be your money, but one day it could be your life.
Could using a dedicated crypto accelerator IC mitigate this? Just from superficial knowledge, but I’m thinking about something like the ATECC608A, with both key storage and built-in functionality for hashing, encrypting, and signing, all in hardened silicon that would theoretically be way more difficult to exploit than whatever cut-corners implementations are going on here.
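To make the secure-element idea concrete: the point is that the host only ever sees digests and signatures, never the private key. A rough sketch using Microchip’s CryptoAuthLib for the ATECC608A follows; the slot assignment is an assumption, and the essential provisioning and locking of the device is a separate step not shown here.

```c
/* Off-loading ECDSA signing to an ATECC608A via CryptoAuthLib, so the
 * private key never exists in ESP32 flash or RAM.  Slot 0 holding a
 * locked P-256 key is an assumption for this sketch. */

#include <stdint.h>
#include "cryptoauthlib.h"

#define SIGNING_KEY_SLOT 0

int sign_digest(const uint8_t digest[32], uint8_t signature[64])
{
    /* Default I2C interface configuration shipped with CryptoAuthLib. */
    if (atcab_init(&cfg_ateccx08a_i2c_default) != ATCA_SUCCESS)
        return -1;

    /* Only the 64-byte signature ever crosses the bus; the key stays
     * inside the secure element. */
    if (atcab_sign(SIGNING_KEY_SLOT, digest, signature) != ATCA_SUCCESS)
        return -1;

    return 0;
}
```

That said, an external secure element only moves the key material into silicon that is specifically hardened against this class of attack; it doesn’t make the rest of the system immune to tampering.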
The “cut-corners implementations” you refer to are the carefully designed and free security implementations of the ESP32 chip itself.
How likely is it that this could be a jumping off point to develop a similar attack against the latest ARM or Apple processors?
Seeing as this attack requires physical access to the device, I’d expect a malicious actor to use this on a mobile device of some kind? I don’t think there are any portable devices using this chip, are there? I quickly Googled it and couldn’t find any offhand.
I can see how if this could be adapted to attack military gear or hardened secure devices it could get real bad, real fast.
I think you’re totally missing the point about what glitching is. It is not a device, it’s a technique: it’s how you can make something (in this context a chip) malfunction in a way that it does what you want it to do instead of what it should do. You won’t see your average Joe hacker using glitching to extract some private keys from a secure chip… unless the chip that should be secure, in fact, is not. Glitching requires insane levels of knowledge and technical expertise, so the prize must be good enough to pay for that. We’re talking about things like bypassing the control chip of a nuclear missile, not making an ATM give away a few million…