This Week In Security: TPM And BootGuard, Drones, And Coverups

Full disk encryption is the go-to solution for hardening a laptop against the worst-case scenario of physical access. One way that encryption can be managed is through a Trusted Platform Module (TPM), a chip on the motherboard that manages the disk encryption key, and only hands it over for boot after the user has authenticated. We’ve seen some clever tricks deployed against these discrete TPMs, like sniffing the data going over the physical traces. So in theory, an integrated TPM might be more secure. Such a technique does exist, going by the name fTPM, or firmware TPM. It uses a Trusted Execution Environment, a TEE, to store and run the TPM code. And there’s another clever attack against that concept (PDF).

It’s chip glitching via a voltage fault. This particular attack works against AMD processors, and the voltage fault is triggered by injecting commands into the Serial Voltage Identification Interface 2.0 (SVI2). Momentarily dropping the voltage to the AMD Secure Processor (AMD-SP) can cause a key verification step to succeed even against an untrusted key, bypassing the need for board firmware signed with the AMD Root Key (ARK). That’s not a simple process, and pulling it off takes about $200 of gear and about three hours. This exposes the CPU-unique seed, the board NVRAM, and all the protected TPM objects.

So how bad is this in the real world? If your disk encryption relies only on an fTPM, it’s pretty bad: the attack exposes that key and breaks the encryption. For something like BitLocker that can also use a PIN, it’s a bit better, though to offer real resistance that PIN needs to be quite long. In this scenario, where the PIN can be attacked offline, a 10-digit PIN falls to a GPU in just four minutes. There is an obscure way to enable an “enhanced PIN”, essentially a password, and a strong one makes the offline attack impractical.
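To put rough numbers on that, here’s a back-of-the-envelope sketch. The guess rate is an assumption picked to line up with the four-minute figure above, not a measured GPU benchmark, and the “enhanced PIN” keyspace is just an illustrative 16-character passphrase:

```python
# Back-of-the-envelope: offline guessing time for a numeric PIN vs. a passphrase.
# The guess rate is an assumed figure, not a measured GPU benchmark.

GUESSES_PER_SECOND = 42_000_000  # assumption: roughly 10^10 PINs in ~4 minutes

def worst_case_seconds(keyspace: int, rate: float = GUESSES_PER_SECOND) -> float:
    """Time to exhaust the whole keyspace at the given guess rate."""
    return keyspace / rate

ten_digit_pin = 10 ** 10   # every possible 10-digit numeric PIN
enhanced_pin  = 70 ** 16   # e.g. 16 characters drawn from a ~70-symbol alphabet

print(f"10-digit PIN:  {worst_case_seconds(ten_digit_pin) / 60:.1f} minutes")
print(f"16-char pass:  {worst_case_seconds(enhanced_pin) / (3600 * 24 * 365):.2e} years")
```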

And if hardware glitching a computer seems too complicated, why not just use the leaked MSI keys? Now to be fair, this only seems to allow a bypass of Intel’s BootGuard, but it’s still a blow. MSI suffered a ransomware-style breach in March, but rather than encrypt data, the attackers simply threatened to release the copied data to the world. MSI apparently refused to pay up, and source code and signing keys are now floating in the dark corners of the Internet. There have been suggestions that this leak impacts the entire line of Intel processors, but it seems likely that MSI only had their own signing keys to lose. That’s still plenty bad, given the lack of a revocation system or automatic update procedure for MSI firmware. Continue reading “This Week In Security: TPM And BootGuard, Drones, And Coverups”

This Week In Security: Oracle Opera, Passkeys, And AirTag RFC

There’s a problem with Opera. No, not that kind of opera. The Oracle kind. Oracle OPERA is a Property Management Solution (PMS) that is in use in a bunch of big-name hotels around the world. The PMS is the system that handles reservations and check-ins, talks to the phone system to put room extensions in the proper state, and generally runs the back-end of the property. It’s old code, and handles a bunch of tasks. And researchers at Assetnote found a serious vulnerability. CVE-2023-21932 is an arbitrary file upload issue, and rates at least a 7.2 CVSS.

It’s a tricky one, where the code does all the right things, but gets the steps out of order. Two parameters, jndiname and username, are encrypted for transport, and the sanitization step happens before decryption. The username parameter receives no further sanitization, and is vulnerable to path traversal injection. There are two restrictions on exploitation: the string encryption has to be valid, and the request has to include a valid Java Naming and Directory Interface (JNDI) name. These appear to be the issues that led Oracle to describe this as a “difficult to exploit vulnerability allows high privileged attacker…”.
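To see why the ordering matters, here’s a deliberately simplified sketch. This is not Oracle’s actual code, and XOR stands in for whatever cipher OPERA really uses; the point is that sanitizing before decrypting means the filter only ever sees ciphertext, so a traversal payload sails through untouched.

```python
# Minimal sketch of the ordering bug: sanitizing *before* decrypting inspects
# ciphertext, so a hostile path-traversal string survives into the real value.
# Illustrative only -- not Oracle OPERA's actual routines.

KEY = b"stand-in-transport-key"  # placeholder key, not the real one

def encrypt(plaintext: bytes) -> bytes:
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(plaintext))

decrypt = encrypt  # XOR is symmetric, which keeps the sketch short

def sanitize(value: bytes) -> bytes:
    return value.replace(b"../", b"")  # naive traversal filter

hostile = b"../../webapps/operaweb/shell.jsp"
transported = encrypt(hostile)

# Vulnerable order: the filter runs on ciphertext and finds nothing to strip.
vulnerable = decrypt(sanitize(transported))
# Correct order: the filter runs on the decrypted value.
fixed = sanitize(decrypt(transported))

print(vulnerable)  # b'../../webapps/operaweb/shell.jsp' -> traversal intact
print(fixed)       # b'webapps/operaweb/shell.jsp'       -> traversal stripped
```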

The only problem is that the encryption key is global and static. It was pretty straightforward to reverse engineer the encryption routine, and JNDI strings can be fetched anonymously from a trio of endpoints. This led Assetnote to conclude that Oracle’s understanding of the flaw is faulty, and that a much higher CVSS score is appropriate. Particularly with this Proof of Concept code, it is relatively straightforward to upload a web shell to an Opera system.

The one caveat there is that an attacker has to get network access to that install. These aren’t systems intended to be exposed to the internet, and my experience is that they are always on a dedicated network connection, not connected to the rest of the office network. Even the interconnect between the PMS and phone system is done via a serial connection, making this network flaw particularly hard to get to. Continue reading “This Week In Security: Oracle Opera, Passkeys, And AirTag RFC”

Thermal Camera Plus Machine Learning Reads Passwords Off Keyboard Keys

An age-old vulnerability of physical keypads is visibly worn keys. For example, a number pad with digits clearly worn from repeated use provides an attacker with a clear starting point. The same concept can be applied to keyboards by using a thermal camera with the help of machine learning, but it also turns out that some types of keys and typing styles are harder to read than others.

Researchers at the University of Glasgow show how machine learning can pull details from thermal images like these quickly and effectively.

Touching a key with a fingertip imparts a slight amount of body heat, and that small amount of heat can be spotted by a thermal sensor. We’ve seen this basic approach used since at least 2005, and two things have changed since then: thermal cameras have gotten much more common, and researchers discovered that by combining thermal readings with machine learning, it’s possible to eke out details too slight or subtle to spot by human eye and judgement alone.
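The core intuition is simple enough to sketch in a few lines of toy Python. This is not the Glasgow group’s pipeline (they train a model on real thermal camera frames), and the temperatures below are invented numbers, but it shows the idea: warm keys were pressed, and warmer keys were pressed more recently.

```python
# Toy sketch of thermal keylogging: residual heat reveals *which* keys were
# touched, and relative warmth hints at the *order* they were pressed in.
# All readings here are made-up values for illustration.

AMBIENT_C = 24.0
PRESS_THRESHOLD_C = 0.4   # assumed: minimum heat rise that counts as a press

readings = {                       # per-keycap temperatures from one thermal frame
    "w": 24.6, "o": 24.8, "r": 25.1, "d": 25.4, "s": 25.7,   # pressed, oldest first
    "a": 24.0, "p": 24.1, "q": 24.1,                          # never touched
}

pressed = {k: t for k, t in readings.items() if t - AMBIENT_C > PRESS_THRESHOLD_C}

# Coolest pressed key was (probably) typed first, warmest key typed last.
order_guess = sorted(pressed, key=pressed.get)

print("keys touched:", sorted(pressed))
print("guessed typing order:", "".join(order_guess))   # -> "words"
```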

Here’s a link to the research and findings from the University of Glasgow, which shows how even a 16-symbol password can be attacked with an average accuracy of 55%. Shorter passwords are much easier to decipher, with the system attacking 6- and 8-symbol passwords with accuracies of 92% and 80%, respectively. In the study, thermal readings were taken up to a full minute after the password was entered, but readings taken sooner result in higher accuracy.

A few things make life harder for the system. Fast typists spend less time touching keys, and therefore transfer less heat when they do, making things a little more challenging. Interestingly, the material of the keycaps plays a large role: ABS keycaps retain heat far more effectively than PBT (a material we often see in custom keyboard builds like this one). It also turns out that the tiny amount of heat from the LEDs in backlit keyboards runs effective interference when it comes to thermal readings.

Amusingly, this kind of highly modern attack would be entirely useless against a scramblepad. Scramblepads are vintage devices that mix up which numbers go with which buttons each time the pad is used. Thermal imaging and machine learning would be able to tell which buttons were pressed and in what order, but that still wouldn’t help! A reminder that when it comes to security, tech does matter, but fundamentals can matter more.
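If you want to see why in code, here’s a toy model of a scramblepad, not any particular vendor’s algorithm: the digit-to-button mapping is reshuffled every session, so the observed button presses decode to garbage the next time around.

```python
# Why a scramblepad defeats the thermal attack: the digit shown on each
# physical button is reshuffled every time the pad wakes up, so knowing which
# buttons were pressed (and in what order) says nothing about the code.
# Purely illustrative -- not any real vendor's algorithm.
import random

BUTTONS = list(range(10))          # ten fixed physical button positions

def new_layout() -> dict[int, int]:
    """Return a fresh random mapping of physical button -> displayed digit."""
    digits = list(range(10))
    random.shuffle(digits)
    return dict(zip(BUTTONS, digits))

code = [4, 2, 7, 1]                # the secret the user actually enters

layout = new_layout()
digit_to_button = {d: b for b, d in layout.items()}
buttons_pressed = [digit_to_button[d] for d in code]   # what a thermal camera recovers

print("buttons observed:", buttons_pressed)
# Next session the layout is different, so replaying those buttons fails:
next_layout = new_layout()
print("replay decodes to:", [next_layout[b] for b in buttons_pressed])
```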

This Week In Security: Session Puzzling, Session Keys, And Speculation

Last week we briefly mentioned a vulnerability in the PaperCut software, and more details and a proof of concept have now been published. The vulnerability is one known as session puzzling: essentially, a session variable is used for multiple purposes, or gets set incorrectly. In PaperCut, it was possible to trigger the SetupCompleted class on a server that had already finished that initial setup process, and part of SetupCompleted validated the session of the current user. In a normal first-setup case that might make sense, but since anyone could trigger that code, it allowed anonymous users to jump straight to admin.
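As a rough illustration of the pattern, and emphatically not PaperCut’s real code, here’s what session puzzling looks like in a minimal Flask-style sketch: a setup-only handler that promotes the session to admin and never checks whether setup already happened.

```python
# Minimal sketch of a "session puzzling" bug in a Flask-style app.
# Not PaperCut's actual code -- just the shape of the flaw: a handler meant
# only for first-time setup marks the session as admin, and nothing stops it
# from being hit again after setup has long since completed.
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "demo-only"

SETUP_COMPLETED = True   # setup finished ages ago on this server

@app.route("/setup-completed")
def setup_completed():
    # BUG: no check of SETUP_COMPLETED, so any anonymous visitor can reach
    # this and have their session quietly promoted to an admin session.
    session["authenticated"] = True
    session["role"] = "admin"
    return "Setup finished"

@app.route("/admin")
def admin():
    if session.get("role") != "admin":
        return "Forbidden", 403
    return "Welcome, admin"
```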

The other half of the exploit leverages the “print script” feature, which lets admins write code that runs on printing. A simple java.lang.Runtime.getRuntime().exec('calc.exe'); does the trick to jump from web interface to remote code execution. The indicators of compromise are reasonably generic, including User "admin" logged into the administration interface. and Admin user "admin" modified the print script on printer "".. A Shodan search turns up around 1,700 PaperCut servers accessible from the Internet, which prompts the painfully obvious observation that your internal print auditing solution’s web interface definitely should not be exposed online.

Apache Superset

Superset is a nifty data visualization tool for showing charts, graphs, and all sorts of pretty data sets on a dashboard. It also has some weirdness with using web sessions for user management. The session is stored on the user side in a cookie, signed with a secret key. This works great, unless the key used is particularly weak. And guess what, the default configuration of Superset uses a pre-populated secret key. thisismysecretkey is arguably a bad key to start with, but it turns out it’s also shared by more than 70% of the accessible Superset servers.
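A known secret key turns Flask’s signed session cookies into a forgery kit. Here’s a sketch of the concept using itsdangerous directly; Flask’s real session cookie adds its own salt and serializer on top, so treat this as the idea rather than a drop-in exploit:

```python
# Sketch of why a known Flask SECRET_KEY is game over: the session cookie is
# only *signed*, not encrypted, so anyone holding the key can mint their own.
# itsdangerous is used directly for clarity; Flask layers its own salt and
# serializer on top, so this is the concept, not an exact cookie forger.
from itsdangerous import URLSafeTimedSerializer

serializer = URLSafeTimedSerializer("thisismysecretkey")

# What the server would hand a legitimate user:
legit_cookie = serializer.dumps({"user_id": 1042, "_fresh": True})

# What an attacker who knows the key can mint for themselves:
forged_cookie = serializer.dumps({"user_id": 1, "_fresh": True})  # user 1 = admin?

# The server can't tell them apart -- both signatures verify.
print(serializer.loads(legit_cookie))
print(serializer.loads(forged_cookie))
```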

Continue reading “This Week In Security: Session Puzzling, Session Keys, And Speculation”

This Week In Security: Spandex Tempest, Supply Chain Chain, And NTP

Microsoft’s Threat Intelligence group has announced a new naming scheme for threat actors. It sounds great: groups are named after weather phenomena, based on each group’s motivations or nation of origin, and then each discrete group is given an additional adjective. That’s where things get interesting.

It seems like the adjectives were chosen at random, giving rise to some suitably impressive names, like Ghost Blizzard, Ruby Sleet, or Granite Typhoon. Some of the other names sound like they should be desserts: Caramel Tsunami, Peach Sandstorm, Aqua Blizzard, or Raspberry Typhoon. But then there are the really special names, like Wine Tempest and Zigzag Hail. The absolute winner, though, is Spandex Tempest. No word yet on whether researchers managed to keep a straight face when approving that name.

Chrome 0-day Double

A pair of Chrome browser releases have been minted in the past week, both to address vulnerabilities that are actively being exploited. Up first was CVE-2023-2033, a type confusion in the V8 JS engine. That flaw was reported by Google’s Threat Analysis Group, presumably discovered in the wild, and the fix was pushed as stable on the 14th.

Then, on the 18th, yet another release rolled out to fix CVE-2023-2136, also reported by the TAG, and also being exploited in the wild. It seems likely that both of these 0-days were found in the same exploitation campaign. We look forward to hearing the details on this one. Continue reading “This Week In Security: Spandex Tempest, Supply Chain Chain, And NTP”

Circumvent Facial Recognition With Yarn

Knitwear can protect you from a winter chill, but what if it could keep you safe from the prying eyes of Big Brother as well? [Ottilia Westerlund] decided to put her knitting skills to the test for this anti-surveillance sweater.

[Westerlund] explains that “yarn is a programmable material” containing FOR loops and other similar programming concepts transmitted as knitting patterns. In the video (after the break) she also explores the history of knitting in espionage, using steganography embedded in socks and other knitwear to pass intelligence in unobtrusive ways. This led the UK government to restrict the shipping of handmade knit goods during WWII.

Back in the modern day, [Westerlund] took the Hyperface pattern developed by Adam Harvey and turned it into a knitting pattern. Designed to circumvent detection by Viola-Jones-based facial detection systems, the pattern presents a computer vision system with a number of decoy “faces” to distract it from the covered human faces in an image. While the knitted jumper (sweater for us Americans) can confuse certain face detection systems, [Westerlund] crushes our hope of a fuzzy revolution by noting that it is unsuccessful against the increasingly prevalent neural network-based facial detection systems creeping on our day-to-day activities.
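For the curious, the “Viola-Jones” detector the pattern targets is the classic Haar cascade that ships with OpenCV. A minimal sketch looks something like this, with the input image name as a placeholder; the sweater’s decoy patterns aim to inflate that detection count:

```python
# Sketch of a classic Viola-Jones (Haar cascade) face detector, the kind of
# system a Hyperface-style pattern tries to flood with decoy "faces".
# The image filename is a placeholder; the cascade file ships with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("sweater_photo.jpg")          # placeholder input image
if image is None:
    raise SystemExit("put a test photo at sweater_photo.jpg first")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"detected {len(faces)} face-like regions")  # decoys inflate this count

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", image)
```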

The knitting pattern is available if you want to try your hand at it, but [Westerlund] warns it’s a bit of a pain to actually implement. If you want to try a knitting-and-tech mashup, check out this knitting clock or this software to turn 3D models into knitting patterns.

Continue reading “Circumvent Facial Recognition With Yarn”

Sufficiently Advanced Tech: Has Bugs

Arthur C. Clarke said that “Any sufficiently advanced technology is indistinguishable from magic”. He was a sci-fi writer, though, and not a security guy. Maybe it should read “Any sufficiently advanced tech has security flaws”. Because this is the story of breaking into a car through its headlight.

In a marvelous writeup, half story and half CAN-bus masterclass, [Ken Tindell] details how car thieves pried off the front headlight of a friend’s Toyota and managed to steal the car just by saying the right things into its network. Since the headlight is on the same network as the door locks, pulling out the bulb and sending the “open the door” message repeatedly, along with a lot of other commands to essentially jam some of the other security features, pulls it off.
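For a feel of what “saying the right things” looks like from the attacker’s side, here’s a sketch using python-can over a Linux socketcan interface. The arbitration ID and payload are made-up placeholders, not the actual Toyota frames, which [Ken Tindell]’s write-up sensibly doesn’t hand out.

```python
# Sketch of injecting a frame onto a CAN bus with python-can (socketcan).
# The arbitration ID and payload below are placeholders for illustration,
# not the real unlock message any vehicle uses.
import time
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

unlock_frame = can.Message(
    arbitration_id=0x123,            # placeholder ID, not a real Toyota one
    data=[0x01, 0x00, 0x00, 0x00],   # placeholder "unlock" payload
    is_extended_id=False,
)

# Spamming the frame stands in for the "send it repeatedly while jamming
# other traffic" behaviour described above.
for _ in range(50):
    bus.send(unlock_frame)
    time.sleep(0.02)
```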

Half of you are asking what this has to do with Arthur C. Clarke, and the other half are probably asking what a lightbulb is doing on a car’s data network. In principle, it’s a great idea to have all of the electronics in a car be smart electronics, reporting their status back to the central computer. It’s how we know when our lights are out, or what our tire pressure is, from the driver’s seat. But adding features adds attack surfaces. What seems like magic to the driver looks like a gold mine to the attacker, or to car thieves.

With automotive CAN, security was kind of a second thought, and I don’t mean this uncharitably. The first goal was making sure that the system worked across all auto manufacturers and parts suppliers, and that’s tricky enough. Security would have to come second. And more modern cars have their CAN networks encrypted now, adding layers of magic on top of magic.

But I’m nearly certain that, when deciding to replace the simple current-sensing test of whether a bulb was burnt out, the engineers probably didn’t have the full cost of moving the bulb onto the CAN bus in mind. They certainly had dreams of simplifying the wiring harness, and of bringing the lowly headlight into the modern age, but I’d bet they had no idea that folks were going to use the headlight port to open the doors. Sufficiently advanced tech.