This Week In Security: Good Faith, Easy Forgery, And I18N

There’s a danger in security research that we’ve discussed a few times before. If you discover a security vulnerability on a production system, and there’s no bug bounty, you’ve likely broken a handful of computer laws. Turn over the flaw you’ve found, and you’re most likely to get a “thank you”, but there’s a tiny chance that you’ll get charged with a computer crime instead. Security research in the US is just a little safer now, as the US Department of Justice has issued a new policy stating that “good-faith security research should not be charged.”

While this is a welcome infection of good sense, it would be even better for such a protection to be codified into law. The other caveat is that this policy only applies to federal cases in the US. Other nations, or even individual states, are free to bring charges. So while this is good news, continue to be careful. There are also some caveats about what counts as good faith — if a researcher uses a flaw discovery to extort, it’s not good faith.
Continue reading “This Week In Security: Good Faith, Easy Forgery, And I18N”

TurtleAuth DIY Security Token Gets (Re)designed For Durable, Everyday Use

[Samuel]’s first foray into making DIY hardware authentication tokens was a great success, but he soon realized that a device intended for everyday carry and use has a few different problems to solve, compared to a PCB that lives and works on a workbench. This led to TurtleAuth 2.1, redesigned for everyday use. Lucky for us all, he goes into detail on all the challenges and solutions he faced.

When we covered the original TurtleAuth DIY security token, everything worked fantastically. However, the PCB layout had a few issues that became apparent after a year or so of daily use. Rather than 3D print an enclosure and call it done, [Samuel] decided to try a different idea and craft an enclosure from the PCB layers themselves.

The three-layered PCB sandwich keeps components sealed away and protected, while also providing a nice big touch-sensitive pad on the top, flanked by status LEDs. Space was a real constraint that required a PCB redesign as well as a move to 0402-sized components, but in the end he made it work. As for being able to see the LEDs while not having any component exposed? No problem there; [Samuel] simply filled in the holes over the status LEDs with some hot glue, creating a cheap, effective, and highly durable diffuser that also sealed away the internals.

Making enclosures from PCB material can really hit the spot, and there’s no need to re-invent the wheel when it comes to doing so. Our own [Voja Antonic] laid out everything one needs to know about how to build functional and beautiful enclosures in this way.

This Week In Security: IPhone Unpowered, Python Unsandboxed, And Wizard Spider Unmasked

As conspiracy theories go, one of the more plausible is that a cell phone could be running malicious firmware on its baseband processor, and be listening and transmitting data even when powered off. Nowadays, this sort of behavior is called a feature, at least if your phone is made by Apple, with their Find My functionality. Even with the phone off, the Bluetooth chip runs happily in a low-power state, making these features work. The problem is that this chip doesn’t do signed firmware. All it takes is root-level access to the phone’s primary OS to load a potentially malicious firmware image to the Bluetooth chip.

Researchers at TU Darmstadt in Germany demonstrated the approach, writing up a great paper on their work (PDF). There are a few really interesting possibilities this research suggests. The simplest is hijacking Apple’s Find My system to track someone with a powered-down phone. The greater danger is that this could be used to keep surveillance malware on a device even through power cycles. Devices tend to be secured reasonably well against attacks from the outside network, and hardly at all from attacks originating on the chips themselves. Unfortunately, since unsigned firmware is a hardware limitation, a security update can’t do much to mitigate this, other than the normal efforts to prevent attackers from compromising the OS.
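For the curious, “signed firmware” just means the chip would refuse to run any image whose cryptographic signature doesn’t verify against a public key it already trusts. Here is a minimal sketch of that kind of gate, using Ed25519 from Python’s cryptography package; it is purely an illustration of the missing mitigation, not anything Apple actually ships.

```python
# Illustration only: the kind of check a chip with signed firmware would make.
# Per the researchers, the iPhone's Bluetooth chip performs no such check.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In real hardware the public key would be fused into the chip at manufacture.
vendor_key = Ed25519PrivateKey.generate()           # stands in for the vendor
VENDOR_PUBLIC_KEY = vendor_key.public_key()

def flash_firmware(image: bytes, signature: bytes) -> bool:
    """Accept a firmware image only if its signature verifies."""
    try:
        VENDOR_PUBLIC_KEY.verify(signature, image)
    except InvalidSignature:
        return False                                 # reject tampered images
    # ...write the image to the chip's flash...
    return True

legit = b"official bluetooth firmware"
sig = vendor_key.sign(legit)
print(flash_firmware(legit, sig))                    # True
print(flash_firmware(b"malicious firmware", sig))    # False
```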
Continue reading “This Week In Security: IPhone Unpowered, Python Unsandboxed, And Wizard Spider Unmasked”

This Week In Security: F5 Twitter PoC, Certifried, And Cloudflare Pages Pwned

F5’s BIG-IP platform has a Remote Code Execution (RCE) vulnerability: CVE-2022-1388. This one is interesting, because a Proof of Concept (PoC) was quickly reverse engineered from the patch and released on Twitter, among other places.

HORIZON3.ai researcher [James Horseman] wrote an explainer that sums up the issue nicely. User authentication is handled by multiple layers: one is a Pluggable Authentication Modules (PAM) module, and the other lives in a Java class. In practice this means that if the PAM module sees an X-F5-Auth-Token header, it passes the request on to the Java code, which then validates the token to confirm it as authentic. If a request arrives at the Java service without this header, and instead the X-Forwarded-Host header is set to localhost, the request is accepted without authentication. The F5 authentication scheme isn’t naive; a request without the X-F5-Auth-Token header gets checked by PAM, and dropped if the authentication doesn’t check out.
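In rough pseudocode, that decision looks something like the sketch below. To be clear, this is our reading of the write-up rather than F5’s actual code, and the function names are simplified stand-ins.

```python
# A sketch of the back-end decision as described above -- not F5's real code.
def validate_token(token: str) -> bool:
    """Stand-in for the Java-side token validation."""
    return token == "a-genuinely-issued-token"

def backend_authorize(headers: dict) -> bool:
    token = headers.get("X-F5-Auth-Token")
    if token is not None:
        # Normal path: validate the token the front end passed along.
        return validate_token(token)
    # No token at all? Trust the request if it claims to come from localhost.
    return headers.get("X-Forwarded-Host") == "localhost"  # the dangerous assumption
```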

So where is the wiggle room that allows for a bypass? Yet another HTTP header: the Connection header. Normally this one only comes in two varieties, Connection: close and Connection: keep-alive. Really, this header is a hint describing the connection between the client and the edge proxy, and its contents are the list of other headers to be removed by a proxy. It’s essentially the list of headers that only apply to the connection over the internet, which is exactly the property the bypass abuses, as the sketch below shows.
Continue reading “This Week In Security: F5 Twitter PoC, Certifried, And Cloudflare Pages Pwned”
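Here is a toy model of how those pieces fit together, based on the description above; it illustrates the mechanism and is not the published PoC. A front end that honors the Connection header will happily strip X-F5-Auth-Token before the back end ever sees it, which drops the request straight into the trusted-localhost path.

```python
# Toy model of the header-smuggling bypass; not the actual F5 code or PoC.
def proxy_forward(headers: dict) -> dict:
    """Front end: drop any header named in Connection before handing the request off."""
    drop = {h.strip() for h in headers.get("Connection", "").split(",") if h.strip()}
    drop.add("Connection")
    return {k: v for k, v in headers.items() if k not in drop}

def backend_authorize(headers: dict) -> bool:
    """Same decision as the earlier sketch, repeated so this snippet runs on its own."""
    if "X-F5-Auth-Token" in headers:
        return False  # pretend a forged token fails validation
    return headers.get("X-Forwarded-Host") == "localhost"

# Attacker-controlled request: PAM sees a token and waves the request through,
# but Connection tells the proxy to strip that same token before the back end looks.
request = {
    "X-F5-Auth-Token": "anything-at-all",
    "X-Forwarded-Host": "localhost",
    "Connection": "keep-alive, X-F5-Auth-Token",
}
print(backend_authorize(proxy_forward(request)))  # True: authorized with no real credentials
```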

This Week In Security: UClibc And DNS Poisoning, Encryption Is Hard, And The Goat

DNS spoofing/poisoning is the attack discovered by [Dan Kaminsky] back in 2008 that simply refuses to go away. This week a vulnerability was announced in the uClibc and uClibc-ng standard libraries, making a DNS poisoning attack practical once again.

So for a quick refresher: DNS lookups generally happen over unencrypted UDP, and UDP is a stateless protocol, making it easier to spoof. DNS originally just used a 16-bit transaction ID (TXID) to validate DNS responses, but [Kaminsky] realized that wasn’t sufficient when combined with a technique that generated massive amounts of DNS traffic. That attack could poison the DNS records cached by public DNS servers, greatly amplifying the effect. The solution was to randomize the UDP source port used when sending DNS requests, making it much harder to “win the lottery” with a spoofed packet, because both the TXID and source port would have to match for the spoof to work.
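The back-of-the-envelope numbers show why that helped so much: a blind spoofer now has to guess both values at once.

```python
# Rough odds of a single blind, spoofed response being accepted.
txid_space = 2 ** 16            # 16-bit transaction ID: 65,536 values
port_space = 2 ** 16 - 1024     # roughly the usable ephemeral source ports

print(f"TXID only:        1 in {txid_space:,}")
print(f"TXID + src port:  1 in {txid_space * port_space:,}")
# TXID only:        1 in 65,536
# TXID + src port:  1 in 4,227,858,432
```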

uClibc and uClibc-ng are miniature implementations of the C standard library, intended for embedded systems. One of the things this standard library provides is a DNS lookup function, and this function has some odd behavior. When generating DNS requests, the TXID is incremental — it’s predictable and not randomized. Additionally, the TXID will periodically reset back to its initial value, so not even the entire 16-bit key space is exercised. Not great.
Continue reading “This Week In Security: UClibc And DNS Poisoning, Encryption Is Hard, And The Goat”
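To see why a predictable TXID hands much of that protection back, compare a toy model of the two strategies (a sketch, not uClibc’s actual code). Once an attacker has observed a single query, guessing the next TXID is trivial.

```python
import itertools
import random

def uclibc_style_txids(start: int = 0x2000):
    """Toy model of the flaw: a predictable, incrementing counter."""
    for n in itertools.count():
        yield (start + n) & 0xFFFF

def hardened_txids():
    """What a hardened resolver does: a fresh random 16-bit value per query."""
    while True:
        yield random.getrandbits(16)

seen = next(uclibc_style_txids())
print(f"Observed TXID {seen:#06x}; the next query will use {(seen + 1) & 0xFFFF:#06x}")
print(f"A hardened resolver would have used something like {next(hardened_txids()):#06x}")
```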

Audio Eavesdropping Exploit Might Make That Clicky Keyboard Less Cool

Despite their claims of innocence, we all know that the big tech firms are listening to us. How else to explain the sudden appearance of ads related to something we’ve only ever spoken about, seemingly in private but always in range of a phone or smart speaker? And don’t give us any of that fancy “confirmation bias” talk — we all know what’s really going on.

And now, to make matters worse, it turns out that just listening to your keyboard clicks could be enough to decode what’s being typed. To be clear, [Georgi Gerganov]’s “KeyTap3” exploit does not use any of the usual RF-based methods we’ve seen for exfiltrating data from keyboards on air-gapped machines. Rather, it uses just a standard microphone to capture audio while typing, building a cluster map of the clicks with similar sounds. By analyzing the clusters against the statistical likelihood of certain sequences of characters appearing together — the algorithm currently assumes standard English, and works best on clicky mechanical keyboards — a reasonable approximation of the original keypresses can be reconstructed.
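As a very rough illustration of the idea, and emphatically not [Georgi Gerganov]’s implementation, keystrokes can be clustered by acoustic similarity and the clusters then mapped onto letters. The sketch below cheats twice: it uses synthetic feature vectors instead of real microphone audio, and a crude single-letter frequency match in place of KeyTap3’s n-gram search.

```python
# Toy sketch of the cluster-then-guess idea; not KeyTap3's actual pipeline.
import numpy as np
from sklearn.cluster import KMeans

# Pretend these are acoustic feature vectors, one per detected keystroke.
# In reality they would come from spectral features of each recorded click.
rng = np.random.default_rng(0)
n_keys, n_strokes, n_features = 8, 300, 12
true_keys = rng.integers(0, n_keys, size=n_strokes)
key_signatures = rng.normal(size=(n_keys, n_features))
features = key_signatures[true_keys] + 0.1 * rng.normal(size=(n_strokes, n_features))

# Step 1: group keystrokes that sound alike.
labels = KMeans(n_clusters=n_keys, n_init=10, random_state=0).fit_predict(features)

# Step 2: crude language model -- map the most common clusters to the most
# common English letters (KeyTap3 instead searches over character n-grams).
english_by_frequency = list("etaoinshrdlu")
cluster_order = np.argsort(-np.bincount(labels, minlength=n_keys))
guess = {int(c): english_by_frequency[i] for i, c in enumerate(cluster_order)}
print("".join(guess[int(label)] for label in labels[:40]))
```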

If you’d like to see it in action, check out the video below, which shows the algorithm doing a pretty good job decoding text typed on an unplugged keyboard. Or, try it yourself — the link above implements KeyTap3 in-browser. We gave it a shot, but as a member of the non-mechanical keyboard underclass, it couldn’t make sense of the mushy sounds it heard. Then again, our keyboard inferiority affords us some level of protection from the exploit, so there’s that.

Editor’s Note: Just tried it on a mechanical keyboard with Cherry MX Blue switches and it couldn’t make heads or tails of what was typed, so your mileage may vary. Let us know if it worked for you in the comments.

What strikes us is how simple an exploit like this would be to deploy. Most side-channel attacks require such a contrived scenario for installing the exploit that just breaking in and stealing the computer would be easier. All KeyTap needs is a covert audio recording, and the deed is done.

Continue reading “Audio Eavesdropping Exploit Might Make That Clicky Keyboard Less Cool”


TPM Module Too Expensive? DIY Your Own Easily!

Since Microsoft announced Windows 11’s TPM requirement, the prices for previously abundant and underappreciated TPM add-on boards for PC motherboards have skyrocketed. We’ve been getting chips and soldering them onto boards of our own design instead, and [viktor]’s project is one more example of that. [Viktor] checked online marketplace listings for a TPM module for his Gigabyte AORUS GAMING 3 motherboard, and found that they started at around 150 EUR, which is almost as much as the motherboard itself costs. So, as any self-respecting hacker would, he went the DIY way, and it went off with hardly a hitch.

Following the schematic from the datasheet, he quickly made a simple KiCad layout, matching it to the pinout from his motherboard’s user manual, then ordered the boards from PCBWay and SLB9665 chips from eBay. After both arrived, [viktor] assembled the boards and found one small mistake: he designed the module for 2.54 mm pin headers, but his motherboard had 2.0 mm headers. He wired up a small adapter to make his assembled V1.0 boards work, and Windows 11 installed without any TPM complaints. He has since designed a V1.1 version with an updated connector, and published its design files (untested, but they should work) for us on GitHub. These modules vary by manufacturer and motherboard series, but with each module published, a bunch of hackers can save money and get a weekend project virtually guaranteed to work out.

Regardless of whether the goal of running Windows 11 is ultimately worthwhile, it has been achieved. With scalpers preying on people who just want to use their hardware with a new OS, rolling your own TPM PCB is a very attractive solution! Last time we covered a DIY TPM module for ASRock server motherboards, we had a lively discussion in the comments, and if you’re looking to create your own TPM board, you could do worse than checking them out for advice and insights!