An Automotive Locksmith On The Flipper Zero And Car Theft

Here in the hacker community there’s nothing we love more than a clueless politician making a fool of themselves sounding off about a technology they know nothing about. A few days ago we were rewarded in spades by the Canadian Minister of Innovation, Science and Industry [François-Philippe Champagne], who railed against the Flipper Zero, promising to ban it as a tool that could be used to gain keyless entry to a vehicle.

Of course, our community has roundly debunked this assertion: capable though the Flipper is, the car industry’s keyless entry security measures are many steps ahead of it. We’ve covered the story from a different angle before, but it’s worth returning to it for an automotive locksmith’s view on the matter from [Surlydirtbag].

He immediately debunks the idea of the Flipper being used against keyless entry systems, pointing out that for many years now thieves have instead used RF relay attacks, which access the real key from a distance. He goes on to address another concern, that the Flipper could be used to clone the RFID chip of a car key, and concludes that it can in the case of some very old vehicles whose immobilizers used simple versions of the technology, but not on anything recent enough to interest a car thief.

Of course, to many readers this will not exactly be news. But it’s still important, because some of us will no doubt have to discuss this story with non-technical people who might be inclined to believe such scare stories. Being able to say “Don’t take it from me, take it from an automotive locksmith” might just help. Meanwhile there is still the concern of CAN bus attacks to contend with, something the manufacturers could have headed off had they only separated their on-board subsystems.

Continue reading “An Automotive Locksmith On The Flipper Zero And Car Theft”

Big Candy Is Watching You: Facial Recognition In Vending Machines Upsets University

Most people don’t think too much of vending machines. They’re just those hulking machines that lurk in train stations, airports, and the bowels of school and office buildings, where you can exchange far too much money for a drink or a snack. What few people are aware of is just how much these vending machines have changed over the decades, to the point where they’re now collecting every shred of information they can on whoever interacts with them, down to their age and gender.

How do we know this? We have a few enterprising students at the University of Waterloo to thank. After [SquidKid47] posted a troubling error message displayed by a campus M&M vending machine on Reddit, [River Stanley] decided to investigate the situation. The resulting article was published in the February 16th edition of the university’s digital newspaper, mathNEWS.

In a bout of what the publication refers to as “Actual Journalism”, [Stanley] found that the machine in question was produced by Invenda, who in their brochure (PDF) excitedly note the many ways in which statistics like age, gender, foot traffic, session time and product demographics can be collected. This data, which includes the feed from an always-on camera, is then processed and ‘anonymized statistics’ are sent to central servers for perusal by the vending machine owner.

The good news is that this probably doesn’t mean that facial recognition and similarly personalized information is being stored (or sent to the big vaporous mainframe), as this would violate the GDPR and similar data privacy laws, but there is precedent, with one mall operator’s information kiosks having taken more liberties. Although the University of Waterloo has said that these particular vending machines will be removed, there’s something uncomfortable about knowing that those previously benign vending machines are now ever more like the telescreens in Orwell’s Nineteen Eighty-Four. Perhaps we’re already at the point in this timeline where it’s best to assume that even vending machines are always watching and listening, to learn our most intimate snacking and drinking habits.

Thanks to [Albert Hall] for the tip.

This Week In Security: Wyze, ScreenConnect, And Untrustworthy Job Postings

For a smart home company with an emphasis on cloud-connected cameras, what could possibly be worse than accidentally showing active cameras to the wrong users? Doing it again, to far more users, less than 6 months after the previous incident.

The setup for this breach was an AWS problem that caused a Wyze system outage last Friday morning. As the system was restored, the load spiked and a caching library took the brunt of the unintentional DDoS. This library apparently has a fail state of serving images and videos to the wrong users. An official report from Wyze mentions that this library had been recently added, and that the number of thumbnails shown to unauthorized users was around 13,000. Eek. There’s a reason we recommend picking one of the Open Source NVR systems here at Hackaday.
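Wyze hasn’t published the internals of the library in question, but the generic failure mode is easy to sketch: a cache that only performs its authorization check on a miss. Everything below is hypothetical — the function names, the keying scheme, all of it — just a minimal illustration of how a hit path that skips the ownership check can cross-serve content, not Wyze’s actual code:

```c
#include <stdio.h>
#include <string.h>

#define SLOTS 4

/* Hypothetical thumbnail cache, keyed on the camera alone. */
struct entry {
    char key[64];
    char thumb[96];
};

static struct entry cache[SLOTS];

static unsigned slot_for(const char *key)
{
    unsigned h = 5381;
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h % SLOTS;
}

static const char *get_thumbnail(const char *user, const char *camera)
{
    struct entry *e = &cache[slot_for(camera)];
    if (strcmp(e->key, camera) != 0) {
        /* Cache miss: the backend verifies that `user` owns `camera`
         * before filling the entry (check elided for brevity). */
        snprintf(e->key, sizeof e->key, "%s", camera);
        snprintf(e->thumb, sizeof e->thumb,
                 "thumbnail of %s, fetched for %s", camera, user);
    }
    /* BUG: on a hit, the entry goes back to whoever asked --
     * the ownership check never runs again. */
    return e->thumb;
}

int main(void)
{
    puts(get_thumbnail("alice", "cam-1"));  /* alice warms the cache */
    puts(get_thumbnail("bob",   "cam-1"));  /* bob gets alice's view */
    return 0;
}
```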

ScreenConnect Exploit in the Wild

A pair of vulnerabilities in ConnectWise ScreenConnect were announced this week, proofs of concept were released, and the flaws are already seeing active exploitation. The vulnerabilities are a CVSS 10.0 authentication bypass and a CVSS 8.4 path traversal bypass.

Huntress has a guide out, detailing how embarrassingly easy the vulnerabilities are to exploit. The authentication bypass is a result of a .NET quirk: adding an additional directory to the end of a .aspx URL doesn’t actually change the destination, but is instead captured as PathInfo. This allows a bypass of the protections against re-running the initial setup wizard: hostname/SetupWizard.aspx/literallyanything
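To make the mechanics concrete, here’s a toy model in C — emphatically not ConnectWise’s code — of how an exact-match guard and a prefix-style router can disagree about the very same URL:

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

static bool setup_already_done = true;

/* Guard: block re-running the wizard once setup has completed.
 * It compares the full request path -- exact match only. */
static bool guard_allows(const char *path)
{
    if (setup_already_done && strcmp(path, "/SetupWizard.aspx") == 0)
        return false;
    return true;
}

/* Router: ASP.NET-style matching stops at the .aspx component and
 * hands the remainder ("/literallyanything") to the page as PathInfo. */
static bool routes_to_wizard(const char *path)
{
    return strncmp(path, "/SetupWizard.aspx",
                   strlen("/SetupWizard.aspx")) == 0;
}

int main(void)
{
    const char *request = "/SetupWizard.aspx/literallyanything";
    /* Guard and router disagree, so the wizard runs again. */
    if (guard_allows(request) && routes_to_wizard(request))
        puts("setup wizard re-runs: attacker creates a new admin user");
    return 0;
}
```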

The second vulnerability triggers during extension unpack, as the unzipping process doesn’t prevent path traversal. The most interesting part is that the unzip happens before the extension installation finishes, so an attacker can compromise the box, cancel the install, and leave very little trace of exploitation.
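The fix for this class of bug is well understood. Here’s a minimal, generic sketch of the check an extractor should make before writing out each archive entry — the textbook zip-slip guard, not ScreenConnect’s actual code:

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Reject any archive entry name that could escape the extraction
 * directory: absolute paths, and any ".." path component. */
static bool entry_name_is_safe(const char *name)
{
    if (name[0] == '/' || name[0] == '\\')
        return false;                         /* absolute path */
    const char *p = name;
    while (*p) {
        const char *seg = p;                  /* start of component */
        while (*p && *p != '/' && *p != '\\')
            p++;
        if (p - seg == 2 && seg[0] == '.' && seg[1] == '.')
            return false;                     /* ".." climbs out */
        if (*p)
            p++;                              /* skip separator */
    }
    return true;
}

int main(void)
{
    printf("%d\n", entry_name_is_safe("ext/plugin.dll"));           /* 1 */
    printf("%d\n", entry_name_is_safe("../../webroot/shell.aspx")); /* 0 */
    return 0;
}
```

Continue reading “This Week In Security: Wyze, ScreenConnect, And Untrustworthy Job Postings”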

Your Noisy Fingerprints Vulnerable To New Side-Channel Attack

Here’s a warning we never thought we’d have to give: when you’re in an audio or video call on your phone, avoid the temptation to doomscroll or use an app that requires a lot of swiping. Doing so just might save you from getting your identity stolen through the most improbable vector imaginable — by listening to the sound your fingerprints make on the phone’s screen (PDF).

Now, we love a good side-channel attack as much as anyone, and we’ve covered a lot of them over the years. But things like exfiltrating data by blinking hard drive lights or turning GPUs into radio transmitters always seemed a little too far-fetched to be the basis of a field-practical exploit. PrintListener, as [Man Zhou] et al. dub their experimental system, seems much more feasible, even if it requires a ton of complex math and some AI help. At the heart of the attack are the nearly imperceptible sounds caused by friction between a user’s fingerprints and the glass screen of the phone. These sounds are recorded along with whatever else is going on at the time, such as a video conference or an online gaming session. The recordings are preprocessed to remove background noise and subjected to spectral analysis, which is sensitive enough to detect the whorls, loops, and arches of the unsuspecting user’s finger.
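The heavy lifting happens in that spectral analysis step. As a rough, hypothetical illustration of what the very first stage might look like — and not the researchers’ actual pipeline — here’s one windowed frame of audio pushed through FFTW to get a magnitude spectrum, the raw material from which swipe-friction features would be extracted:

```c
#include <math.h>
#include <stdio.h>
#include <fftw3.h>   /* link with -lfftw3 -lm */

#define N 1024

int main(void)
{
    double in[N];
    fftw_complex out[N / 2 + 1];

    /* Stand-in for a frame of recorded call audio: a faint 6 kHz
     * component (roughly where swipe friction lives) at a 48 kHz
     * sample rate, under a Hann window. */
    for (int i = 0; i < N; i++) {
        double hann = 0.5 * (1.0 - cos(2.0 * M_PI * i / (N - 1)));
        in[i] = hann * 0.01 * sin(2.0 * M_PI * 6000.0 * i / 48000.0);
    }

    fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);
    fftw_execute(plan);

    /* Report the loudest bin; a real pipeline would hand whole
     * spectrograms to a feature extractor instead. */
    int peak = 0;
    for (int k = 1; k <= N / 2; k++) {
        if (hypot(out[k][0], out[k][1]) >
            hypot(out[peak][0], out[peak][1]))
            peak = k;
    }
    printf("peak near %.0f Hz\n", peak * 48000.0 / N);
    fftw_destroy_plan(plan);
    return 0;
}
```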

Once fingerprint patterns have been extracted, they’re used to synthesize a set of five similar fingerprints using MasterPrint, a generative adversarial network (GAN). MasterPrint can generate fingerprints that can unlock phones all by itself, but seeding the process with patterns from a specific user increases the odds of success. The researchers claim they can defeat Automatic Fingerprint Identification System (AFIS) readers between 9% and 30% of the time using PrintListener — not fabulous performance, but still pretty scary given how new this is.

This Week In Security: Filename Not Sanitized, MonikerLink, And Snap Attack!

Reading through a vulnerability report about ClamAV, I came across a phrase that filled me with dread: “The file name is not sanitized”. It’s a feature, VirusEvent, that can be enabled in the ClamAV config. That configuration includes a string formatting function, where the string includes %v and %s, which get replaced with the detected virus name and the file name from the email. And now you see the problem, I hope: the filename is attacker-supplied input.

Where this really gets out of hand is what ClamAV does with this string: execle("/bin/sh", "sh", "-c", buffer_cmd, NULL, env). So let’s talk defensive program design for a minute. When it comes to running a secondary command, there are two general options: system() and the exec*() family of system calls. system() is very simple to use. It pauses execution of the main process and asks the operating system to run a string, just as if the user had typed that command into the shell. While this is very convenient, there is a security problem if any part of that command string is user-supplied. All it takes is a semicolon or ampersand to break assumptions and inject a command.
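Here’s a distilled, self-contained demo of that failure mode, using echo as a stand-in for a real VirusEvent handler (the actual command string is whatever the admin configured). The semicolons embedded in the “file name” turn into a second command:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* The attacker controls the attachment's file name: */
    const char *virus    = "Eicar-Test-Signature";
    const char *filename = "invoice.pdf\"; id; \"";

    char buffer_cmd[256];
    /* Stand-in for a VirusEvent line like: myalert '%v' '%s' */
    snprintf(buffer_cmd, sizeof buffer_cmd,
             "echo \"%s\" \"%s\"", virus, filename);

    char *const env[] = { NULL };
    /* The shell re-parses the whole string, so the embedded
     * "; id; " runs as its own command -- injection achieved. */
    execle("/bin/sh", "sh", "-c", buffer_cmd, (char *)NULL, env);
    perror("execle");
    return 1;
}
```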

To the rescue comes exec(). It’s a bit more complicated to use, requiring the programmer to manually call fork() and wait(). But it’s not running the command via the shell. exec() executes a program directly, totally eliminating the potential for command injection! Except… oops.

Yeah, exec() and related calls don’t offer any security protections when you use them to execute /bin/sh. I suspect the code was written this way to allow running a script without specifying /bin/sh in the config. The official fix was to disable the filename format character, and instead supply it as an environment variable. That certainly works, and that fix is available in 1.0.5, 1.2.2, and 1.3.0.
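And here’s a sketch of the fixed pattern, using the CLAM_VIRUSEVENT_FILENAME variable name from ClamAV’s documentation: the command string is now constant, and the hostile filename only ever travels as data:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* The same hostile file name, now carried out-of-band: */
    char *const env[] = {
        "CLAM_VIRUSEVENT_FILENAME=invoice.pdf\"; id; \"",
        NULL
    };

    /* The command string is fixed at configuration time. A
     * double-quoted expansion is never re-parsed as shell syntax,
     * so the embedded semicolons stay inert. */
    execle("/bin/sh", "sh", "-c",
           "echo \"$CLAM_VIRUSEVENT_FILENAME\"",
           (char *)NULL, env);
    perror("execle");
    return 1;
}
```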

The real danger here is that we have another case where some hardware appliance manufacturer has used ClamAV for email filtering, and uses this configuration by default. That’s how we get orders from CISA to unplug your hardware, because it’s already compromised. Continue reading “This Week In Security: Filename Not Sanitized, MonikerLink, And Snap Attack!”

Why Stealing A Car With Flipper Zero Is A Silly Idea

In another regular installment of politicians making ridiculous statements about technology, Canada’s Minister of Innovation, Science and Industry, [François-Philippe Champagne], suggested banning the Flipper Zero and similar devices from sale in the country, accusing them of being used for ‘stealing cars’ and the like. This didn’t sit right with [Peter Fairlie], who put together a comprehensive overview video of how car thieves really steal cars. Perhaps unsurprisingly, the main method is CAN bus injection, for which a Flipper Zero is actually a terribly clumsy device. Rather, you’d use a custom piece of kit that automates the process.

You can find these devices sold all over the internet as so-called ‘Emergency Start’ tools, all of which exploit weaknesses in the car’s CAN bus network. The common problem appears to be that, with even the car’s lights being part of the CAN network these days, an attacker can gain access to the bus for injection purposes. This way no key fob is needed, and the ignition can be triggered with the usual safeties and lockouts circumvented.
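To see why the Flipper is the wrong tool here, consider how little code frame injection takes once you have physical access to the bus. This is a bare-bones Linux SocketCAN sender; the ID and payload below are placeholders, not any vehicle’s actual unlock message:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    /* Open a raw CAN socket on interface can0. */
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);
    ifr.ifr_name[IFNAMSIZ - 1] = '\0';
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr = { .can_family  = AF_CAN,
                                 .can_ifindex = ifr.ifr_ifindex };
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind"); return 1;
    }

    /* Inject one frame. 0x123 and the zeroed payload are placeholders;
     * a real attack needs the target model's proprietary messages. */
    struct can_frame frame = { .can_id = 0x123, .can_dlc = 8 };
    memset(frame.data, 0, sizeof frame.data);
    if (write(s, &frame, sizeof frame) != sizeof frame) {
        perror("write"); return 1;
    }
    close(s);
    return 0;
}
```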

Ultimately, although the Flipper Zero is a rather cutesy toy, it doesn’t do anything that cannot be done cheaper and more effectively by anyone with a bit of CAN bus knowledge and a disregard for the law.

Thanks to [Stephen Walters] for the tip.

Continue reading “Why Stealing A Car With Flipper Zero Is A Silly Idea”

This Week In Security: Broken Shims, LassPass, And Toothbrushes?

Linux has a shim problem. Which naturally leads to a reasonable question: What’s a shim, and why do we need it? The answer: making Linux work with Secure Boot, and an unintended quirk of the GPLv3.

Secure Boot is the verification scheme in modern machines that guarantees that only a trusted OS can boot. When Secure Boot was first introduced, many Linux fans suggested it was little more than an attempt to keep Linux distros off of consumers’ machines. That fear seems to have been unwarranted, as Microsoft has dutifully kept the Linux shim signed, so we can all run Linux distros on our Secure Boot machines.

So, the shim. It’s essentially a first-stage bootloader that can boot a signed GRUB2 or other target. You might ask, why can’t we just ask Microsoft to sign GRUB2 directly? And that’s where the GPLv3 comes in. That license has an “anti-tivoization” section, which specifies “Installation Information” as part of what must be provided for GPLv3 compliance. Microsoft’s legal team understands that requirement to apply even to this signing process. And since it would totally defeat the point of Secure Boot to release the signing keys, no GPLv3 code gets signed. Instead, we get the shim.

Now that we understand the shim, let’s cover how it’s broken. The most serious vulnerability is a buffer overflow in the HTTP file transfer code. The buffer is allocated based on the size in the HTTP header, but a malicious HTTP server can set that value incorrectly, and the shim code would happily write the real HTTP contents past the end of that buffer, leading to arbitrary code execution. You might ask, why in the world does the shim have HTTP code in it at all? The simple answer is to support UEFI HTTP Boot, a replacement for PXE boot.
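In distilled C, the shape of the bug — and its obvious fix — looks something like this. It’s a sketch, not the shim’s actual source:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Vulnerable pattern: the buffer is sized from the server-controlled
 * Content-Length header, but the copy is sized from the bytes the
 * server actually sent. A lying server overflows the heap. */
static char *fetch(size_t claimed_len, const char *body, size_t body_len)
{
    char *buf = malloc(claimed_len);          /* trusts the header */
    if (!buf)
        return NULL;
    memcpy(buf, body, body_len);              /* BUG: trusts the wire */
    return buf;
}

/* Fixed pattern: never copy more than was allocated. */
static char *fetch_fixed(size_t claimed_len, const char *body,
                         size_t body_len)
{
    if (body_len > claimed_len)
        return NULL;                          /* header lied: bail out */
    char *buf = malloc(claimed_len);
    if (!buf)
        return NULL;
    memcpy(buf, body, body_len);
    return buf;
}

int main(void)
{
    const char payload[] = "way more bytes than the header admitted to";
    free(fetch_fixed(8, payload, sizeof payload));  /* rejected: NULL */
    (void)fetch;  /* vulnerable variant, shown for contrast only */
    return 0;
}
```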

The good news is that this vulnerability can only be triggered when using HTTP boot, and only by connecting to a malicious server or via a man-in-the-middle attack. With this in mind, it’s odd that this vulnerability is rated a 9.8. Specifically, it seems incorrect that this bug is rated as low complexity, or as having a general network attack vector. In Red Hat’s own write-up of the vulnerability, they argue that exploitation is high complexity, and is only possible from an adjacent network. There were a handful of lesser vulnerabilities found, and these were all fixed with shim 15.8. Continue reading “This Week In Security: Broken Shims, LassPass, And Toothbrushes?”