Welcome To The Future, Where Your Microwave Thinks It’s A Steam Oven

It’s fair to say that many of us will at some point have bricked a device by applying the wrong firmware. If we’re lucky then firing up some low-level reflashing tools can save the day and return the item in question to health, but we’re guessing there are plenty among you who’ve had to discard a PCB or replace an inaccessible microcontroller chip as a result.

Spare a thought then for the consumer appliance manufacturer Electrolux, whose AEG subsidiary has bricked combi microwave ovens across a swathe of Western Europe (Dutch, Google Translate link). They managed this improbable feat by distributing an over-the-air update that contained the firmware for a steam oven instead. Worse still, the update has disabled over-the-air updates, meaning that any fix requires physical access to the oven.

We can’t help sympathising with whichever poor AEG engineer has had the ultimate in bad days at work, but at the same time we should perhaps consider the difference between a computer and an appliance, and whether there should be a need for an oven to phone home in the first place. Sure, such devices have been computer-controlled for decades, but should a microcontroller doing a control task need constant updates?

We’re guessing this oven has some kind of cloud aspect to it which allows AEG to slurp customer data and lets the user control it via their app, but even so it should serve as a warning to anyone tempted by an internet-connected kitchen appliance. If the internet isn’t necessary for the food to be cooked, don’t connect it.

We feel sorry for anyone who might have put a pizza in the oven just before it was bricked, and watched in disappointment as their tasty meal remained uncooked.

This Week In Security: More Protestware, Another Linux Vuln, And TLStorm

It seems I have made my tiny, indelible mark on internet security history, with the term “protestware”. As far as I can tell, I first coined the term for this particular flavor of malware while covering the Faker.js/Colors.js vandalism in January.

Yet another developer, [RIAEvangelist], has inserted some malicious code (Mirror, since the complaint has been deleted) in an existing project, in protest of something, in this case the war in Ukraine. The behavior here is to write a nice note on the desktop, preaching “peace not war”. However, a few versions of this sample have a nasty surprise — it does a GeoIP lookup, and attempts to wipe the entire drive if it detects a Russian location. Yes, node-ipc versions 10.1.1 and 10.1.2 contain straight-up malware. It’s not clear how many users ran the potentially malicious code, as it was quickly reverted and 10.1.3 released. Up-to-date versions of node-ipc still create the desktop file, and Unity Hub has already confirmed it shipped the library in this state and has since issued a hotfix.
Continue reading “This Week In Security: More Protestware, Another Linux Vuln, And TLStorm”
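
If you’re wondering whether one of your own Node projects pulled in the affected releases, a quick scan of the lockfile is enough to find out. The sketch below is our own quick check, not an official tool: it assumes the npm v7+ package-lock.json layout, and only knows about the two bad versions named above.

```python
# Hypothetical helper (not an official tool): scan an npm v7+ package-lock.json
# for the node-ipc releases reported to carry the destructive payload.
import json

KNOWN_BAD = {("node-ipc", "10.1.1"), ("node-ipc", "10.1.2")}

def flag_bad_packages(lockfile_path="package-lock.json"):
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = []
    # npm v7+ lockfiles list every installed package under "packages",
    # keyed by its node_modules path.
    for path, meta in lock.get("packages", {}).items():
        if not path:
            continue  # the empty key is the root project itself
        name = path.split("node_modules/")[-1]
        if (name, meta.get("version")) in KNOWN_BAD:
            hits.append((name, meta["version"]))
    return hits

if __name__ == "__main__":
    for name, version in flag_bad_packages():
        print(f"WARNING: {name}@{version} is a known-malicious release")
```

A loose semver range in a dependency tree is all it takes to auto-upgrade into a release like this, which is presumably how downstream projects ended up shipping it.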

This Week In Security: DDoS Techniques, Dirty Pipe, And Lapsus$ Continued

Denial-of-Service (DoS) amplification. Relatively early in the history of the Internet — it was only 14 years old at the time — the first DoS amplification attack was discovered. [TFreak] put together smurf.c, likely in 1997, though it’s difficult to nail the date down precisely.

The first real DoS attack had only happened a year before, in 1996. Smurf worked by crafting ICMP echo requests with spoofed source addresses, and sending those packets to a network’s broadcast address. Every host that received the request would send its reply to the spoofed source address, which belonged to the target, and if multiple hosts responded, you got a bigger DoS attack for free. Fast forward to 1999, and the first botnet pulled off a Distributed DoS, DDoS, attack. Ever since then, there’s been an ongoing escalation of DDoS traffic size and the capability of mitigations.

DNS and NTP quickly became the popular choices for amplification, with NTP requests managing an amplification factor of 556, meaning that for every byte an attacker sent, the amplifying intermediary would send 556 bytes on to the victim. You may notice that so far, none of the vulnerable services use TCP. The three-way handshake of TCP generally prevents the sort of misdirection needed for an amplified attack. Put simply, you can’t effectively spoof your source address with TCP.
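
To put that factor of 556 in concrete terms, here’s the back-of-the-envelope arithmetic. The attacker’s bandwidth figure below is purely illustrative, not taken from any particular incident.

```python
# Back-of-the-envelope DDoS amplification arithmetic. The 556x figure for NTP
# comes from the text above; the attacker's uplink speed is an invented example.
def reflected_traffic(attacker_bps: float, amplification: float) -> float:
    """Traffic arriving at the victim, given what the attacker can send."""
    return attacker_bps * amplification

attacker_uplink = 100e6      # a single 100 Mbit/s connection, for illustration
ntp_factor = 556             # bytes out per byte in

victim_bps = reflected_traffic(attacker_uplink, ntp_factor)
print(f"{victim_bps / 1e9:.1f} Gbit/s lands on the victim")  # ~55.6 Gbit/s
```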

There are a pair of new games in town, with the first being a clever use of “middleboxes”: devices like firewalls, Intrusion Prevention Systems, and content filters. These devices watch traffic and filter content or potential attacks. The key here is that many such devices aren’t actually tracking TCP handshakes; doing so would be prohibitively memory- and CPU-intensive. Instead, most such devices just inspect as many packets as they can. This has the unexpected effect of defeating the built-in anti-spoofing of TCP.

An attacker can send a spoofed TCP packet, no handshake required, and a vulnerable middlebox will miss the fact that it’s spoofed. While that’s interesting in itself, what’s really notable is what happens when the packet appears to be a request for a vulnerable or blocked resource. The appliance tries to interrupt the stream and inject an error message back to the requester. Since the requester can be spoofed, this allows using these devices as DDoS amplifiers. As some of these services respond to a single packet with what is essentially an entire web page to convey the error, the amplification factor is off the charts. This research was published in August 2021, and in late February of this year researchers at Akamai saw DDoS attacks actually using this technique in the wild.

The second new technique is even more alien. Certain Mitel PBXs have a stress-test capability, essentially a speed test on steroids. It’s intended to be used only on an internal network, not an external target, but until a recent firmware update that wasn’t enforced. For nearly 3,000 of these devices, an attacker could send a single packet and trigger the test against an arbitrary host. This attack, too, has recently been seen in the wild, though in what appear to be test runs. The stress test can last up to 14 hours at worst, leading to a maximum amplification factor of over four billion, measured in packets. The biggest problem is that phone systems like these are generally never touched unless there’s a problem, and there’s a decent chance that no one on site has the login credentials. That is to say, expect these to be vulnerable for a long time to come. Continue reading “This Week In Security: DDoS Techniques, Dirty Pipe, And Lapsus$ Continued”

The Invisible Battlefields Of The Russia-Ukraine War

Early in the morning of February 24th, Dr. Jeffrey Lewis, a professor at California’s Middlebury Institute of International Studies, watched Russia’s invasion of Ukraine unfold in real time, with troop movements overlaid atop high-resolution satellite imagery. This wasn’t privileged information — anybody with an internet connection could access it, if they knew where to look. He was watching a traffic jam on Google Maps slowly inch towards and across the Russia-Ukraine border.

As he watched the invasion begin along with the rest of the world, another, less-visible facet of the emerging war was beginning to unfold on an ill-defined online battlefield. Digital espionage, social media, and online surveillance have become indispensable instruments in the tool chest of a modern army, and both sides of the conflict have been putting these tools to use. Combined with civilian access to information unlike anything the world has seen before, this promises to be a war like no other.

Modern Cyberwarfare

The first casualties in the online component of the war have been websites. Two weeks ago, before the invasion began en masse, Russian cyberwarfare agents launched distributed denial of service (DDoS) attacks against Ukrainian government and financial websites. Subsequent attacks have temporarily downed the websites of Ukraine’s Security Service, Ministry of Foreign Affairs, and government. A DDoS attack is a relatively straightforward way to quickly take a server offline. A network of internet-connected devices, either owned by the aggressor or infected with malware, floods a target with requests, as if millions of users hit “refresh” on the same website at the same time, repeatedly. The goal is to overwhelm the server such that it isn’t able to keep up and stops replying to legitimate requests, like a user trying to access a website. Russia denied involvement with the attacks, but US and UK intelligence services have evidence they believe implicates Moscow. Continue reading “The Invisible Battlefields Of The Russia-Ukraine War”
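
As a toy illustration of why that works (the numbers are invented, and this is a model, not attack tooling): once the junk requests vastly outnumber the real ones, the server’s fixed capacity is spent almost entirely on the flood.

```python
# Toy model of request starvation during a DDoS. All numbers are invented.
CAPACITY = 1_000   # requests per second the server can actually answer
LEGIT = 200        # genuine requests per second
FLOOD = 50_000     # bot requests per second during the attack

def served_fraction(legit: int, flood: int, capacity: int) -> float:
    """If the server answers incoming requests indiscriminately, what fraction
    of the legitimate ones make it through?"""
    total = legit + flood
    return 1.0 if total <= capacity else capacity / total

print(f"Before the flood: {served_fraction(LEGIT, 0, CAPACITY):.0%} of real requests served")
print(f"During the flood: {served_fraction(LEGIT, FLOOD, CAPACITY):.0%} of real requests served")
```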

This Week In Security: Ukraine, Nvidia, And Conti

The geopolitics surrounding the invasion of Ukraine are outside the scope of this column, but the cybersecurity ramifications are certainly fitting fodder. The challenge here is that almost everything of note that has happened in the last week has been initially linked to the conflict, but in several cases, the reported link hasn’t withstood scrutiny. We do know that the Vice Prime Minister of Ukraine put out a call on Twitter for “cyber specialists” to go after a list of Russian businesses and state agencies. Many of the sites on the list did go down for some time, the digital equivalent of tearing down a poster. In response, the largest Russian ISP stopped announcing BGP routes to some of the targeted sites, effectively ending any attacks against them from the outside.

A smattering of similar events have unfolded over the last week, like electric car charging stations in Russia refusing to charge and displaying a political message, “GLORY TO UKRAINE”. Not all the attacks have been so trivial. Researchers at ESET have identified HermeticWiper, a bit of malware with no purpose other than to destroy data. It has been found on hundreds of high-value targets, likely causing much damage. It appears to be the same malware that Microsoft has dubbed FoxBlade, and Microsoft has published details about its response. Continue reading “This Week In Security: Ukraine, Nvidia, And Conti”

Pictured: the modules described in the article (two copies of the challenge were built, hence two rows of modules).

Spaceship Repair CTF Covers Hardware Hacker Essentials

At even vaguely infosec-related conferences, CTFs are a staple. For KernelCon 2021, [Tyler Rosonke] resolved to create a challenge that broke with tradition, entertaining and teaching people in a different way while satisfying the constraints of that year’s remote participation plans. His imagination went wild in all the right places, and a beautifully executed multi-step hardware challenge was built – in only two copies!

Story behind the challenge? Your broken spaceship has to be repaired so that you can escape the planet you’re stuck on. The idea was to get a skilled, seasoned hacker solving challenges for our learning and amusement – and that turned out to be none other than [Joe “Kingpin” Grand]!

The modules themselves are what caught our attention. Designed to test a wide array of hardware hacker skills, they cover soldering, signal sniffing, logic gates, EEPROM dumping and more – and you have to apply all of these successfully for liftoff. If you thought “there’s gotta be a 555 involved”, you weren’t wrong either: there’s a module where you have to reconfigure a circuit with one!

KernelCon is a volunteer-driven infosec conference in Omaha, and its 2022 installment starts in a month – we can’t wait to see what it brings! Anyone doing hardware CTFs will have something to learn from their stories, it seems. The hacking session, from start to finish, was recorded for our viewing pleasure; linked below as an hour-and-a-half video, it should make for great background for your own evening of leisurely reverse engineering!

This isn’t the first time we’ve covered [Tyler]’s handiwork, either. In 2020, he programmed a batch of KernelCon badges while employing clothespins as ISP clips. Security conferences have most certainly learned just how much fun you can have with hardware, and if you ever need a case study for that, our review of 2019 CypherCon won’t leave you hanging.

Continue reading “Spaceship Repair CTF Covers Hardware Hacker Essentials”

You Break It, We Fix It

Apple’s AirTags have caused a stir, but for all the wrong reasons. First, they turn all iPhones into Bluetooth LE beacon repeaters, without the owners’ permission. The phones listen for the AirTags, encrypt their location, and send the data on to iCloud, where the tag’s owner can decrypt the location and track it down. Bad people have figured out that this lets them track their targets without their knowledge, turning all iPhone users into potential accomplices to stalking, or worse.
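
The privacy-preserving part of that design is that a finder phone encrypts its location against a public key the tag broadcasts, so the servers in the middle only ever see ciphertext. Here’s a minimal sketch of that ECIES-style idea; the curve, key-derivation label, and message format are our own stand-ins for illustration, not the actual Find My protocol.

```python
# Minimal sketch of Find My style end-to-end encrypted location reports.
# The real protocol differs in its details; this just shows the ECIES idea:
# only the tag owner's private key can decrypt what a finder phone reports.
import json
import os

from cryptography.hazmat.primitives.asymmetric.ec import ECDH, SECP224R1, generate_private_key
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# The owner holds the private key; the tag broadcasts the matching public key.
owner_key = generate_private_key(SECP224R1())
tag_public_key = owner_key.public_key()

def derive_key(private_key, public_key) -> bytes:
    shared = private_key.exchange(ECDH(), public_key)
    return HKDF(SHA256(), 32, None, b"location-report").derive(shared)

def finder_report(location: dict, tag_pub):
    """A finder phone encrypts its own location against the tag's public key."""
    ephemeral = generate_private_key(SECP224R1())
    key = derive_key(ephemeral, tag_pub)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(location).encode(), None)
    # Only the ephemeral public key, nonce, and ciphertext go to the cloud.
    return {"eph_pub": ephemeral.public_key(), "nonce": nonce, "ct": ciphertext}

def owner_decrypt(report, owner_private) -> dict:
    """Only the owner, holding the private key, can recover the location."""
    key = derive_key(owner_private, report["eph_pub"])
    return json.loads(AESGCM(key).decrypt(report["nonce"], report["ct"], None))

report = finder_report({"lat": 41.2565, "lon": -95.9345}, tag_public_key)
print(owner_decrypt(report, owner_key))
```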

Naturally, Apple has tried to respond by implementing some privacy-protecting features. But they’re imperfect to the point of being almost useless. For instance, AirTags now beep once they’ve been out of range of their owner’s phone for a while, which would surely alert the target that they’re being tracked, right? Well, unless the evil-doer took the speaker out, or bought one with the speaker already removed — and there’s a surprising market for these online.

If you want to know that you’re being traced, Apple “innovated with the first-ever proactive system to alert you of unwanted tracking”, which almost helped patch up the problem they created, but it only runs on Apple phones. It’s not clear what they meant by “first-ever” because hackers and researchers from the SeeMoo group at the Technical University of Darmstadt beat them to it by at least four months with the open-source AirGuard project that runs on the other 75% of phones out there.

Along the way, the SeeMoo group also reverse engineered the AirTag system, allowing anything that can send BLE beacons to play along. This opened the door for [Fabian Bräunlein]’s ID-hopping “Find You” attack that breaks all of the tracker-detectors by using an ESP32 instead of an AirTag. His basic point is that most of the privacy guarantees that Apple is trying to make on the “Find My” system rely on criminals using unmodified AirTags, and that’s not very likely.
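
To see why ID-hopping is so effective, consider what a tracker detector fundamentally has to do: flag a broadcast identity that keeps reappearing as you move around. The heuristic below is a simplified stand-in of our own (the thresholds and names are invented, and real apps like AirGuard are far more sophisticated); a tag that presents a fresh identity on every advertisement never accumulates enough sightings to trip it.

```python
# Simplified sketch of a tracker-detection heuristic: flag a BLE identity that
# follows the observer over time and distance. An ID-hopping tag defeats this,
# because no single identity ever racks up enough sightings.
from collections import defaultdict
from math import asin, cos, radians, sin, sqrt

def km_between(a, b) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

sightings = defaultdict(list)  # broadcast identity -> [(timestamp_s, (lat, lon)), ...]

def record(identity, timestamp_s, position, min_sightings=4, min_km=1.0, min_minutes=30):
    """Record a sighting; return True when this identity looks like it's following us."""
    sightings[identity].append((timestamp_s, position))
    observations = sightings[identity]
    if len(observations) < min_sightings:
        return False
    elapsed_minutes = (observations[-1][0] - observations[0][0]) / 60
    distance_km = km_between(observations[0][1], observations[-1][1])
    return elapsed_minutes >= min_minutes and distance_km >= min_km
```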

To be fair, Apple can’t win here. They want to build a tracking network where only the good people do the tracking. But the device can’t tell if you’re looking for your misplaced keys or stalking a swimsuit model. It can’t tell if you’re silencing it because you don’t want it beeping around your dog’s neck while you’re away at work, or because you’ve planted it on a luxury car that you’d like to lift when its owners are away. There’s no technological solution for that fundamental problem.

But hackers are patching up the holes they can, and making the other holes visible, so that we can at least have a reasonable discussion about the tech’s tradeoffs. Apple seems content to have naively opened up a Pandora’s box of privacy violation. Somehow it’s up to us to figure out a way to close it.