WhatsApp allows for end-to-end encrypted messaging, secure VoIP calls, and, until this week, malware installation when receiving a call. A maliciously crafted SRTCP connection can trigger a buffer overflow and execute code on the target device. The vulnerability was apparently first found by a surveillance company, the NSO Group. NSO is known for Pegasus, a commercial spyware program that they’ve marketed to governments and intelligence agencies, and which has been implicated in a number of human rights violations and even the assassination of Jamal Khashoggi. It seems that this WhatsApp vulnerability was one of the infection vectors used by the Pegasus program. After independently discovering the flaw, Facebook pushed a fixed client on Monday.
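The bug class at work here is an old one: a length field in an attacker-controlled packet is trusted before it’s checked against the size of the receiving buffer. As a purely illustrative sketch (the field layout, names, and buffer size are invented, not WhatsApp’s actual code), here is the kind of validation whose absence turns a crafted packet into an overflow:

```python
import struct

BUFFER_SIZE = 512  # hypothetical fixed-size receive buffer

def parse_packet(packet: bytes) -> bytes:
    """Parse a toy RTCP-style packet: 2-byte declared length, then payload.

    Returns the payload, refusing packets whose declared length disagrees
    with reality or exceeds the receive buffer.
    """
    if len(packet) < 2:
        raise ValueError("truncated header")
    (declared_len,) = struct.unpack_from(">H", packet, 0)
    payload = packet[2:]
    # The fatal mistake in a C parser is copying `declared_len` bytes into a
    # fixed buffer without these two checks:
    if declared_len != len(payload):
        raise ValueError("declared length does not match payload")
    if declared_len > BUFFER_SIZE:
        raise ValueError("payload larger than receive buffer")
    return payload

# A well-formed packet parses; a malicious one with a bogus length is rejected.
good = struct.pack(">H", 4) + b"\x01\x02\x03\x04"
evil = struct.pack(">H", 0xFFFF) + b"\x00" * 16
```

In a memory-unsafe language, skipping those checks before a `memcpy` into a fixed buffer is exactly how a crafted packet becomes code execution.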
Windows XP Patched Against Wormable Vulnerability
What year is it!? This Tuesday, Microsoft released a patch for Windows XP, five years after support for the venerable OS officially ended. It’s reminiscent of the last time Microsoft patched Windows XP, when WannaCry was the crisis. This week, Microsoft patched a Remote Desktop Protocol (RDP) vulnerability, CVE-2019-0708. The vulnerability allows an attacker to connect to the RDP service, send a malicious request, and gain control of the system. Since no authentication is required, the vulnerability is considered “wormable”, or exploitable by a self-replicating program.
Windows XP through Windows 7 all have the flaw, and fixes were rolled out, though notably not for Windows Vista. It’s been reported that it’s possible to download the patch for Server 2008 and manually apply it to Windows Vista. That said, it’s high time to retire the unsupported systems, or at least disconnect them from the network.
The Worst Vulnerability Name of All Time
Thrangrycat, or more accurately “😾😾😾”, is a newly announced vulnerability in Cisco products, discovered by Red Balloon Security. Cisco uses secure boot on many of their devices in order to prevent malicious tampering with device firmware. Secure boot is achieved through the use of a secondary processor, the Trust Anchor module (TAm). This module ensures that the rest of the system is running properly signed firmware. The only problem with this scheme is that the dedicated TAm also has firmware, and that firmware can be attacked. The TAm processor is actually an FPGA, and researchers discovered that it was possible to modify the FPGA bitstream, totally defeating the secure boot mechanism.
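The trust gap is easy to see in miniature. In this hedged sketch (invented names and images, not Cisco’s actual scheme), the anchor verifies the firmware it guards, but nothing ever verifies the anchor itself:

```python
import hashlib

# Hypothetical flash contents: the TAm's own bitstream plus the firmware it vets.
flash = {
    "tam_bitstream": b"...fpga config...",   # loaded unverified from SPI flash
    "firmware": b"signed firmware image",
}

# Digest of the known-good firmware, baked into the anchor's logic.
TRUSTED_FIRMWARE_DIGEST = hashlib.sha256(b"signed firmware image").hexdigest()

def tam_secure_boot(flash_image: dict) -> bool:
    """What the anchor checks: the firmware digest. What it never checks: itself."""
    fw_digest = hashlib.sha256(flash_image["firmware"]).hexdigest()
    return fw_digest == TRUSTED_FIRMWARE_DIGEST

# Tampering with the firmware is caught...
tampered = dict(flash, firmware=b"evil firmware image")
# ...but replacing the bitstream that implements the checker is invisible to the
# checker, which is the essence of Thrangrycat.
backdoored_anchor = dict(flash, tam_bitstream=b"...patched fpga config...")
```

The verifier can only vouch for things below it in the chain; once the root of trust is itself rewritable, every check it performs is meaningless.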
The name of the attack, thrangrycat, might be a satirical shot at other ridiculous vulnerability names. Naming issues aside, it’s an impressive bit of work, numbered CVE-2019-1649. At the same time, Red Balloon Security disclosed another vulnerability that allowed command injection by an authenticated user.
Odds and Ends
- Google discovered that their Bluetooth Security Keys use Bluetooth, and maybe that’s not a great idea.
- Our own Bob Baddeley finally put the Supermicro server story to bed, probably.
- Fifty-year-old aviation technology is hackable, but probably won’t be hacked.
See a security story you think we should cover? Drop us a note in the tip jar!
Red Balloon
or
Red Baloon?
99 Luftballons
Thanks, Fixed!
“That said, it’s high time to retire the unsupported systems, or at least disconnect them from the network.”
Bye, bye, Windows 7.
I’ll keep Win7 on my laptop and PC,
but I usually boot them to Linux.
Do note that it’s still a free upgrade to go from 7 to 10. Just download the Windows 10 disc image, burn it, and then run the setup exe to do the upgrade. All Windows 7 keys are also valid Windows 10 keys. That said, yes, I stay in Linux 99% of the time, too.
Hearing Cortana’s smug self-satisfied voice during the Windows 10 setup fills me with rage.
It was bad enough in previous versions of Windows, where they’d show you a slideshow of how Windows will make your computer faster and more productive (LIES, ALL LIES). Now we have to endure this digital fraud assaulting our ears, too? “A little signin here, a little Wifi there, and you’ll be all set”… go to hell, Cortana.
It’s high time that the finders of these security flaws stopped publishing example code. Their research is proving worse than the problems they are resolving as it effectively puts the malicious code out there so that the flaws can be abused.
In this case, there wasn’t any example code published at all. Keep in mind the history of vulnerability research: it’s not uncommon for a researcher to privately disclose a vulnerability, only to be ignored. Vulnerabilities are publicly announced so that vendors will actually fix them.
Current best practice is 90 days. We privately tell you about the problem, and warn that in 90 days we’ll make it public. It’s humorous how many companies can’t manage to release a patch until a day or two after 90 days are up.
LMAO! What, so that companies like Cisco can sit on their thumbs and slack on putting out a fix? I’m sorry, but I disagree with you completely here: the finders should follow responsible disclosure. That means first privately disclosing the vulnerability to the company responsible for the code and giving them the opportunity to fix the issue before public disclosure, 90 days at minimum (I believe in giving a company more time to put out a fix before public disclosure on the more complicated issues). After that, the vulnerability should be publicly disclosed with working exploit code. This has two purposes. First, in some legal jurisdictions, not disclosing the code publicly could fall under extortion laws, and by disclosing the vulnerability the researchers remove the liability of legal recourse (think about researchers going to companies and saying “pay me or I tell people about this bug”). Second, it forces the people responsible for IT infrastructure to pay attention and update their systems, because some may hold off if they don’t think that the attack is out in the wild, and the same goes for the non-technical and financial people who are involved in such discussions.
There is a reason that the saying “there is no security through obscurity” exists.
That Thrangrycat vulnerability is just amazing. All their application firmware must be properly signed to pass their bootloader checks, and the bootloader itself is protected by an FPGA… but the FPGA – the very core of their entire authentication system – loads its code unencrypted from a plain ol’ SPI flash.
If I had to take a wild guess based on my knowledge of businesses, I’d say that this FPGA bitstream vulnerability was originally noticed late in the development process by a firmware developer, who brought it up to a manager, who passed it along to a hardware engineer, who realized that fixing it would require a hardware redesign and they already sent the board files out to be manufactured. So they all shrugged and said “well, it’ll probably be OK” and here we are.
Well, for that one needs physical access to the equipment, no? That makes it a little harder to pull off. Maybe they just dismissed it as “not their problem”.
No, code execution is enough. This is not that unlikely, there have been many Cisco remote exploits.
Unfortunately, no. The SPI flash can be modified by anyone with root privileges, so this attack can be done remotely by combining it with a known remote injection attack.
It’s particularly amazing when purpose-built microcontrollers to do the job aren’t exactly new (TPMs were only standardized a decade ago, but pre-ISO chips were shipping in some quantity in 2006 if not earlier, and obviously Cisco would have less of a need for standardization when they control both the hardware and the software and so could customize for a specific device’s quirks), and Cisco pumps out (often not exactly cheap) hardware in nontrivial quantity.
Seems like a situation where you either piggyback on TPMs, or get the design in your FPGA fabricated (unless it needs to verify the bootloader constantly or vet memory access or the like, you can probably get away with a fairly low-performance design and process; a few extra seconds of boot time won’t kill anyone) and unflashable without decapping and precise violence.
Explicitly reprogrammable hardware as your hardware root of trust?
I have no desire to put Windows 10 on my machine. For what I do (VB programming, Star Trek Online, Train Simulator and Apple 6502) I don’t need the bloat of 10. I don’t need tiles, or thingamajigs filling up my screen vying for my attention.
A simple Start menu is fine. If I want the mail program I can load it myself, no need to have it load in the background and
show me a tile to remind me it’s there if I instantly need it. Windows XP was great but 7 was the best OS from Microsoft.
I agree – everything past win 7 went too far up the bloatware curve. I compare my 3 year old main PC (win 7) with a very fast one my son is running that is 3 months old (and cost twice what mine did) running win 10 – and he keeps on asking why mine is much snappier…
Win 7 runs very very well on any vaguely modern hardware – if I ‘upgraded’ to win 10 I’d need a new PC and it would still be slower…
Windows XP was widely deployed in embedded systems that cost hundreds of thousands of dollars, and that people should reasonably expect to still work.
I don’t think it’s unreasonable to expect Microsoft to continue releasing patches for glaring issues like this. Especially if they want vendors to choose Windows 10 as an embedded platform.
IMHO
Publishing code to exploit security vulnerabilities is just plain wrong! It’s of the same ilk as publishing instructions for making a bomb.
It’s encouraging abuse and should be regarded as such. It facilitates abuse. And aiding and abetting a crime is a criminal offence and as such they should be prosecuted.
If those that publish are being paid for their findings to prevent publication, then that is extortion.
I am in IT. We spend a huge amount of time trying to decide whether to deploy bug fixes. We have to test them first. There is no guarantee that the fix won’t be worse with new unrelated bugs introduced. W10 1809 is a classic example of horrific bugs.
In reality, many of these obscure bugs would never have been found by hackers if these professional organisations (professional if you get paid for your findings, academic or not) had not been wasting their time in “academia” studying, i.e. looking for, vulnerabilities.
I believe these organisations have a lot to answer for in the disclosure and hence criminal use of their disclosures.
We are worse off for their discoveries, not better off !!!
Are you new here?
That’s pretty naive thinking; without responsible vulnerability disclosures, you’re opening the door to zero day attacks.
Nefarious actors will research vulnerabilities regardless of moral considerations, and giving patchers the time to provide a fix is never a thought on their mind. ETERNALBLUE, for example, caused an estimated $4 billion worth of damage via the WannaCry ransomware attack. Even assuming a conservative patch rate of 10%, $400 million would have been saved if Microsoft had been alerted by the NSA or Shadow Brokers of the vulnerabilities and given the customary 90 days to push a fix (effectively reducing the bug to an n-day). Even if Microsoft didn’t patch the bug in the time allotted, they would at least have had knowledge of the actual bug. Instead, Microsoft had to work double duty to determine both the cause _and_ solution to the bug.
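The back-of-envelope math above is straightforward (the figures are the comment’s own estimates, not audited numbers):

```python
estimated_damage = 4_000_000_000   # upper-bound WannaCry damage estimate, USD
patch_rate = 0.10                  # conservative fraction of systems patched in 90 days

# Damage avoided if that fraction of systems had been patched before the attack.
damage_avoided = estimated_damage * patch_rate
print(f"${damage_avoided:,.0f} avoided")  # prints "$400,000,000 avoided"
```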
All of this doesn’t even account for the actual machines involved; some of these systems are part of a network of safety-critical/medical devices, meaning every _minute_ of earlier warning the company gets potentially saves lives.
Responsible disclosure saves time, money, and lives. Don’t reduce it to criminal activity.