One of the pleasures of consuming old science fiction movies and novels is that they capture the mood of the time in which they were written. Captain Kirk was a 1960s guy and Picard was a 1990s guy, after all. Cold War science fiction often dealt with invasion. In the 1960s and 70s, people were afraid of losing their jobs to computers, so science fiction often featured morality tales of robots running amok, reminding us what a bad idea it was to give machines too much power. As it turns out, robots might be dangerous, but not for the reasons we thought. The robots won’t turn on us by themselves, but they could be hacked. To that end, there’s a growing interest in robot cybersecurity, and Alias Robotics is releasing Alurity, a toolbox for exactly that.
Currently, the toolbox is available for Linux and MacOS, with some support for Windows. It targets 25 base robots, including the usual suspects. If you want more technical details, there’s a white paper from when the product entered testing.
Despite the popularity of social media, for communication that actually matters, e-mail reigns supreme. Crucial to the smooth operation of businesses worldwide, it’s prized for its reliability. Google is one of the world’s largest e-mail providers, both with its consumer-targeted Gmail product and with G Suite for business customers. [Jeffrey Paul] is a user of the latter, and was surprised to find that URLs in incoming emails were being modified by the service when fetched via the Internet Message Access Protocol (IMAP) used by external email readers.
This change appears to make it impossible for IMAP users to see the original email without logging into the web interface, breaks verification of cryptographic signatures, and came as a surprise.
Security Matters
A test email sent to verify the edits made by Google’s servers. Top: the original email; bottom: what was received.
For a subset of users, it appears Google is modifying URLs in the body of emails to instead go through their own link-checking and redirect service. This involves actually editing the body of the email before it reaches the user. This means that even those using external clients to fetch email over IMAP are affected, with no way to access the original raw email they were sent.
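If you want to check whether your own mail is affected, the raw message fetched over IMAP is the place to look. Here’s a minimal Python sketch using the standard imaplib and email modules; the host, credentials, and the redirect pattern it searches for are assumptions for illustration, not a confirmed fingerprint of Google’s rewriting.

```python
# Rough self-check sketch: pull recent messages over IMAP and look for links
# that appear to have been rewritten through Google's redirector. The host,
# credentials, and the redirect pattern are assumptions for illustration.
import email
import imaplib
import re

REDIRECT = re.compile(rb"https://www\.google\.com/url\?q=", re.IGNORECASE)

imap = imaplib.IMAP4_SSL("imap.gmail.com")
imap.login("user@example.com", "app-password")
imap.select("INBOX")

_, data = imap.search(None, "ALL")
for num in data[0].split()[-10:]:                 # ten most recent messages
    _, parts = imap.fetch(num.decode(), "(RFC822)")
    raw = parts[0][1]                             # raw RFC822 bytes as delivered
    if REDIRECT.search(raw):
        subject = email.message_from_bytes(raw)["Subject"]
        print("Rewritten links in:", subject)

imap.logout()
```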
The security implications are serious enough that many doubted the initial story, suspecting that the editing was only happening within the Gmail app or through the web client. However, a source claiming to work for Google confirmed that the new feature is being rolled out to G Suite customers, and can be switched off if so desired. Reaching out to Google for comment, we were directed to their help page on the topic.
The stated aim is to prevent phishing, with Google’s redirect service including a link checker to warn users who are traveling to potentially dangerous sites. For many, though, this explanation doesn’t pass muster. Forcing users through a Google server to view the original URL they were sent strikes many as an egregious breach of privacy, and a security concern to boot. It allows the search giant to further extend its tendrils of click tracking into even private email conversations. For some, the implications are worse. Cryptographically signed messages, such as those using PGP or GPG, are broken by the tool; because the content of the email body is modified in transit, the message no longer verifies against the original signature. Of course, this is the value of signing your messages: it becomes much easier to detect such alterations between what was sent and what was received.
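That failure is easy to demonstrate. Below is a minimal sketch using the python-gnupg package, assuming a secret key (without a passphrase prompt) already sits in the local keyring; the URL and the simulated rewrite are made up, but any change to a clear-signed body has the same effect.

```python
# Clear-sign a message body, then show that editing the body afterwards
# (as a URL-rewriting service would) makes signature verification fail.
# Assumes python-gnupg and an existing secret key in the default keyring.
import gnupg

gpg = gnupg.GPG()

original = "Here is the link: https://example.com/report\n"
signed = str(gpg.sign(original))        # ASCII-armored, clear-signed text

print(gpg.verify(signed).valid)         # True: the body matches the signature

# Simulate the rewrite inside the signed block.
tampered = signed.replace(
    "https://example.com/report",
    "https://www.google.com/url?q=https%3A%2F%2Fexample.com%2Freport",
)
print(gpg.verify(tampered).valid)       # False: the alteration is detected
```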
Inadequate Disclosure
Understandably, many were up in arms that the company would implement such a measure with no consultation or warning ahead of time. The content of an email is sacrosanct, in many respects, and tampering with it in any form will always be condemned by the security conscious. If the feature is a choice for the user, and can be turned off at will, then it’s a useful tool for those who want it. But this discovery was a surprise to many, making it hard to believe it was adequately disclosed before roll-out. The wording of the FAQ screenshot above hints at this being part of an A/B test on Google’s part, not something applied to all accounts. Features being tested on your email account should be disclosed, yet they are not.
Protecting innocent users against phishing attacks is a laudable aim, and we can imagine many business owners enabling such a feature for exactly that reason. It’s another case where privacy is willingly traded for the idea of security. While the uproar is limited due to the specific nature of the implementation thus far, we would expect further desertion of Google’s email services by the tech savvy if such practices were to spread to the mainstream Gmail product. Regardless of what happens next, it’s important to remember that the email you read may not be the one you were sent, and act accordingly.
Update 30/10/2020: It has since come to light that for G Suite users with Advanced Protection enabled, it may not be possible to disable this feature at all.
This week, the first details of BleedingTooth leaked onto Twitter, setting off a bit of a frenzy. The full details have yet to be released, but what we know is concerning enough. First off, BleedingTooth isn’t a single vulnerability, but is a set of at least 3 different CVEs (Shouldn’t that make it BleedingTeeth?). The worst vulnerability so far is CVE-2020-12351, which appears to be shown off in the video embedded after the break.
Most standardized tests have a fee: the SAT costs $50, the GRE costs $200, and the NY Bar Exam costs $250. This year, the bar exam came at a much larger cost for recent law school graduates — their privacy.
Many in-person events have had to find ways to move to the internet this year, and exams are no exception. We’d like to think that online exams shouldn’t be a big deal. It’s 2020. We have a pretty good grasp on how security and privacy should work, and it shouldn’t be too hard to implement sensible anti-cheating features.
It shouldn’t be a big deal, but for one software firm, it really is.
The NY State Board of Law Examiners (NY BOLE), along with several other state exam boards, chose to administer this year’s bar exam via ExamSoft’s Examplify. If you’ve missed out on the Examplify Saga, following the Diploma Privilege for New York account on Twitter will get you caught up pretty quickly. Essentially, according to its users, Examplify is an unmitigated disaster. Let’s start with something that should have been settled twenty years ago.
GitHub has enabled free code analysis on public repositories. This is the fruit of the purchase of Semmle, almost exactly one year ago. Anyone with write permission to a repository can go into the settings and enable scanning. Beyond the obvious use case of finding vulnerabilities, an exciting option is to automatically analyze pull requests and flag potential security problems. I definitely look forward to seeing this tool in action.
The Code Scanning option is under the Security tab, and the process to enable it only takes a few seconds. I flipped the switch on one of my repos, and it found a handful of issues worth looking into. An important note: anyone can run the tool on a forked repo and see the results. If CodeQL finds an issue, it’s essentially publicly available for anyone who cares to look for it.
Simpler Code Scanning
On the extreme other hand, [Will Butler] wrote a guide to searching for exploits using grep. A simple example: if raw shows up in code, it often signals an unsafe operation. The terms fixme or todo, often in comments, can signal a known security problem that has yet to be fixed. Another example is unsafe, which is an actual keyword in some languages, like Rust. If a Rust project is going to have vulnerabilities, they will likely be in an unsafe block. There are some other language-dependent pointers, and other good tips, so check it out.
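For a sense of what that looks like in practice, here’s a small Python stand-in for those grep commands; the keyword list and file extensions are illustrative choices, not [Will Butler]’s exact recipe.

```python
# Walk a source tree and flag lines containing keywords that often mark
# risky or unfinished code, roughly equivalent to a handful of grep calls.
import os
import re

KEYWORDS = re.compile(r"\b(unsafe|raw|fixme|todo)\b", re.IGNORECASE)
EXTENSIONS = (".rs", ".c", ".cpp", ".go", ".py")

def scan(root="."):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as handle:
                for lineno, line in enumerate(handle, start=1):
                    if KEYWORDS.search(line):
                        print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    scan()
```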
[Bertrand Fan] is not a fan of the tiny, hard-to-actuate button on the average Yubikey. Before all that is 2020 occurred, [Bert] had the little 2FA nano-donglette plugged into a spare USB port on the side of their laptop so that it was always available wherever the laptop traveled. Now that working from home is the norm, [Bert] has the laptop off to the side, far out of reach.
The solution is a robotic button presser. It runs on a Wemos D1 mini and uses a small stepper motor to push a 3D-printed finger along a rack-and-pinion actuator. Since the Yubikey requires capacitive touch, [Bert] added a screw to the fingertip that’s wired to ground. Now all [Bert] has to do is press a decidedly cooler key to make the finger press the button for him. Check out a brief demo after the break.
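[Bert]’s firmware isn’t reproduced here, but the logic is simple enough to sketch. The MicroPython snippet below is a rough approximation, assuming a small geared stepper on a ULN2003-style driver; the pin assignments, step counts, and delays are invented for illustration.

```python
# Rough MicroPython sketch of the idea, not [Bert]'s actual firmware.
# Assumes four GPIOs of the Wemos D1 mini drive the stepper coils through
# a ULN2003-style driver; pin numbers and travel are made-up values.
from machine import Pin
import time

COILS = [Pin(p, Pin.OUT) for p in (14, 12, 13, 15)]   # D5, D6, D7, D8
SEQUENCE = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]

def step(count, delay_ms=3):
    direction = 1 if count > 0 else -1
    for i in range(abs(count)):
        for pin, level in zip(COILS, SEQUENCE[(i * direction) % 4]):
            pin.value(level)
        time.sleep_ms(delay_ms)

def press_yubikey(travel=200, dwell_ms=1500):
    step(travel)               # drive the grounded finger onto the key
    time.sleep_ms(dwell_ms)    # linger long enough for the capacitive touch
    step(-travel)              # retract so the key isn't held down

press_yubikey()
```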
Ah, the ever-present PDF, and our love-hate relationship with the format. We’ve lost count of how many vulnerabilities have been fixed in PDF software, but it’s been a bunch over the years. This week, we’re reminded that Adobe isn’t the only player in PDF-land, as Foxit released a round of updates fixing a couple of serious problems. Among the vulnerabilities, a handful could lead to RCE, so if you use Foxit or support users who do, be sure to get them updated.
PunkBuster
Remember PunkBuster? It’s one of the original anti-cheat solutions, from way back in 2000. The now-classic Return to Castle Wolfenstein was the first game to support PunkBuster to prevent cheating. It’s not the latest or greatest, but PunkBuster is still running on a bunch of game servers even today. [Daniel Prizmant] and [Mauricio Sandt] decided to do a deep dive project on PunkBuster, and happened to find an arbitrary file-write vulnerability that could easily compromise a PB-enabled server.
One of the functions of PunkBuster is a remote screenshot capture. If a server admin thinks a player is behaving strangely, a screenshot request is sent. I assume this targets so-called wallhack cheats, which make textures transparent so the player can see through walls. The problem is that the server logic that handles the incoming image has a loophole. If the filename ends in .png as expected, some traversal attack checks are done, and the png file is saved to the server. However, if the incoming file isn’t a png, no traversal detection is done, and the file is naively written to disk. This weakness, combined with the stateless nature of screenshot requests, means that any connected client can write any file to any location on the server at any time. To their credit, Even Balance, the creators of PunkBuster, quickly acknowledged the issue and have released an update to fix it.
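To make the loophole concrete, here’s an illustrative Python sketch of the flawed logic as described, along with one way to close it. This is not PunkBuster’s actual code; the paths and function names are invented.

```python
# Illustrative only: the traversal check is applied to ".png" names, while
# anything else is written wherever its name points, enabling traversal.
import os

UPLOAD_DIR = "/srv/pb/screenshots"

def save_screenshot_vulnerable(filename, data):
    if filename.endswith(".png"):
        if ".." in filename or os.path.isabs(filename):
            raise ValueError("rejected")          # check runs for png names only
    with open(os.path.join(UPLOAD_DIR, filename), "wb") as f:
        f.write(data)                             # e.g. "../../etc/cron.d/evil"

def save_screenshot_fixed(filename, data):
    # Resolve the final path and refuse anything outside the upload directory.
    target = os.path.realpath(os.path.join(UPLOAD_DIR, filename))
    if not target.startswith(os.path.realpath(UPLOAD_DIR) + os.sep):
        raise ValueError("rejected")
    with open(target, "wb") as f:
        f.write(data)
```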