Mini RC Helicopter Becomes Even Smaller Submarine

We often think of submarines as fairly complex pieces of machinery, and for good reason. Keeping the electronics watertight can naturally be quite difficult, and maintaining neutral buoyancy while traveling underwater is a considerable engineering challenge. But it turns out that if you’re willing to skip out on those fairly key elements of submarine design, the whole thing suddenly becomes a lot easier. Big surprise, right?

That’s precisely how [Peter Sripol] approached his latest project, which he’s claiming is the world’s smallest remote control submarine. We’re not qualified to say if that’s true or not, but we were certainly interested in seeing how he built the diminutive submersible. Thanks to the fact that it started life as one of those cheap infrared helicopters, it’s actually a fairly approachable project if you’re looking to make one yourself.

The larger prototype version is also very cool.

After testing that the IR communication would actually work as expected underwater, [Peter] liberated the motors and electronics from the helicopter. The motors’ wires were shortened, and the receiver PCB got a slathering of epoxy to try to keep the worst of the water out, but otherwise they were unmodified.

If you’re wondering how the ballast system works, there isn’t one. The 3D printed body angles the motors slightly downwards, so when the submarine is moving forward it’s also being pulled deeper into the water. There aren’t any control surfaces either; differential thrust between the two motors is used to turn left and right. This doesn’t make for a particularly nimble craft, but in the video after the break it certainly looks like they’re having fun with it.
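[Peter] presumably gets that mixing for free from the helicopter’s stock electronics, which already blend throttle and turn commands into two motor speeds. For anyone rolling their own controller, the idea is simple enough; here’s a hypothetical Python sketch of the general technique (the function and value ranges are illustrative, not anything from [Peter]’s build):

```python
def mix_differential_thrust(throttle, yaw):
    """Turn a forward command and a turn command into two motor speeds.

    throttle and yaw are assumed to be in the range -1.0 to 1.0.
    Returns (left, right) motor commands, clamped to the same range.
    """
    clamp = lambda x: max(-1.0, min(1.0, x))
    left = clamp(throttle + yaw)   # turning right speeds up the left motor...
    right = clamp(throttle - yaw)  # ...and slows down the right one
    return left, right

# Full speed ahead with a gentle right turn:
print(mix_differential_thrust(1.0, 0.3))  # (1.0, 0.7)
```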

Looking for a slightly more complex 3D printed submersible vehicle? Don’t worry, we’ve got you covered.


This Week In Security: Let’s Encrypt Revocation, Ghostcat, And The RIDLer

Let’s Encrypt recently celebrated their one billionth certificate. That’s over 190 million websites currently secured, and thirteen full-time staff. The annual budget for Let’s Encrypt is an eye-watering $3.3+ million, covered by sponsors like Mozilla, Google, Facebook, and the EFF.

A cynic might ask if we need to rewind the counter by the three million certificates Let’s Encrypt recently announced they are revoking as a result of a temporary security bug. That bug was in the handling of the Certificate Authority Authorization (CAA) security extension. CAA is a relatively recent DNS-based mechanism: a domain owner opts in by publishing a CAA record in their DNS, naming the particular CA that is authorized to issue certificates for their domain. When a CA issues a new certificate, it is required to check for a CAA record, and it must refuse to issue the certificate if a different authority is listed there.

The CAA specification sets eight hours as the maximum time a CAA check can be cached. Let’s Encrypt uses a similar automated process to determine domain ownership, and considers those results to be valid for 30 days. That leaves a corner case where the Let’s Encrypt domain validation is still valid, but the CAA check needs to be re-performed. For certificates that cover multiple domains, that check needs to be run for each domain before the certificate can be issued. Rather than validating each domain’s CAA record, the Let’s Encrypt validation system was checking one of those domain names multiple times. The problem was caught and fixed on the 28th.
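To give a concrete sense of what that per-name check involves, here’s a deliberately simplified sketch using the third-party dnspython library. It skips parts of the real algorithm (such as climbing the DNS tree to parent domains) and is not how Let’s Encrypt’s actual CA software does it, but it shows the check that has to run once for every domain on a certificate:

```python
# Simplified illustration of a per-domain CAA check (requires dnspython).
import dns.resolver

def caa_allows_issuance(domain: str, ca: str = "letsencrypt.org") -> bool:
    """Return True if the domain's CAA records permit `ca` to issue a cert."""
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return True  # no CAA record published: any CA may issue

    issuers = [r for r in answers if r.tag == b"issue"]
    if not issuers:
        return True
    # If "issue" records exist, at least one must name this CA.
    return any(r.value.decode().split(";")[0].strip() == ca for r in issuers)

# On a multi-domain certificate, every name needs its own check; the bug
# was re-checking a single name instead of iterating over all of them.
for name in ("example.com", "www.example.com", "shop.example.com"):
    print(name, caa_allows_issuance(name))
```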

The original announcement gave administrators 36 hours to manually renew their affected certificates. While just over half of the three million target certificates have been revoked, an additional grace period has been extended for the more than one million certs that are still in use. Just to be clear, there aren’t over a million bad certificates in the wild; in fact, only 445 certificates were minted that should have been blocked by a proper CAA check.

Ghostcat

Apache Tomcat, the open source Java-based HTTP server, has had a vulnerability for something like 13 years. AJP, the Apache JServ Protocol, is a binary protocol designed for server-to-server communication. An example use case would be an Apache HTTP server running on the same host as Tomcat. Apache would serve static files, and use AJP to proxy dynamic requests to the Tomcat server.

Ghostcat, CVE-2020-1938, is essentially a default configuration issue. AJP was never designed to be exposed to untrusted clients, but the default Tomcat configuration enables the AJP connector and binds it to all interfaces. An attacker can craft an AJP request that lets them read the raw contents of webapp files. This means database credentials, configuration files, and more. If the application also allows file uploads, and those uploads land somewhere inside the webapp directory, the same trick can be used to execute the uploaded file, turning this into a full remote code execution exploit chain.

The official recommendation is to disable AJP if you’re not using it, or bind it to localhost if you must use it. At this point, it’s negligence to leave ports exposed to the internet that aren’t being used.
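For reference, the connector in question lives in Tomcat’s conf/server.xml, and if AJP isn’t used at all, simply deleting or commenting out that line is the cleanest fix. The exact attribute names vary a bit between Tomcat versions, but an affected default and a locked-down version look roughly like this:

```xml
<!-- Vulnerable default: AJP listening on port 8009 on every interface -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

<!-- If AJP is actually needed: bind it to loopback only, and on patched
     releases require a shared secret from the proxying web server -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
           address="127.0.0.1"
           secretRequired="true" secret="pick-something-long-and-random" />
```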

Have I Been Pwned

You may remember our coverage of [Troy Hunt] over at haveibeenpwned.com. He had made the decision to sell HIBP as a result of the strain of running the project solo for years. In a recent blog post, [Troy] reveals the one thing more exhausting than running HIBP: trying to sell it. After a potential buyer was chosen, and the deal was nearly sealed, that buyer went through a restructuring. At the end of the day, the purchase no longer made sense for either party, and they both walked away, leaving HIBP independent. It sounds like the process was stressful enough that HIBP will remain an independent entity for the foreseeable future.

You Were Warned

Remember the Microsoft Exchange vulnerability from last week? Attack tools have been written, and the internet-wide scans have begun.

Ridl Me This, Chrome

We’ve seen an abundance of speculative execution vulnerabilities over the last couple of years. While these problems are technically interesting, there has been a bit of a shortage of real-world attacks that leverage them. Well, thanks to a post over at Google’s Project Zero, that dearth has come to an end. The attack described there is a sandbox escape: an attacker first needs code execution inside a sandboxed renderer, for instance via a vulnerability in the Chrome JS engine, and then uses the speculative execution leak to break out of the sandbox.

To understand how Ridl plays into this picture, we have to talk about how the Chrome sandbox works. Each renderer process runs with essentially zero system privileges, and sends requests through Mojo, Chrome’s inter-process communication system. Mojo identifies its IPC endpoints with 128-bit port names, and knowing a name is what allows messages to be sent to that endpoint, so the names both identify and secure the connections.
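In other words, the port names behave like unguessable capability tokens: whoever knows the name can talk to the port. As a rough back-of-the-envelope illustration (this is not Chrome code, just the underlying idea):

```python
import secrets

# A Mojo-style port name is essentially 128 random bits.
port_name = secrets.token_bytes(16)
print(port_name.hex())

# With 2**128 possible names, guessing one over IPC is hopeless, so a
# compromised renderer needs another way to learn a privileged port's
# name. Leaking it through a speculative execution side channel (Ridl)
# is exactly that other way.
print(2 ** 128)  # 340282366920938463463374607431768211456
```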

Once an attacker has taken over the unprivileged sandbox process, the next step is to figure out the port name of an un-sandboxed Mojo port. The trick is to get that privileged process to access its Mojo port name repeatedly, and then capture an access using Ridl. Once the port is known, the attacker has essentially escaped the sandbox.

The whole write-up is an interesting read, and serves as a great example of the sorts of attacks enabled by speculative execution leaks.

Farewell SETI@Home

It was about 21 years ago that Berkeley started one of the first projects that would allow you to donate idle computing time to scientific research. In particular, your computer could help crunch data from radio telescopes looking for extraterrestrial life. Want to help? You may be too late. The project is going into hibernation while they focus on analyzing data already processed.

According to the home page:

We’re doing this for two reasons:

1) Scientifically, we’re at the point of diminishing returns; basically, we’ve analyzed all the data we need for now.

2) It’s a lot of work for us to manage the distributed processing of data. We need to focus on completing the back-end analysis of the results we already have, and writing this up in a scientific journal paper.


Dumpster Finds Combined Into 4K Desktop Monitor

Dumpster diving is a time-honored tradition in the hacking community. You can find all sorts of interesting hardware in the trash, and sometimes it’s even fully functional. But even the broken gadgets are worth taking back to your lair to strip for parts. If you’re as lucky as [Jamz], you might be able to mash a few devices together and turn them into something usable.

In this case, [Jamz] scored an LG 27UK650 monitor with a cracked display and a Dell OptiPlex 7440 “All-in-One” computer that was DOA. Separately, these two pieces of gear were little more than a pile of spare parts waiting to be liberated. But if the control board could be salvaged from the monitor, and the working LCD pulled from the Dell…

After taking everything apart, [Jamz] made a frame for this new Frankenstein monitor using pieces of aluminum channel from the hardware store and 3D printed side panels. With the Dell LCD mounted in the skeletal frame, the control board from the LG monitor was bolted to the back and wired in. Finally the center section of the LG monitor’s back panel was cut out and mounted to the new hybrid display with a 3D printed frame.

Admittedly, these were some pretty solid finds as far as trash goes. You won’t always be so lucky. But if you can keep an open mind, the curb is littered with possibilities. How about some impressive home lighting that started life as a cracked flat screen TV?