This Week In Security: Pacman, Hertzbleed, And The Death Of Internet Explorer

There’s not one, but two side-channel attacks to talk about this week. Up first is Pacman, a bypass for ARM’s Pointer Authentication Code. PAC is a protection built into certain ARM processors, where a cryptographic signature must be set correctly whenever a protected pointer is updated. If that signature doesn’t check out, the program simply crashes. The idea is that most exploits use pointer manipulation to achieve code execution, and correctly setting the PAC requires an explicit instruction call. The PAC itself is stored in the unused upper bits of the pointer. The AArch64 architecture uses 64-bit values for addressing, but the address space is much smaller than 64 bits, usually 53 bits or less, which leaves 11 bits for the PAC value. Keep in mind that the application doesn’t hold the keys and doesn’t calculate this value itself. And while 11 bits may not seem like enough to make this secure, every failed attempt crashes the program, and every application restart regenerates the keys.
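If you want to see the mechanism itself, here’s a minimal sketch of the sign-then-authenticate flow, assuming Clang’s <ptrauth.h> intrinsics on an arm64e target (think M1); on other hardware this isn’t meaningful.

```c
/* Minimal PAC sketch: sign a data pointer, authenticate it before use,
 * and show that tampering without re-signing ends in a crash.
 * Assumes Clang with <ptrauth.h> on an arm64e (PAC-enabled) target. */
#include <ptrauth.h>
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int secret = 42;
    int *p = &secret;

    /* Sign the pointer: the PAC lands in the otherwise-unused upper bits. */
    int *signed_p = ptrauth_sign_unauthenticated(p, ptrauth_key_asda, 0);

    /* Authenticate with the same key and discriminator: the original
     * address comes back and the load works. */
    int *ok = ptrauth_auth_data(signed_p, ptrauth_key_asda, 0);
    printf("authenticated load: %d\n", *ok);

    /* Flip a single bit, as an exploit fiddling with the pointer might.
     * The PAC no longer matches, so authentication yields a poisoned
     * address and the dereference is expected to fault. */
    int *tampered = (int *)((uintptr_t)signed_p ^ 1);
    int *bad = ptrauth_auth_data(tampered, ptrauth_key_asda, 0);
    printf("tampered load: %d\n", *bad);   /* should never be reached */
    return 0;
}
```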

What Pacman introduces is an oracle, a method to gain insight into data the attacker shouldn’t be able to see. In this case, the oracle works via speculative execution, very much in the vein of Meltdown and Spectre. The trick is to attempt a protected pointer dereference speculatively, and then observe the resulting change in system state. You may notice that this requires an attacker to already be running code on the target system in order to use the PAC oracle technique. Pacman is not a Remote Code Execution flaw, nor is it useful in gaining RCE.

One more important note is that an application has to have PAC support compiled in to benefit from the protection. The platform that has made wide use of PAC is macOS, as it’s a feature baked into Apple’s M1 processors. The attack chain would likely start with a remote execution bug in an application missing PAC support. Once a foothold is established in unprivileged userspace, Pacman would be used as part of an exploit against the kernel. See the PDF paper for all the details.

Continue reading “This Week In Security: Pacman, Hertzbleed, And The Death Of Internet Explorer”

This Week In Security: For The Horde, Feature Not A Bug, And Confluence

If you roll way back through the history of open source webmail projects, you’ll find Horde, a groupware web application. First released in 1998 on Freshmeat, it gained some notoriety in early 2012 when it was discovered that the 3.0 release had been tampered with, and packages containing a backdoor had been shipped for three months. While this time around it isn’t an intentional backdoor, there is a very serious problem in the Horde webmail interface. Or more accurately, a pair of problems. The most serious is CVE-2022-30287, an RCE bug allowing an authenticated user to trigger code execution on the connected server.

The vulnerable element is the Turba address book module, which uses a PHP factory method to access a specific address book. The create() method has an interesting bit of code that first checks the initialization value. If it’s a string, that value is understood as the name of the local address book to access. However, if the factory is initialized with an array, any of the address book drivers can be used, including the IMSP driver. IMSP fetches serialized data from remote servers and deserializes it. And yes, PHP can have deserialization bugs, and this one runs code on the host.

But it’s not that bad, it’s only authenticated users, right? That would be bad enough, but the second bug is a Cross-Site Request Forgery (CSRF), triggered by viewing an email. So on a vulnerable Horde server, any user viewing a malicious message would trigger RCE on the server. Oof. So let’s talk fixes. There is a new version of the Turba module that seems to fix the bugs, but it’s not clear that the Horde suite itself has pushed an update that includes it. So you may be on your own. As is pointed out on the Sonar Blog, where the vulnerability was discovered, Horde itself seems to be essentially unmaintained at this point. Maybe time to consider migrating to a newer platform.
Continue reading “This Week In Security: For The Horde, Feature Not A Bug, And Confluence”

STEM Award Goes To Accessible 3D Printing Project

When you are a 15-year-old and you see a disabled student drop the contents of their lunch tray while walking to a table, what do you do? If you are [Adaline Hamlin], you design a 3D printed attachment for the trays to stop it from happening again.

The work was part of “Genius Hour” where [Hamlin’s] teacher encouraged students to find things that could be created to benefit others. An initial prototype used straws to form stops to fit plates, cups, and whatever else fit on the tray. [Zach Lance], a senior at the school’s 3D printing club, helped produce the actual 3D printed pieces.

Continue reading “STEM Award Goes To Accessible 3D Printing Project”

The World’s Most Expensive 3D Printers

How much would you pay for a 3D printer? Granted, when we started, a decent printer might run over $1,000, but the cost has come way down. Unless, of course, you go pro. We were disappointed that this [All3DP] post didn’t include prices, but we noticed a trend: if your 3D printer has stairs, it is probably a big purchase. According to the tag line on the post, the printers are all north of $500,000.

Expensive printers usually have unique technology, higher degrees of automation, large capacity, or some combination of those, along with a few other factors. At least two of the printers mentioned had stairs to reach the top parts of the machine. And the Black Buffalo — a cement printer — uses a gantry that looks like it is part of a light show at a concert. It is scalable, but apparently can go up to three stories tall!

Continue reading “The World’s Most Expensive 3D Printers”

The Ethics Of When Machine Learning Gets Weird: Deadbots

Everyone knows what a chatbot is, but how about a deadbot? A deadbot is a chatbot whose training data — that which shapes how and what it communicates — is data based on a deceased person. Now let’s consider the case of a fellow named Joshua Barbeau, who created a chatbot to simulate conversation with his deceased fiancee. Add to this the fact that OpenAI, providers of the GPT-3 API that ultimately powered the project, had a problem with this as their terms explicitly forbid use of their API for (among other things) “amorous” purposes.

[Sara Suárez-Gonzalo], a postdoctoral researcher, observed that this story’s facts were getting covered well enough, but nobody was looking at it from any other perspective. We all certainly have ideas about what flavor of right or wrong saturates the different elements of the case, but can we explain exactly why it would be either good or bad to develop a deadbot?

That’s precisely what [Sara] set out to do. Her writeup is a fascinating and nuanced read that provides concrete guidance on the topic. Is harm possible? How does consent figure into something like this? Who takes responsibility for bad outcomes? If you’re at all interested in these kinds of questions, take the time to check out her article.

[Sara] makes the case that creating a deadbot could be done ethically, under certain conditions. Briefly, the key points are that the person being mimicked and the person developing and interacting with the deadbot should both have given their consent, complete with as detailed a description as possible of the scope, design, and intended uses of the system. (Such a statement is important because machine learning in general changes rapidly. What if the system or its capabilities someday no longer resemble what one originally imagined?) Responsibility for any potential negative outcomes should be shared by those who develop the system and those who profit from it.

[Sara] points out that this case is a perfect example of why the ethics of machine learning really do matter, and without attention being paid to such things, we can expect awkward problems to continue to crop up.

That’s No Asteroid…Oh, Actually It Is

How important is it to identify killer asteroids before they strike your planet? Ask any dinosaur. Oh, wait… Granted, you also need a way to redirect them, but interest in finding them has picked up lately, including a new privately funded program called the Asteroid Institute.

Using an open-source cloud platform known as ADAM — Asteroid Discovery Analysis and Mapping — the program, affiliated with the B612 Foundation along with others including the University of Washington, has already discovered 104 new asteroids and plotted their orbits.

What’s interesting is that the Institute doesn’t acquire any images itself. Instead, it uses new techniques to search through existing optical records to identify previously unnoticed asteroids and compute their trajectories.

You have to wonder how many other data sets are floating around that hold unknown discoveries waiting for the right algorithm and computing power. Of course, once you find the next extinction asteroid, you have to decide what to do about it. Laser? Bomb? A gentle push at a distance? Or hope for an alien obelisk to produce a deflector ray? How would you do it?

NASA is experimenting with moving asteroids. If you want to find some on your own, you might want to check out the atlas of existing ones.

Continue reading “That’s No Asteroid…Oh, Actually It Is”

Optimizing Linux Pipes

In CPU design, there is Amdahl’s law. Simply put, it means that if some part of a process accounts for 10% of your execution time, optimizing it can’t improve the total by more than 10%. Common sense, really, but it illustrates the importance of knowing how fast or slow various parts of your system are. So how fast are Linux pipes? That’s a good question, and one that [Mazzo] sets out to answer.
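For reference, the usual form of the law: if the piece you are optimizing accounts for a fraction p of the runtime and you speed it up by a factor s, the overall speedup is

\[
S = \frac{1}{(1 - p) + \frac{p}{s}}
\]

With p = 0.10, even letting s go to infinity caps S at 1/0.9, roughly 1.11, which is another way of saying you can shave at most that 10% off the total runtime.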

The inspiration was a highly-optimized fizzbuzz program that clocked in at over 36 GB/s on his laptop. Is that a common speed? Nope. A simple program using pipes on the same machine turned in not quite 4 GB/s. What accounts for the difference?
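To get a feel for the naive baseline, here is a minimal sketch (not [Mazzo]’s benchmark) that just shovels bytes through an ordinary Linux pipe between a parent and a forked child, then reports the throughput it sees:

```c
/* Naive pipe throughput sketch: the child drains the pipe while the
 * parent writes 4 GiB in 256 KiB chunks and reports GB/s.
 * Linux/POSIX only; build with: cc -O2 pipe_bench.c -o pipe_bench */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

#define CHUNK (256 * 1024)
#define TOTAL (4ULL * 1024 * 1024 * 1024)

int main(void) {
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {                       /* child: read and discard */
        close(fds[1]);
        static char sink[CHUNK];
        while (read(fds[0], sink, sizeof sink) > 0)
            ;
        _exit(0);
    }

    close(fds[0]);
    static char buf[CHUNK];
    memset(buf, 'x', sizeof buf);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    unsigned long long sent = 0;
    while (sent < TOTAL) {                /* parent: write until done */
        ssize_t n = write(fds[1], buf, sizeof buf);
        if (n < 0) { perror("write"); return 1; }
        sent += (unsigned long long)n;
    }
    close(fds[1]);                        /* EOF for the child */
    waitpid(pid, NULL, 0);

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.2f GB/s\n", sent / secs / 1e9);
    return 0;
}
```

Chances are the number this prints lands much closer to the plain-pipe figure than the fizzbuzz one, which is exactly the gap the article digs into.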

Continue reading “Optimizing Linux Pipes”