The Most Secure, Modern Computer Might Be A Mac

The Linux world is currently seeing an explosion in new users, thanks in large part to Microsoft turning its Windows operating system into the most intrusive piece of spyware in modern computing. For those who value privacy and security, Linux has long been the safe haven where there’s reasonable certainty that the operating system itself isn’t harvesting user data or otherwise snooping where it shouldn’t be. Yet even after solving the OS problem, a deeper issue remains: the hardware itself. Since around 2008, virtually every Intel and AMD processor has included coprocessors running closed-source code known as the Intel Management Engine (IME) or AMD Platform Security Processor (PSP).

M1 MacBook Air, now with more freedom

These components operate entirely outside the user’s and operating system’s control. They are given privileged access to memory, storage, and networking, and can retain that access even when the CPU is not running, creating systemic vulnerabilities that cannot be fully mitigated by software alone. One practical approach to minimizing exposure to opaque management subsystems like the IME or PSP is to choose platforms that avoid x86 hardware in the first place. Perhaps surprisingly, the ARM-based Apple M1 and M2 computers offer a compelling option, providing a more constrained and clearly defined trust model for Linux users who prioritize privacy and security.

Before getting into why Apple Silicon can be appealing for those with this concern, we first need to address the elephant in the room: Apple’s proprietary, closed-source operating system. Luckily, the Asahi Linux project has done most of the heavy lifting for those with certain Apple Silicon machines who want to go more open-source. In fact, Asahi is one of the easiest Linux installs to perform today, even when compared to beginner-friendly distributions like Mint or Fedora, provided you are using fully supported M1 or M2 machines rather than attempting an install on newer, less-supported models. The installer runs as a script within macOS, eliminating the need to image a USB stick. Once the script is executed, the user simply follows the prompts, restarts the computer, and boots into the new Linux environment. Privacy-conscious users may also want to take a few optional steps, such as verifying the installer’s checksum and encrypting the installation with LUKS, but neither step is too challenging for experienced users.
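
For the checksum step, something like the sketch below gets the idea across: download the installer to a file, hash it, and only run it if the hash matches a value published by the project. The URL and expected hash here are placeholders rather than official Asahi values, so treat this as a pattern, not a recipe.

```python
# Illustrative sketch only: fetch a script, verify its SHA-256, then run it.
# The URL and EXPECTED_SHA256 are placeholders, not official Asahi values.
import hashlib
import subprocess
import urllib.request

INSTALLER_URL = "https://example.org/asahi-installer.sh"  # placeholder
EXPECTED_SHA256 = "0" * 64  # replace with the checksum published by the project

def main() -> None:
    # Download to a local file instead of piping straight into a shell.
    local_path, _ = urllib.request.urlretrieve(INSTALLER_URL, "installer.sh")

    # Hash the file and compare against the published value.
    with open(local_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise SystemExit(f"Checksum mismatch: {digest}")

    # Only execute once the checksum has been verified.
    subprocess.run(["sh", local_path], check=True)

if __name__ == "__main__":
    main()
```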

Black Boxes

Changing the operating system on modern computers is the easy part, though. The hard part is determining exactly how much trust should be placed in the underlying hardware and firmware of any given system, and then deciding what to do to make improvements. This is where Apple Silicon starts to make a compelling case compared to modern x86 machines. Rather than consolidating a wide range of low-level functionality into a highly privileged black box like the IME or PSP, Apple splits these responsibilities more narrowly, with components like the Secure Enclave focusing on specific security functions instead of being given broad system access.

Like many modern systems, Apple computers include a dedicated security coprocessor alongside the main CPU, known as the Secure Enclave Processor (SEP). It runs a minimal, hardened operating system called sepOS and is isolated from the rest of the system. Its primary roles include securely storing encryption keys, handling sensitive authentication data, and performing cryptographic operations. This separation helps ensure that even if the main operating system is compromised, secrets managed by the SEP remain protected.

The Chain of Trust

To boot an Apple Silicon computer, a “chain of trust” is followed in a series of steps, each of which verifies the next step before handing over control. This is outlined in more detail in Apple’s documentation, but it starts with an immutable boot ROM embedded in the system-on-chip during manufacturing. The boot ROM verifies the early boot stages, including the low-level bootloader and iBoot, which in turn authenticate and verify the operating system kernel and system image before completing the boot process. If any of these verification steps fail, the system halts booting to prevent unauthorized or compromised code from executing.
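
To make the structure concrete, here’s a toy model of a verify-before-handoff boot chain. It checks plain SHA-256 hashes where Apple’s real chain uses cryptographic signatures rooted in the boot ROM, so the stage names and hashes below are purely illustrative.

```python
# Toy model of a chain of trust: each stage is verified before control passes to it.
# Real secure boot uses signatures anchored in hardware, not a hash table like this.
import hashlib

# Hypothetical "known good" measurements carried by the verifying stage.
TRUSTED_HASHES = {
    "llb": "placeholder-hash-1",     # low-level bootloader
    "iboot": "placeholder-hash-2",   # iBoot
    "kernel": "placeholder-hash-3",  # OS kernel and system image
}

def verify_stage(name: str, image: bytes) -> None:
    """Halt the boot if a stage doesn't match its expected measurement."""
    digest = hashlib.sha256(image).hexdigest()
    if digest != TRUSTED_HASHES[name]:
        raise SystemExit(f"Boot halted: {name} failed verification")

def boot(stages: dict[str, bytes]) -> None:
    # The boot ROM verifies the first stage; each stage then verifies the next
    # before handing over control.
    for name in ("llb", "iboot", "kernel"):
        verify_stage(name, stages[name])
        # ...control would jump into the verified image here...
```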

Perhaps obvious at this point is that Apple doesn’t sign Asahi Linux images. But rather than allowing unrestricted execution like many PCs, or fully locking down the device like a smartphone, Apple’s approach takes a middle way. They rely on another critical piece of “security hardware” required to authorize that third-party OS: a human user. The Asahi Linux documentation discusses this in depth, but Apple’s secure boot system allows the owner of the computer to explicitly authorize additional operating systems by creating a custom boot policy within the user-approved trust chain. In practice, this means that the integrity of the boot process is still enforced, but the user ultimately decides what is trusted. If a boot component is modified outside of this trust chain, the system will refuse to execute it. In contrast to this system, where secure boot is enforced by default and only relaxed through explicit user action, x86 systems can treat these protections as optional. A motivated x86 user can achieve a comparable level of security, but they must assemble and maintain it themselves, as well as figure it out in the first place.

Reducing the Attack Surface

The limited scope of Apple’s Secure Enclave gives it a much smaller attack surface compared to something like the Intel Management Engine. As mentioned before, the IME combines a wider range of functionality, including features designed for low-level remote system management. This broader scope increases its complexity and, by extension, its attack surface, which has led to several high-profile vulnerabilities. Apple’s Secure Enclave, by contrast, is designed with a much narrower focus. That’s not to say it’s a perfect, invulnerable system, since it’s also a closed-source black box, but its limited responsibilities inherently reduce that attack surface.

It’s also worth mentioning that there are a few other options for those who insist on x86 hardware, or who refuse to extend Apple even a minimal amount of trust, but who still consider the IME and its equivalents unacceptable security risks. Some hardware manufacturers like NovaCustom and even Dell have given users the option of disabling the IME (although this doesn’t remove it entirely), and some eighth- and ninth-generation Intel machines can have their management engines partially disabled by the user as well. In fact, these are the computers that my own servers are based on for this reason alone. Going even further, it is possible to get a 2018-era ThinkPad to run the open-source libreboot firmware. However, libreboot installations can be extremely cumbersome, and even then you’ll be left with a computer that lacks the performance-per-watt and GPU capabilities of even the lowest-tier M1 machines. In my opinion, this compromise of placing a kernel of trust in Apple is the lesser evil for most people in most situations, at least until libreboot supports more modern machines and/or its installation process is streamlined.

I’ll also note here that Apple is far from a perfect company. Their walled garden approach is inherently anti-consumer, and they’ve rightly taken some criticism for inflating hardware costs, deliberately making their computers difficult to repair, enforcing arbitrary divisions between different classes of products to encourage users to buy more devices, and maintaining a monopolistic and increasingly toxic app store.

But buying an M1 or M2 machine on the used market won’t directly give Apple any money, and beyond running the Asahi installer script during the initial installation, it doesn’t require interacting with Apple software or the Apple ecosystem in any way. I’ve argued in the past that older Apple computers make excellent Linux machines for these reasons as well, and since the M1 and M2 machines eliminate the IME risk that those older Intel-based models carried, they’re an even better proposition, even without considering the massive performance gains possible.

Ultimately, though, the best choice of hardware depends on one’s threat model and priorities. If the goal is to minimize exposure to IME/PSP-level risks while retaining semi-modern performance, an M1/M2 Mac with Asahi Linux is one of the best options available today. But if fully open hardware is non-negotiable, you’ll need to accept older or less powerful machines… for now.

From Zip To Nought: The Rise And Fall Of Iomega

If you were anywhere near a computer in the mid-to-late 1990s, you almost certainly encountered a Zip drive. That distinctive purple peripheral, with its satisfying clunk as you slotted in a cartridge, was as much a fixture of the era as beige tower cases and CRT monitors. Iomega, the company behind it, went from an obscure Utah outfit to a multi-billion-dollar darling of Wall Street in the span of about two years. And then, almost as quickly, it all fell apart.

The story of Iomega is one of genuine engineering innovation and the fickle nature of consumer technology. As with so many other juggernauts of its era, Iomega was eventually brought down by a new technology that simply wasn’t practical to counter.

The House That Bernoulli Built

Iomega was founded in Utah in 1980 by Jerome Paul Johnson, David Bailey, and David Norton. The company soon developed a novel approach to removable magnetic storage based on the Bernoulli effect. The Bernoulli Box arrived in 1982: a drive relying on PET film disks spun at 1500 RPM inside a rigid, removable cartridge. The airflow generated by the spinning disk pulled the media down toward the read/write head thanks to the eponymous Bernoulli effect. While spinning, the disk would float a mere micron above the head surface on a cushion of air. If the power cut out or the drive otherwise failed, the disk simply floated away from the head rather than crashing into it—a boon over contemporary hard drives, for which head crashes were a real risk. The Bernoulli Box made them essentially impossible. Continue reading “From Zip To Nought: The Rise And Fall Of Iomega”

The Zero-Power Flight Computer

In the early days of aviation, pilots or their navigators used a plethora of tools to solve common navigation and piloting problems. There was definitely a need for some kind of computing aid that could replace slide rules, tables, and tedious dead-reckoning computations. This would become even more important during World War II, when there was a massive push to quickly train young men to be pilots.

The same, but different. A Pickett slide rule (top) and an E6B slide rule (bottom). (Own Work).

Today, we’d whip up some sort of computing device, but in the 1930s, computers weren’t anything you’d cram into a plane, even if they’d had any. For example, the Mark 1 Fire Control Computer during WW2 was 3,000 pounds of gears and motors.

The computer was designed to answer flight questions like “How many pounds of fuel do I need for another hour of flying time?” or “How do I adjust my course if I have a particular crosswind?”
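
Those two questions boil down to a ratio and the classic wind triangle, which is exactly the kind of math the device mechanizes. Here’s a rough sketch of that arithmetic in code, with made-up numbers; the formulas are the standard wind-triangle relations, not anything specific to Dalton’s design.

```python
# The sort of math an E6B mechanizes: simple ratios and the wind triangle.
import math

def fuel_needed(burn_rate_pph: float, hours: float) -> float:
    """Pounds of fuel for a given flying time at a given burn rate."""
    return burn_rate_pph * hours

def wind_correction(course_deg: float, tas_kt: float,
                    wind_from_deg: float, wind_kt: float) -> tuple[float, float]:
    """Return (wind correction angle in degrees, ground speed in knots)."""
    beta = math.radians(wind_from_deg - course_deg)      # wind angle off the course
    wca = math.asin(wind_kt * math.sin(beta) / tas_kt)   # crab angle into the wind
    gs = tas_kt * math.cos(wca) - wind_kt * math.cos(beta)
    return math.degrees(wca), gs

# Made-up example: 100 lbs/hr for one more hour, and a 20 kt wind from
# 30 degrees right of course at 120 kt true airspeed.
print(fuel_needed(100, 1))              # 100 lbs
print(wind_correction(0, 120, 30, 20))  # about 4.8 degrees of crab, ~102 kt over the ground
```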

History

There was a rash of flight computers starting in the 1920s that were essentially specialized slide rules. The most popular one appeared in the late 1930s: Philip Dalton’s circular slide rule was cheap to produce and easy to use. As you’ll see, it is more than just an ordinary slide rule. Keep in mind, these were not computers in the sense we think of today. They were simple slide rules that easily did specialized math useful to pilots.

Dalton actually developed a number of computers. The Model B appeared in 1933, and refinements led to additional models, with the Mark VII proving particularly popular. Even Fred Noonan, Amelia Earhart’s navigator, used a Mark VII. Continue reading “The Zero-Power Flight Computer”

Artemis II Agenda Keeps Moon-Bound Crew Busy

With the launch of Artemis II from Cape Canaveral potentially just weeks away, NASA has been releasing a steady stream of information about the mission through their official site and social media channels to get the public excited about the agency’s long-awaited return to the Moon. While the slickly produced videos and artist renderings might get the most attention, even the most mundane details about a flight that will send humans around the far side of our nearest celestial neighbor for the first time since 1972 can be fascinating.

The Artemis II Moon Mission Daily Agenda is a perfect example. Released earlier this week via the NASA blog, the document seems to have been all but ignored by the mainstream media. But the day-by-day breakdown of the Artemis II mission contains several interesting entries about what the four crew members will be working on during the ten-day flight.

Of course, the exact details of the agenda are subject to change once the mission is underway. Some tasks could run longer than anticipated, experiments may not go as planned, and there’s no way to predict technical issues that may arise.

Conversely, the crew could end up breezing through some of the planned activities, freeing up time in the schedule. There’s simply no way of telling until it’s actually happening.

With the understanding that it’s all somewhat tentative, a look through the plan as it stands right now can give us an idea of the sort of highlights we can expect as we follow this historic mission down here on Earth.

Continue reading “Artemis II Agenda Keeps Moon-Bound Crew Busy”

The Rise And Fall Of Free Dial Up Internet

In the early days of the Internet, having a high-speed IP connection in your home or even a small business was, if not impossible, certainly a rarity. Connecting to a computer in those days required you to use your phone. Early modems used acoustic couplers, but by the time most people started trying to connect, modems that plugged into your phone jack were the norm.

The problem was: whose computer did you call? There were commercial dial-up services like DIALOG that offered things like database searches via modem, and the costs added up quickly. You had a fee for the phone line. Then you might have a per-minute charge for the call, especially if the computer was in another city. Then you had to pay the service provider itself, which could be very expensive.

Even before the consumer Internet, this wasn’t workable. Tymnet and Telenet were two services that had the answer. They maintained banks of modems practically everywhere. You dialed a local number, which was probably a “free” call included in your monthly bill, and then used a simple command to connect to a remote computer of your choice. There were other competitors, including CompuServe, which would become a major force in the fledgling consumer market.

While some local internet service providers (ISPs) had their own modem banks, the national ISPs that rose later were riding on one of several nationwide modem systems and paying by the minute for the privilege. Eventually, some ISPs reached the scale that made dedicated modem banks worthwhile. This made it easier to offer flat-rate pricing, and the presumed unlikelihood of everyone dialing in at once made it possible to oversubscribe any given number of modems.
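
As a back-of-the-envelope illustration of why that oversubscription works, consider a simple binomial model (our own invention for illustration, not how any particular ISP did its planning): if each subscriber is online independently only a small fraction of the time, the odds of having more callers than modems fall off quickly.

```python
# Toy binomial model of modem-bank oversubscription, with invented numbers.
from math import comb

def p_blocked(subscribers: int, modems: int, p_online: float) -> float:
    """Probability that more subscribers want to dial in than there are modems."""
    return sum(
        comb(subscribers, k) * p_online**k * (1 - p_online)**(subscribers - k)
        for k in range(modems + 1, subscribers + 1)
    )

# 1000 subscribers, each online 5% of the time, sharing 70 modems:
print(p_blocked(1000, 70, 0.05))  # well under 1%, despite ~14:1 oversubscription
```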

The Cost

Once consumer services like CompuServe, The Source, and AOL started operations, the cost was less, but still not inexpensive. Some early services charged higher rates during business hours, for example. There was also the cost of a phone line, and if you didn’t want to tie up your home phone, you needed a second line dedicated to the modem. It all added up.

By the late 1990s, a dial-up provider might cost you $25 a month or less, not counting your phone line. That’s about $60 in today’s money, just for reference. But the Internet was also booming as a place to sell advertising.

Continue reading “The Rise And Fall Of Free Dial Up Internet”

Preparing To Fire Up A 90-Year-Old Boiler After Half A Century

Continuing the restoration of the #1 Lancashire boiler at the Claymills Pumping Station in the UK, the volunteers are putting on the final touches after previously passing the boiler inspection. Although it may seem that things are basically ready to start laying down a fire after the boiler is proven to hold 120 PSI with all safeties fully operating, they first had to reassemble the surrounding brickwork, free up a seized damper shaft and give a lot of TLC to mechanisms that were brand new in the 1930s and last operated in 1971.

Removing the ashes from a Lancashire boiler. (Credit: Claymills pumping station, YouTube)

The damper shaft is part of the damper mechanism which controls doors that affect the burn rate, acting as a kind of throttle for the boilers. Unfortunately the shaft’s bearings had seized up completely, and no amount of heat and kinetic maintenance could loosen it up again. This forced them to pull it out and manufacture a replacement, but did provide a good look at how it’s put together. The original dial indicator was salvaged, along with some other bits that were still good.

Next up was fitting the cast-iron ash boxes that sit below the boiler, from which ash can be scraped out and deposited into wheelbarrows. The automatic sprinkler stokers are fitted above these, and the video gives a good look at their mechanism. The operator is given a lot of control over how much coal is fed into the boiler, as part of this early 20th-century automation.

The missing furnace doors on the #1 boiler were replaced with replicas based on the ones from the other boilers, and some piping around the boiler was refurbished. Even after all that work, it’ll still take a few weeks and a lot more work to fully reassemble the boiler, showing just how complex these systems are. With some luck it’ll fire right back up after fifty years of slumbering and decades of suffering the elements.

Continue reading “Preparing To Fire Up A 90-Year-Old Boiler After Half A Century”

Hacking The System In A Moral Panic: We Need To Talk

It seems that for as long as there have been readily available 3D printers, there have been moral panics about their being used to print firearms. The latest surrounds a Washington State Legislature bill, HB2320, which criminalises the printing of unregistered guns. Perhaps most controversially, it seeks to impose a requirement on printers sold in the state to phone home and check a database of known firearms, and refuse to print them when asked.

This has drawn a wave of protest from the 3D printing community, and seems from where we are sitting to be a spectacularly ill-conceived piece of legislation. It’s simply not clear how it could be implemented, given the way 3D printers and slicing software actually work.

Oddly This Isn’t About Firearms

The root of the problem with this bill and others like it lies in ignorance, and a misplaced belief in the power of legislation. Firearms are just the example here, but we can think of others and we’re sure you can too. Legislators aren’t stupid, but by and large they don’t come from technology or engineering backgrounds.

Meanwhile, they have voters to keep happy, and so when a moral panic like this one arises, their priority is to be seen to be doing something about it. They dream up a technically infeasible solution, push to get it written into law, and their job is done. Let the engineers figure out how to make it work. Continue reading “Hacking The System In A Moral Panic: We Need To Talk”