Getting A Handle On Meltdown Update Impact, Stay Tuned For Spectre

When news broke about Meltdown and Spectre ahead of the original disclosure plan, word spread like wildfire and it was hard to separate fact from speculation. One commonly repeated claim was that the fix would slow down computers by up to 30% for some workloads. A report released by Microsoft today says that “average users” with post-2015 hardware won’t notice the difference. Without giving specific numbers, Microsoft expects folks running pre-2015 hardware to experience noticeable slowdowns with the patches applied.

The impact from Meltdown updates is easier to categorize: they slow down the transition from a user’s application-level code to system-level kernel code. The good news: such transitions were already a performance killjoy before Meltdown came along, so an extensive collection of tools (design patterns, libraries, and APIs) already exists to help software developers reduce the number of user-kernel transitions.
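To make that concrete, here’s a minimal sketch in TypeScript for Node.js of why batching matters; the file name and sizes are arbitrary assumptions, and this is an illustration rather than production code. Every trip into the kernel now carries extra overhead, so one buffered write beats tens of thousands of tiny ones.

```typescript
// Minimal sketch of reducing user-to-kernel transitions by batching I/O.
// Illustrative only; "out.bin" and the sizes are arbitrary.
import { openSync, writeSync, closeSync } from "node:fs";

const fd = openSync("out.bin", "w");
const payload = Buffer.alloc(64 * 1024, 0xab); // 64 KiB of data

// Naive: one write() syscall per byte, i.e. tens of thousands of
// user-to-kernel transitions, each one pricier post-Meltdown.
for (const byte of payload) {
  writeSync(fd, Buffer.from([byte]));
}

// Batched: the same data in a single kernel transition.
writeSync(fd, payload);

closeSync(fd);
```

Buffered streams and scatter/gather I/O follow the same principle: do more work per trip across the user-kernel boundary.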

Performance-sensitive code that was already written to minimize kernel transitions will suffer very little from Meltdown updates. This includes most games and mainstream applications. The updates will have a greater impact on the minority of applications that frequently jump between kernel and user worlds. Antivirus software (with its own problems) has reasons to do so, and will probably end up causing most of the slowdowns seen by normal users.

Servers, with their extensive disk and networking IO — and thus kernel usage — are going to have a much worse time, even as seen through Microsoft’s rosy spectacles. So much so that Microsoft is recommending that admins “balance the security versus performance tradeoff for your environment”.

The impact from Spectre updates is harder to pin down. Speculative execution and caching are too important in modern CPUs to “just” turn off. The fixes will be more complex, and we’ll have to wait for them to roll out (bumps and all) before we have a better picture.

The effects might end up being negligible, as some tech titans currently claim, and unless you’re running a server farm that will probably match your experience. But even if they’re wrong, you’ll still be comfortably faster than an Intel 486 or a Raspberry Pi.

Do any of you have numbers yet?

[via The Verge]


Intel Rolls Out 49 Qubits

With a backdrop of security and stock-trading news swirling, Intel’s [Brian Krzanich] opened the 2018 Consumer Electronics Show with a keynote where he looked to future innovations. One of the bombshells: Tangle Lake, Intel’s 49-qubit superconducting quantum test chip. You can catch all of [Krzanich’s] keynote in replay, and there is a press release covering the details.

This puts Intel on the playing field with IBM, which claims a 50-qubit device, and Google, which plans a 49-qubit device of its own. Intel’s previous test chip handled only 17 qubits. The term qubit is short for “quantum bit”, and the number of qubits is significant because experts think that at around 49 or 50 qubits, quantum computers won’t be practical to simulate with conventional computers. At least until someone comes up with better algorithms. Keep in mind that — in theory — a quantum computer with 49 qubits can represent about 500 trillion states at one time. To put that in some apples-and-oranges perspective, your brain has fewer than 100 billion neurons.
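For the arithmetic behind that figure: an n-qubit register spans 2^n basis states, so a trivial sanity check puts 49 qubits at roughly 563 trillion.

```typescript
// An n-qubit register spans 2^n basis states.
const qubits = 49;
const states = 2 ** qubits;           // still within Number.MAX_SAFE_INTEGER
console.log(states.toLocaleString()); // "562,949,953,421,312", ~563 trillion
```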

Of course, the number of qubits isn’t the entire story. Error rates can make a larger number of qubits perform like fewer. Quantum computing is more statistical than conventional programming, so it is hard to draw parallels.

We’ve covered what quantum computing might mean for the future. If you want to experiment on a quantum computer yourself, IBM will let you play on a simulator and on real hardware. If nothing else, you might find the beginner’s guide informative.

Image credit: [Walden Kirsch]/Intel Corporation

Lowering JavaScript Timer Resolution Thwarts Meltdown And Spectre

The computer security vulnerabilities Meltdown and Spectre can infer protected information based on subtle differences in hardware behavior. It takes less time to access data that has been cached than data that needs to be retrieved from memory, and precisely measuring that time difference is a critical part of these attacks.

Our web browsers present a huge potential attack surface, as JavaScript is ubiquitous on the modern web. Executing JavaScript code necessarily exercises the processor cache, and a high-resolution timer is accessible via the browser performance API.
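To see why that combination matters, here’s a minimal sketch of the timing primitive, assuming nothing beyond the standard performance.now() API. A real attack needs careful cache eviction and statistics over many samples, so treat this as an illustration only.

```typescript
// Minimal sketch of cache-timing measurement with performance.now().
// Illustrative only: real attacks need eviction and many samples.
const data = new Uint8Array(64 * 1024 * 1024); // big enough to start uncached
let sink = 0;                                  // keeps reads from being optimized out

const t0 = performance.now();
sink += data[0];                               // cold read: likely misses the cache
const t1 = performance.now();
sink += data[0];                               // warm read: served from the CPU cache
const t2 = performance.now();

console.log(`cold: ${(t1 - t0).toFixed(6)} ms, warm: ${(t2 - t1).toFixed(6)} ms`, sink);
```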

Web browsers can’t change processor cache behavior, but they can take away malicious code’s ability to exploit it. Browser makers are intentionally degrading time-measurement capability in the API to make attacks more difficult. These changes are being rolled out for Google Chrome, Mozilla Firefox, Microsoft Edge, and Internet Explorer. Apple has announced that upcoming Safari updates will likely follow suit.

After these changes, the time stamp returned by performance.now will be less precise due to lower resolution. Some browsers are going a step further, degrading accuracy by adding random jitter. There will also be degradation or outright disabling of other features that can be used to infer data, such as SharedArrayBuffer.
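As a rough sketch of what such a mitigation could look like, coarsening plus jitter is only a few lines. The 20 µs granularity below is our assumption for illustration, not any vendor’s actual value or implementation.

```typescript
// Hypothetical sketch of a coarsened timer: quantize to 20 µs buckets,
// then fuzz within the bucket so repeated sampling can't recover precision.
const RESOLUTION_MS = 0.02; // 20 microseconds (assumed; varies by browser)

function coarseNow(): number {
  const raw = performance.now();
  const bucket = Math.floor(raw / RESOLUTION_MS) * RESOLUTION_MS;
  return bucket + Math.random() * RESOLUTION_MS; // random jitter
}
```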

These changes will have no impact on the vast majority of users. The performance API is used by developers to profile sluggish code; actual run speed is unaffected. Other features like SharedArrayBuffer are relatively new, and their absence will go largely unnoticed. Unfortunately, web developers will have a harder time tracking down slow code under these changes.

Browser makers are calling this a temporary measure for now, but we won’t be surprised if the changes become permanent. It is a relatively simple change that blunts the immediate impact of Meltdown/Spectre, and it would also mitigate yet-to-be-discovered timing attacks of the future. If browser makers offer a “debug mode” that restores high-precision timers, developers could activate it just for their performance-tuning work and everyone should be happy.

This is just one part of the shock wave Meltdown/Spectre has sent through the computer industry. We have broader coverage of the issue here.

Let’s Talk Intel, Meltdown, And Spectre

This week we’ve seen a tsunami of news stories about a vulnerability in Intel processors. We’re certain that by now you’ve heard of (and are maybe tired of hearing about) Meltdown and Spectre. However, as a Hackaday reader, you are likely the person who others turn to when they need to get the gist of news like this. Since this has bubbled up in watered-down versions to the highest levels of mass media, let’s take a look at what Meltdown and Spectre are, and also see what’s happening in the other two rings of this three-ring circus.

Meltdown and Spectre in a Nutshell

These two attacks are similar. Meltdown is specific to Intel processors and kernel fixes (basically workarounds implemented by operating systems) will result in a 5%-30% speed penalty depending on how the CPU is being used. Spectre is not limited to Intel, but also affects AMD and ARM processors and kernel fixes are not expected to come with a speed penalty.

Friend of Hackaday and security researcher extraordinaire Joe Fitz has written a superb layman’s explanation of these types of attacks. His “layman” may be pitched a little higher than most, but this is something you need to read.

The attacks exploit speculative execution. To boost speed, these processors keep a record of past branch behavior and use it to predict future branches, executing ahead of the answer. That speculative work can pull protected data into the cache before the processor finishes checking whether you have permission to access it. You don’t, so the data is never handed to you directly. The exploit instead plays a clever guessing game: by timing reads of memory you do have access to, it can tell which cache lines the speculative work touched. Iterate on this trick many, many times and you can reconstruct the restricted data.
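The final step of that guessing game is a cache-timing probe. The sketch below shows just that phase, assuming a hypothetical probe array that a speculative, out-of-bounds read has already indexed by the secret byte; on patched browsers and CPUs it recovers nothing.

```typescript
// Sketch of the cache-timing probe phase of a Spectre-style attack.
// Hypothetical and simplified: it assumes speculative execution has
// already touched probe[secret * PAGE], and it won't work on patched systems.
const PAGE = 4096;                        // one slot per possible byte value
const probe = new Uint8Array(256 * PAGE);
let sink = 0;                             // defeat dead-code elimination

function guessSecretByte(): number {
  let best = 0;
  let bestTime = Infinity;
  for (let value = 0; value < 256; value++) {
    const start = performance.now();
    sink += probe[value * PAGE];          // fast if this slot is already cached
    const elapsed = performance.now() - start;
    if (elapsed < bestTime) {
      bestTime = elapsed;
      best = value;                       // fastest slot = likely secret byte
    }
  }
  return best;
}
```

This is exactly why the degraded browser timers described above blunt the attack: without sub-microsecond resolution, the fast and slow reads become indistinguishable.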

For the most comprehensive info, you can read the PDF whitepapers on Meltdown and Spectre.

Update: Check out Alan Hightower’s explanation of the Meltdown exploit, left as a comment below. It does a good job of building a clearer understanding of how this works.

Frustration from Kernel Developers

These vulnerabilities are in silicon — they can’t be easily fixed with a microcode update, which is how CPU manufacturers usually work around silicon errata (although this appears to be an architectural flaw and not errata per se). An Intel “fix” would amount to a product recall. They’ve already said they won’t be doing a recall, but how would that work anyway? What’s the lead time on spinning up the fabs to replace all the Intel chips in use — yikes!

So the fixes fall on the operating systems at the kernel level. Intel should be (and probably is behind the scenes) bowing down to the kernel developers who are saving their bacon. It is understandably frustrating to have to spend time and resources patching these vulnerabilities, which displaces planned feature updates and improvements. Linus Torvalds has been throwing shade at Intel — anecdotal evidence of this frustration:

“I think somebody inside of Intel needs to really take a long hard look at their CPU’s, and actually admit that they have issues instead of writing PR blurbs that say that everything works as designed.”

That’s the tamest part of his message posted on the Linux Kernel Mailing List.

Stock Sales Kerfuffle is Just a Distraction

The first thing I did on hearing about these vulnerabilities on Tuesday was to check Intel’s stock price and I was surprised it hadn’t fallen much. In fact, peak to peak it’s only seen about an 8% drop this week and has recovered some from that low.

Of course, it came out that back in November Intel’s CEO Brian Krzanich sold off his Intel stock to the tune of $24 million, bringing him down to his contractual minimum of shares. He likely knew about Meltdown when arranging that sale. Resist the urge to flame on this decision. Whether it’s legal or not, hating on this guy is just a distraction.

What’s more interesting to me is this: Intel is too big to fail. What are we all going to do, stop using Intel and start using something else? You can’t just pull the chip and put a new one in; in the case of desktop computers you need a new motherboard plus all the supporting stuff like memory, and for servers, laptops, and mobile devices you need to replace the entire piece of equipment. Intel has a huge market share, and silicon has a long production cycle. Branch prediction has been commonplace in consumer CPUs going back to 1995, when the Pentium Pro brought it to the x86 architecture. This is a piece of the foundation that will be yanked out and replaced with new designs that provide the same speed benefits without the same risks — but those will take time to make it into the real world.

CPUs are infrastructure, and this is the loudest bell yet tolled to signal how important their design is to society. It’s time to take a hard look at what open silicon design would bring to the table. You can’t say this would have been prevented by open design. You can say that the path to new processors without these issues would be a shorter one if there were more than two companies producing all of the world’s processors — both of which have been affected by these vulnerabilities.

What You Need To Know About The Intel Management Engine

Over the last decade, Intel has been including a tiny little microcontroller inside their CPUs. This microcontroller is connected to everything, and can shuttle data between your hard drive and your network adapter. It’s always on, even when the rest of your computer is off, and with the right software, you can wake it up over a network connection. Parts of this spy chip were included in the silicon at the behest of the NSA. In short, if you were designing a piece of hardware to spy on everyone using an Intel-branded computer, you would come up with something like the Intel Management Engine.

Last week, researchers [Mark Ermolov] and [Maxim Goryachy] presented an exploit at BlackHat Europe allowing for arbitrary code execution on the Intel ME platform. This is only a local attack, one that requires physical access to a machine. The cat is out of the bag, though, and this is the exploit we’ve all been expecting. This is the exploit that forces Intel and OEMs to consider the security implications of the Intel Management Engine. What does this actually mean?

Continue reading “What You Need To Know About The Intel Management Engine”

Another Defeat Of The Intel Management Engine

If you have a computer with an Intel processor that’s newer than about 2007, odds are high that it also contains a mystery software package known as the Intel Management Engine (ME). The ME has complete access to the computer below the operating system and can access a network, the computer’s memory, and many other parts of the computer even when the computer is powered down. If you’re thinking that this seems like an incredible security vulnerability then you’re not alone, and a team at Black Hat Europe 2017 has demonstrated yet another flaw in this black box (PDF), allowing arbitrary code execution and bypassing many of the known ME protections.

[Mark Ermolov] and [Maxim Goryachy] are the two-man team that discovered this exploit, only the second of its kind in the 12 years that the ME has been deployed. Luckily, this exploit can’t be taken advantage of (yet) unless an attacker has physical access to the device. Intel’s firmware upgrades also do not solve the problem because the patches still allow for use of older versions of the ME. [Mark] and [Maxim] speculate in their presentation that this might be fixed on the next version of the ME, but also note that these security vulnerabilities would disappear if Intel would stop shipping processors with the ME.

We won’t hold our breath on Intel doing the right thing by eliminating the ME, though. It’s only a matter of time before someone discovers a zero-day (if they haven’t already, there’s no way to know) which could cripple pretty much every computer built within the last ten years. If you’re OK with using legacy hardware, though, it is possible to eliminate the management engine and have a computer that doesn’t have crippling security vulnerabilities built into it. This post was even written from one. Good luck doing anything more resource-intensive with it, though.

(Nearly) All Your Computers Run MINIX

Are you reading this on a machine running a GNU/Linux distribution? A Windows machine? Or perhaps an Apple OS? It doesn’t really matter, because your computer is probably running MINIX anyway.

There once was a time when microprocessors were relatively straightforward devices, capable of being understood more or less in their entirety by a single engineer without especially God-like skills. They had buses upon which hung peripherals, and for code to run on them, one of those peripherals had better supply it.

A modern high-end processor is a complex multicore marvel of technological achievement, so labyrinthine, in fact, that unlike those simple devices of old it may need to contain a dedicated extra core whose only job is to manage the rest of the onboard functions. Intel processors have had one for years: it’s called the Management Engine, or ME, and it has its own firmware baked into the chip. It is this firmware that, according to a discovery by [Ronald Minnich], contains a copy of the MINIX operating system.

If you are not the oldest of readers, it’s possible that you may not have heard of MINIX. Or if you have, it might be in connection with the gestation of [Linus Torvalds]’ first Linux kernel. It’s a UNIX-like operating system created in the 1980s as a teaching aid, and for a time it held a significant attraction as the closest you could get to real UNIX on some of the affordable 16-bit desktop and home computers. Amiga owners paid for copies of it on floppy disks; it was even something of an object of desire. It’s still in active development, but it’s fair to say its attraction lies in its simplicity rather than its sophistication.

It’s thus a worry to find it on the Intel ME, because in that position it lies at the most privileged level of access to your computer’s hardware. Your desktop operating system, by contrast, sees the hardware through several layers of abstraction in the name of security, so a simple OS with full networking and full hardware access represents a significant opportunity to anyone with an eye to compromising it. Placing tinfoil hats firmly on your heads as the unmistakable thwop of black helicopters eases into the soundscape, you might claim that this is exactly what they want anyway. We would hope that if they wanted to compromise our PCs with a backdoor, they’d do it in such a way as to make it a little less easy for The Other Lot. We suspect it’s far more likely that this is a case of the firmware being considered an out-of-sight piece of the hardware that nobody would concern themselves with, rather than the potential attack vector that everyone should. It would be nice to think that we’ll see some abrupt updates, but we suspect that won’t happen.

Intel i7 processor underside: smial [FAL].