Peering Into a Running Brain: SDRAM Refresh Analyzed from Userspace

Over on the Cloudflare blog, [Marek] found himself wondering about computer memory, as we all sometimes do. Specifically, he pondered if he could detect the refresh of his SDRAM from within a running program. We’re probably not ruining the surprise by telling you that the answer is yes — with a little more than 100 lines of C and help from our old friend the Fast Fourier Transform (FFT), [Marek] was able to detect SDRAM refresh cycles every 7818.6 ns, lining right up with the expected result.

The “D” in SDRAM stands for dynamic, meaning that unless periodically refreshed by reading and writing, data in the memory will decay. In this kind of memory, each bit is stored as a charge on a tiny capacitor. Given enough time (which varies with ambient temperature), this charge can leak away to neighboring silicon, turning all the 1s to 0s, and destroying the data. To combat this process, the memory controller periodically issues a refresh command which reads the data before it decays, then writes the data back to fully charge the capacitors again. Done often enough, this will preserve the memory contents indefinitely. SDRAM is relatively inexpensive and available in large capacity compared to the alternatives, but the drawback is that the CPU can’t access the portion of memory being refreshed, so execution gets delayed a little whenever a memory access and refresh cycle collide.
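
Where does a number like that come from? JEDEC DDR3/DDR4 parts are typically specified so that every row gets refreshed within a 64 ms window, spread across 8192 refresh commands issued by the memory controller. A quick back-of-the-envelope calculation, assuming those common values rather than anything specific to [Marek]’s module, looks like this:

#include <stdio.h>

int main(void)
{
    /* Common JEDEC DDR3/DDR4 figures -- an assumption, check your module's
     * datasheet: every row refreshed within 64 ms, spread over 8192 REF
     * commands issued by the memory controller. */
    const double refresh_window_ns = 64e6;      /* 64 ms expressed in ns */
    const double refresh_commands  = 8192.0;

    printf("expected refresh interval: %.1f ns\n",
           refresh_window_ns / refresh_commands);   /* prints 7812.5 ns */
    return 0;
}

That 7812.5 ns is within a tenth of a percent of the 7818.6 ns period [Marek] actually measured.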

Chasing the Correct Hiccup

[Marek] figured that he could detect this “hiccup,” as he calls it, by running some memory accesses and recording the current time in a tight loop. Of course, the cache on modern CPUs means that a small amount of data would never be fetched from SDRAM at all, so he explicitly flushes the relevant cache line before each access. The source code, which is available on GitHub, outputs the time taken by each iteration of the inner loop; in his case, an iteration typically takes around 140 ns.
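
A minimal sketch of that kind of measurement loop might look like the following. This is not [Marek]’s code from the repository, just an x86 illustration of the idea: flush the cache line holding a target variable, fence, time a single read with the TSC, and log the results for offline analysis.

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtsc */

#define SAMPLES 100000

static volatile uint64_t target;     /* the memory cell we keep reading */
static uint64_t when[SAMPLES];       /* TSC timestamp of each iteration */
static uint64_t cost[SAMPLES];       /* TSC ticks the timed read took   */

int main(void)
{
    for (int i = 0; i < SAMPLES; i++) {
        _mm_clflush((const void *)&target); /* evict the line so the next read hits DRAM */
        _mm_mfence();                       /* wait for the flush to finish */

        uint64_t t0 = __rdtsc();
        uint64_t v  = target;               /* the uncached read being timed */
        _mm_mfence();
        uint64_t t1 = __rdtsc();
        (void)v;

        when[i] = t0;
        cost[i] = t1 - t0;
    }

    /* Dump "timestamp duration" pairs (in TSC ticks) for offline analysis. */
    for (int i = 0; i < SAMPLES; i++)
        printf("%llu %llu\n", (unsigned long long)when[i],
               (unsigned long long)cost[i]);
    return 0;
}

Most iterations cluster around the normal DRAM round-trip time; the occasional outlier is the hiccup of interest. A real tool would also convert TSC ticks to nanoseconds and pin itself to one core so the timestamps stay consistent.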

Hurray! The first frequency spike is indeed what we were looking for, and indeed does correlate with the refresh times.

The other spikes at 256 kHz, 384 kHz, 512 kHz, and so on are multiples of our base frequency of 128 kHz, called harmonics. These are a side effect of performing an FFT on something like a square wave and are totally expected.

As [Marek] notes, the raw data doesn’t reveal too much. After all, there are a lot of things that can cause little delays in a modern multitasking operating system, resulting in very noisy data. Even thresholding and resampling the data doesn’t bring refresh hiccups to the fore. To detect the SDRAM refresh cycles, he turned to the FFT, an efficient algorithm for computing the discrete Fourier transform, which excels at revealing periodicity. A few lines of Python produced the desired result: a plot of the frequency spectrum of the lengthened loop iterations. Zooming in, he found the first frequency spike at 127.9 kHz, corresponding to the SDRAM’s refresh period of 7.81 µs, along with a number of other spikes representing harmonics of this fundamental frequency. To facilitate others’ experiments, [Marek] has created a command line version of the tool you can run on your own machine.
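
For a feel of what that post-processing does, here is a rough C sketch along the same lines. It is not [Marek]’s Python: the bucket size and “slow iteration” threshold are illustrative guesses, it assumes the timestamp/duration pairs have already been converted to nanoseconds, and it uses a brute-force DFT over a narrow band rather than a real FFT, which is plenty for a one-off scan.

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Threshold the per-iteration timings, resample them onto a fixed grid as a
 * 0/1 "hiccup" series, then scan 50-200 kHz with a naive DFT and report the
 * strongest component.  Reads "timestamp_ns duration_ns" pairs from stdin;
 * compile with -lm. */

#define BUCKET_NS   125.0        /* resampling interval (assumed)              */
#define SLOW_NS     220.0        /* "this iteration was slow" cutoff (assumed) */
#define MAX_BUCKETS (1 << 17)

static double series[MAX_BUCKETS];

int main(void)
{
    double ts, dur, t0 = -1.0;
    long n = 0;

    while (scanf("%lf %lf", &ts, &dur) == 2) {
        if (t0 < 0.0) t0 = ts;
        long b = (long)((ts - t0) / BUCKET_NS);
        if (b < 0 || b >= MAX_BUCKETS) break;
        if (dur > SLOW_NS) series[b] = 1.0;     /* mark buckets containing a hiccup */
        if (b + 1 > n) n = b + 1;
    }

    const double fs = 1e9 / BUCKET_NS;          /* sample rate of the resampled series, Hz */
    double best_f = 0.0, best_mag = 0.0;

    for (double f = 50e3; f <= 200e3; f += 250.0) {
        double re = 0.0, im = 0.0;
        for (long i = 0; i < n; i++) {          /* one DFT bin at frequency f */
            double phase = 2.0 * M_PI * f * (double)i / fs;
            re += series[i] * cos(phase);
            im -= series[i] * sin(phase);
        }
        double mag = sqrt(re * re + im * im);
        if (mag > best_mag) { best_mag = mag; best_f = f; }
    }

    if (best_mag > 0.0)
        printf("strongest component: %.1f kHz (period %.2f us)\n",
               best_f / 1e3, 1e6 / best_f);
    else
        printf("no slow iterations found\n");
    return 0;
}

If the refresh hiccups are present, the strongest component should land near 128 kHz, the reciprocal of the roughly 7.8 µs refresh interval.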

If this technique seems familiar, it may be because it’s similar to the Rowhammer attack we covered back in 2015, which can actually change data in SDRAM on vulnerable machines by rapidly accessing adjacent rows. As [Marek] points out, the fact that you can make these kinds of measurements from a userspace program can have profound security implications, as we saw with the Meltdown and Spectre attacks. We have to wonder what other vulnerabilities are lying inside our machines waiting to be discovered.

Thanks to [anfractuosity] for the tip!

Foreshadow: The Sky Is Falling Again for Intel Chips

It’s been at least a month or two since the last vulnerability in Intel CPUs was released, but this time it’s serious. Foreshadow is the latest speculative execution attack that allows balaclava-wearing hackers to steal your sensitive information. You know it’s a real 0-day because it already has a domain, a logo, and this time, there’s a video explaining in simple terms anyone can understand why the sky is falling. The video uses ukuleles in the soundtrack, meaning it’s very well produced.

The Foreshadow attack relies on Intel’s Software Guard Extensions (SGX) instructions that allow user code to allocate private regions of memory. These private regions of memory, or enclaves, are designed to keep sensitive code and data hidden even from a compromised operating system, with DRM and virtual machine protection as the headline use cases.

How Foreshadow Works

The Foreshadow attack utilizes speculative execution, a feature of modern CPUs most recently in the news thanks to the Meltdown and Spectre vulnerabilities. The Foreshadow attack reads the contents of memory protected by SGX, allowing an attacker to copy and read back private keys and other personal information. There is a second Foreshadow attack, called Foreshadow-NG, that is capable of reading anything inside a CPU’s L1 cache (effectively anything in memory, with a little bit of work), and might also be used to read information stored in other virtual machines running on the same third-party cloud host. In the worst-case scenario, running your own code on an AWS or Azure box could expose data that isn’t yours on the same AWS or Azure box. Additionally, countermeasures to the Meltdown and Spectre attacks might be insufficient to protect against Foreshadow-NG.

The researchers behind the Foreshadow attacks have talked with Intel, and the manufacturer has confirmed Foreshadow affects all SGX-enabled Skylake and Kaby Lake Core processors. Atom processors with SGX support remain unaffected. For the Foreshadow-NG attack, many more processors are affected, including second through eighth generation Core processors, and most Xeons. This is a significant percentage of all Intel CPUs currently deployed. Intel has released a security advisory detailing all the affected CPUs.

Hackaday Links: February 18, 2018

Hacker uses pineapple on unencrypted WiFi. The results are shocking! Film at 11.

Right on, we’ve got some 3D printing cons coming up. The first is MRRF, the Midwest RepRap Festival. It’s in Goshen, Indiana, March 23-25th. It’s a hoot. Just check out all the coverage we’ve done from MRRF over the years. Go to MRRF.

We got news this was going to happen last year, and now we finally have dates and a location. The East Coast RepRap Fest is happening June 22-24th in Bel Air, Maryland. What’s the East Coast RepRap Fest? Nobody knows; this is the first time it’s happening, and it’s not being produced by SeeMeCNC, the guys behind MRRF. There’s going to be a 3D printed Pinewood Derby, though, so that’s cool.

జ్ఞ‌ా. What the hell, Apple?

Defcon’s going to China. The CFP is open, and we have dates: May 11-13th in Beijing. Among the things that may be said: “Hello Chinese customs official. What is the purpose for my visit? Why, I’m here for a hacker convention. I’m a hacker.”

Intel hit with lawsuits over security flaws. Reuters reports Intel shareholders and customers have filed 32 class action lawsuits against the company over the Spectre and Meltdown bugs. Are we surprised by this? No, but here’s what’s interesting: the patches for Spectre and Meltdown cause a noticeable and quantifiable slowdown on affected systems. Electricity costs money, and companies (server farms, etc.) can therefore put a precise dollar amount on what the Spectre and Meltdown patches cost them. Two of the lawsuits allege Intel and its officers violated securities laws by making false statements and shipping flawed products. There’s also the issue of Intel CEO Brian Krzanich selling shares after he knew about Meltdown, but before the details were made public. Luckily for Krzanich, the rule of law does not apply to the wealthy.

What does the Apollo Guidance Computer look like? If you think it has a bunch of glowey numbers and buttons, you’re wrong; that’s the DSKY — the user I/O device. The real AGC is basically just two 19″ racks. Still, the DSKY is very cool and a while back, we posted something about a DIY DSKY. Sure, it’s just 7-segment LEDs, but whatever. Now this project is a Kickstarter campaign. Seventy bucks gives you the STLs for the 3D printed parts, BOM, and a PCB. $250 is the base for the barebones kit.

Intel Forms New Security Group to Avoid Future Meltdowns

Intel just moved some high level people around to form a dedicated security group.

When news of Meltdown and Spectre broke, Intel’s public relations department applied maximum power to their damage control press release generators. The initial message was one of defiance, downplaying the impact and implying people were overreacting. This did not go over well. Since then, we’ve started seeing a trickle of information from engineering, and even direct microcode updates for people who dare to live on the bleeding edge.

All the technical work to put out the immediate fire is great, but for the sake of Intel’s future they need to figure out how to avoid future fires. The leadership needs to change the company culture away from an attitude where speed is valued over all else. Will the new security group have the necessary impact? We won’t know for quite some time. For now, it is encouraging to see work underway. Fundamental problems in corporate culture require a methodical fix and not a hack.

Editor’s note: We’ve changed the title of this article to better reflect its content: that Intel is making changes to its corporate structure to allow a larger voice for security in the inevitable security versus velocity tradeoff.

Spectre and Meltdown: How Cache Works

The year so far has been filled with news of Spectre and Meltdown. These exploits take advantage of features like speculative execution and memory access timing. What they have in common is that all modern processors use a cache to access memory faster. We’ve all heard of cache, but what exactly is it, and how does it allow our computers to run faster?

In the simplest terms, cache is a fast memory. Computers have two storage systems: primary storage (RAM) and secondary storage (hard disk, SSD). From the processor’s point of view, loading data or instructions from RAM is slow — the CPU has to wait and do nothing for 100 cycles or more while the data is loaded. Loading from disk is even slower; millions of cycles are wasted. Cache is a small amount of very fast memory which is used to hold commonly accessed data and instructions. This means the processor only has to wait for the data to be loaded into the cache once; after that, it can be fetched again in just a few cycles.
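
That gap is easy to demonstrate. The sketch below, an x86-specific illustration whose numbers will vary from machine to machine, times the same read twice: once while the cache line is warm, and once right after evicting it with clflush.

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc, _mm_clflush, _mm_mfence */

static volatile uint64_t value;

/* Time a single read of 'value', in TSC ticks. */
static uint64_t time_read(void)
{
    _mm_mfence();
    uint64_t t0 = __rdtsc();
    uint64_t v = value;                  /* the load being timed */
    _mm_mfence();
    uint64_t t1 = __rdtsc();
    (void)v;
    return t1 - t0;
}

int main(void)
{
    value = 42;                          /* touch it so the line sits in cache */
    uint64_t warm = time_read();         /* cache hit */

    _mm_clflush((const void *)&value);   /* evict the line */
    _mm_mfence();
    uint64_t cold = time_read();         /* cache miss: out to RAM and back */

    printf("cached read:  %llu cycles\n", (unsigned long long)warm);
    printf("flushed read: %llu cycles\n", (unsigned long long)cold);
    return 0;
}

The flushed read will typically cost on the order of a hundred cycles more than the cached one, and measuring exactly that difference is the foundation of the cache timing side channels that Spectre and Meltdown rely on.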

A common (though aging) analogy for cache uses books to represent data: If you needed a specific book to look up an important piece of information, you would first check the books on your desk (cache memory). If your book isn’t there, you’d then go to the books on your shelves (RAM). If that search turned up empty, you’d head over to the local library (Hard Drive) and check out the book. Once back home, you would keep the book on your desk for quick reference — not immediately return it to the library shelves. This is how cache reading works.

Continue reading “Spectre and Meltdown: How Cache Works”

Meltdown Code Proves Concept

If you’ve read about Meltdown, you might have thought, “how likely is that to actually happen?” You can more easily judge for yourself by looking at the code available on GitHub. The Linux software is just a proof of concept, but it both shows what could happen and — in a way — illustrates some of the difficulties in making this work. There are also two videos in the repository that show spying on password input and dumping physical memory.

The interesting thing is how many factors can stop the demos from working: a slow CPU, a CPU without out-of-order execution, or an imprecise high-resolution timer, for example. The last is apparently especially problematic in virtual machines.

Continue reading “Meltdown Code Proves Concept”

Spectre and Meltdown: Attackers Always Have The Advantage

While the whole industry is scrambling on Spectre, Meltdown focused most of the spotlight on Intel, and there is no shortage of outrage in Internet comments. Like many great discoveries, this one is obvious with the power of hindsight. So much so that reactions have spanned an extreme range, from “It’s so obvious, Intel engineers must be idiots” to “It’s so obvious, Intel engineers must have known! They kept it from us in a conspiracy with the NSA!”

We won’t try to sway those who choose to believe in a conspiracy that’s simultaneously secret and obvious to everyone. However, as evidence of non-obviousness, some very smart people got remarkably close to the Meltdown effect last summer, without getting it all the way. [Trammel Hudson] did some digging and found a paper from the early 1990s (PDF) that warns of the dangers of fetching info into the cache that might cross privilege boundaries, but it wasn’t weaponized until recently. In short, these are old vulnerabilities, but exploiting them was hard enough that it took twenty years to do it.

Building a new CPU is the work of a large team over several years. But they weren’t all working on the same thing for all that time. Any single feature would have been the work of a small team of engineers over a period of months. During development they fixed many problems we’ll never see. But at the end of the day, they are only human. They can be 99.9% perfect, and that won’t be good enough, because once hardware is released into the world, it is open season on the 0.1% the team missed.

The odds are stacked in the attacker’s favor. The team on defense has a handful of people working a few months to protect against all known and yet-to-be-discovered attacks. It is a tough match against the attackers coming afterwards: there are a lot more of them, they’re continually refining the state of the art, they have twenty years to work on a problem if they need to, and they only need to find a single flaw to win. In that light, exploits like Spectre and Meltdown will probably always be with us.

Let’s look at some factors that paved the way to Intel’s current embarrassing situation.

Continue reading “Spectre and Meltdown: Attackers Always Have The Advantage”