The Meltdown and Spectre security vulnerabilities let attackers infer protected information from subtle differences in hardware behavior. It takes less time to access data that has been cached than data that must be retrieved from memory, and precisely measuring that time difference is a critical part of these attacks.
Our web browsers present a huge potential attack surface, as JavaScript is ubiquitous on the modern web. Executing JavaScript code inevitably involves the processor cache, and a high-resolution timer is accessible via the browser performance API.
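To make that concrete, here is a minimal, illustrative sketch of the kind of measurement involved: time a single array read with performance.now(). The buffer name and size are arbitrary, and on a patched browser the coarsened timer will no longer resolve the difference between a cache hit and a miss.

    // Illustrative only: time one array read with performance.now().
    // On a patched browser the coarsened timer will not resolve the
    // cache-hit / cache-miss difference this relies on.
    const probe = new Uint8Array(4096);          // name and size are arbitrary

    function timeAccess(index) {
      const start = performance.now();
      const value = probe[index];                // may hit or miss the cache
      const elapsed = performance.now() - start; // a smaller elapsed time suggests a cache hit
      return { value, elapsed };
    }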
Web browsers can’t change processor cache behavior, but they can take away malicious code’s ability to exploit it. Browser makers are intentionally degrading the time-measurement capability of the API to make these attacks more difficult. The changes are being rolled out for Google Chrome, Mozilla Firefox, Microsoft Edge and Internet Explorer, and Apple has announced that upcoming Safari updates will likely follow suit.
After these changes, the time stamp returned by performance.now() will be less precise due to its lower resolution. Some browsers are going a step further and degrading accuracy by adding random jitter. There will also be degradation or outright disabling of other features that can be used to infer data, such as SharedArrayBuffer.
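The exact granularity varies by browser and version, but the effect is easy to observe: call performance.now() in a loop and look at the step between distinct values. The numbers in the comments below are illustrative, not measured.

    // Check the effective resolution of performance.now(); the observed
    // step size depends on the browser and on any added jitter.
    const steps = [];
    let last = performance.now();
    while (steps.length < 10) {
      const now = performance.now();
      if (now !== last) {        // record only when the clock actually ticks forward
        steps.push(now - last);
        last = now;
      }
    }
    console.log(steps);          // e.g. ~1 ms steps after the change, vs a few µs before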
These changes will have no impact on the vast majority of users. The performance API is used by developers to debug sluggish code; actual run speed is unaffected. Other features like SharedArrayBuffer are relatively new, and their absence will go largely unnoticed. Unfortunately, web developers will have a harder time tracking down slow code under these changes.
Browser makers are calling this a temporary measure for now, but we won’t be surprised if the changes become permanent. It is a relatively simple change that blunts the immediate impact of Meltdown/Spectre, and it would also mitigate yet-to-be-discovered timing attacks of the future. If browser makers offer a “debug mode” to restore high-precision timers, developers could activate it just for their performance-tuning work and everyone should be happy.
This is just one part of the shock wave Meltdown/Spectre has sent through the computer industry. We have broader coverage of the issue here.
Somehow it seems sad to be forced to have computers run on fuzzy time because of buggy hardware and the nasty people who abuse it.
But… perhaps someone will make a nice hardware hack for HaD that skews your computer clock around.
And talking of clocks and Intel and conspiracy: didn’t Intel (who, people suggest, knew about this bug but didn’t fix it in order to assist a certain government org in their hacking) also introduce that ‘high precision timer’ added to computers, the one you could initially disable in the BIOS? Are things coming together? Or is it all a coincidence? You tell me.
“The High Precision Event Timer (HPET) is a hardware timer used in personal computers. It was developed jointly by Intel and Microsoft and has been incorporated in PC chipsets since circa 2005”
https://en.wikipedia.org/wiki/High_Precision_Event_Timer
Ah, the first time I heard about HPET was on a second-hand AMD laptop from the years when I’d go through plenty of cheap rubbish snap-tops (before I discovered “business grade” metal-chassis laptops existed).
The HPET on that one had a hardware bug where the kernel would stall and then panic at seemingly random intervals when left alone (e.g. playing music whilst I cooked food).
I stopped the kernel panics by disabling the NMI-watchdog panic on the boot command-line.
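(For reference, the usual Linux kernel command-line knobs for this are along these lines, though which exact option was used isn’t specified:)

    nmi_watchdog=nopanic   (keep the NMI watchdog but skip the panic)
    nmi_watchdog=0         (disable the NMI watchdog entirely)
    hpet=disable           (sidestep the HPET and fall back to another clock source)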
Disabling the panic left the laptop in a stall-start-stall-start cycle. I found that generated interrupts would bring it out of a stall.
I looked up a list of timers that generate interrupts; I can’t remember which one I picked, but I worked around the HPET bug by resetting on another timer and calling the same functions that the HPET refresh routines were calling (something like that, iow: I just looked for HPET and APIC references in the sources, did a bodge job, debugged for a few hours and compiled).
It seems the HPET was added to help with event timing for a smoother UI feel and to sync hardware to the UI better, i.e. not going laggy whilst copying 80GB of films, music and Linux live ISOs, by providing a consistent, persistent, measurable time to sync all UI experiences with. Oh, and I found most implementations of HPET can either scale with clock speed or use the base frequency (also good for multi-core synchronizing).
It is amazing just how much useless information we can learn while trying to fix something.
The same thing (people learning an essay’s worth per fix attempt) happens, but more often to those who bring us the various software we rely on for quite a few day-to-day things. Like fixing the browser so we can still enjoy the internet together (especially for the less technically minded people), or making sure the OS doesn’t accidentally leak our passwords into undefined memory space because of a corrupt audio file (intentionally corrupt or otherwise; I’ve seen what happens when something tries to keep interpreting corrupt data… sometimes trying to look fault tolerant by brute force).
Javascript running in a browser was always supposed to be SAFE. Vulnerability often results when a ‘safe’ space actually exposes more access and power than a website’s client-side functions should reasonably need. So, this isn’t dumbing down, really… it’s plugging a security hole in what was supposed to be a safe container.
“Javascript running in a browser was always supposed to be SAFE.”
Sorry, but you need to change your assumption about JavaScript and safety. The majority of exploits against Firefox are based on JavaScript, or on JavaScript in combination with the DOM or SVG. That’s why Chrome makes extensive use of sandboxing: if something breaks, they still want a last line of defense.
No, that was definitely a promise made when browsers and Javascript were becoming common. I agree that one can no longer blindly assume that, but that IS a fail. Safety has been traded for shiny things.
So Lord Vetinari’s clock solves the problem.
Except that the Spectre paper already takes degraded timers into account and suggests using a Web Worker thread that increments a value in a loop as a replacement.
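(Roughly, that counter-as-clock idea looks like the sketch below; the worker file name is illustrative, and a real implementation would need Atomics or similar tricks to keep the JIT from optimizing the loop:)

    // timer-worker.js -- spin on a shared counter as fast as possible
    onmessage = (e) => {
      const ticks = new Uint32Array(e.data);   // view over the SharedArrayBuffer
      while (true) {
        ticks[0]++;                            // ever-increasing "timestamp"
      }
    };

    // main thread -- hand the shared buffer to the worker, then read
    // ticks[0] before and after an operation to "time" it
    const sab = new SharedArrayBuffer(4);
    const ticks = new Uint32Array(sab);
    const worker = new Worker('timer-worker.js');
    worker.postMessage(sab);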
Yeah, I’m missing how this does anything at all. You can make the measurement noisier, but that just makes the attack slower – and Spectre is already so fast that you’d have to really screw with performance to make it so slow as to be useless. Even then, you could probably bury code in a game or something and expect that a few people will play for long enough for the attack to work.
“makes the attack slower”….
That’s why they want to use this as a temporary fix — if the code can be slowed down at all, it is better than nothing.
Did you see the throughput rates for Spectre/Meltdown? They’re huge: like, half a meg/sec or higher. You’d have to reduce it by a factor of 100 or so before you couldn’t hide it in a malicious game.
I would guess that the worker thread needs to share the “timer” value with the main thread using the SharedArrayBuffer. This could explain the reason for disabling SharedArrayBuffer.
But it’s very optimistic to assume that no equivalent mechanism can be devised :)
Yes, that’s correct. The SharedArrayBuffer was for communicating with the workers.
In no way does poor resolution thwart these attacks. I have a high-resolution timer I’ve built entirely without using performance.now() or the DOMHighResTimeStamp type. Plus, Spectre can use any microarchitectural side channel – not just cache timing.
So what other side channel do you propose to use?
Basically the best mitigation for such an attack is to take away the biggest attack surface on a “normal” computer.
(an android phone is not “normal”)
The attack surface is the JavaScript engine; it is the attacker’s diagnostic tool for probing the memory of the process it lives in.
– NoScript is a start
– or the crowbar solution (javascript.enabled = false)
I know this is unwanted because many sites simply don’t work without it (example: blogger.com),
but when there is no “real” patch available we need to at least try to protect ourselves. Decreasing the timer resolution is like trying to cover a big fat butt with a small leaf.
Another design approach would be to separate the browser’s components into separate processes (for example, Chrome’s isolation): the password manager, for instance, needs to run separately from the browser and only give the browser a password when it is actually needed.
The crypto handling needs to be isolated too, because right now an attacker can read the negotiated keys of an encrypted connection.
The same goes for cookies, because right now I could “track” a person’s entire RAM-resident web history and cookies.
Perhaps JavaScript engines need to be reined in so that execution is normalized and cannot leak data out through this side channel.
Scripts are becoming more and more uselessly essential: many sites now don’t show pictures at all when scripts are blocked, or show pictures at insanely low resolution while maintaining the layout size (Reuters, for instance).
All this in the age of HTML5, where you really don’t need to run ANY script to get a damn fancy site with lots of adaptation to various circumstances.
I guess it’s all the simplified website-making tools they use, which were written by assholes.
Yeah, without JS the internet is sadly quite unusable today. I suppose a lot of sites make JS mandatory because it allows better tracking and stuff like that. I hate it.
So disabling JS completely via about:config is not a solution. NoScript is better but sometimes a PITA if you have to temporarily allow a ton of domains and reload the page like 5 times just to be able to view a stupid video (and the new WebExtension version of NoScript might be horrible, I didn’t test it yet).
The problem is that you are letting sites dictate your security. If everyone disabled JS, guess what: the sites that require it would see a massive fall-off in traffic and would change and adapt.
Lead by example, I would say.
Yes, but you would have to wait months or years (or maybe forever) for websites to adapt, and in the meantime you wouldn’t be able to use the website (or to use it normally)… I don’t want to spend months without being able to use some websites.
(Even here on HaD, I have to enable JS for several domains to make my comment appear.)
> So disabling JS completely via about:config is not a solution.
It is. Works for me (yes, no NoScript, the real thing). My main Firefox profile is like that; then I have a secondary profile for the cases where I really need it — that one gets used less than once a month. Oh, and no cookies either.
Luckily, hackaday works fine like that (even cookieless commenting: big kudos and thanks! That’s why I keep returning here). And LWN (I temporarily enable cookies to post), and more than 95% of the sites I care about.
As it turns out, I care less and less for the other 5%: so this number is actually shrinking.
So to Nay, upthread: leading by example here. Want to follow? :)
What about YouTube?
I fully agree the best solution for security is “javascript.enabled = false”
It would be nice if we could do away with it.
Let’s go back to the original “browser as a document viewer” instead of “browser as an OS”.
“Decreasing the timer resolution is like trying to cover a big fat butt with a small leaf.”
That line has etched a REALLY BAD image into my mind!
I’ve never liked Java
Except we’re not speaking about Java, but Javascript …
As I said up-thread, it was promised from Day 1 that browsers would always execute client-side Javascript safely. That was central to its acceptance.
You don’t need microsecond timing on a freaking website – except maybe in graphics and sound, and such functionality could be wrapped and secured in an API. So I think browser makers deserve a bigger slice of the blame for making their users so vulnerable. User safety needs to become important again.
Make user safety great again!!
scnr… (but you are right, nobody really needs µs-timers and stuff to display a website)
However, he is as right as he is wrong :)
If the Java applet technology still works, Java is also an attack vector.
So….just run 7 tabs of java content to slow everything down lol.
Can we instead perhaps go back to JS engines that do not JIT? JS is not supposed to cause a high CPU load. There should be a standard for browsers to monitor CPU load and throttle performance, and for scripts to request JIT or more CPU time, just as they can now ask for camera access or to display notifications. In the end, front-end developers must be forced to develop with performance in mind, not just blindly throw in random frameworks they found on GitHub.
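(For comparison, the existing permission pattern referred to here looks like this today; a “request JIT / extra CPU time” permission is purely hypothetical and would presumably take the same shape:)

    // Real API: a page must ask the user before showing notifications.
    // A "request more CPU time / JIT" permission does not exist; it is only
    // imagined in the trailing comment below.
    async function askForNotifications() {
      const result = await Notification.requestPermission(); // 'granted' | 'denied' | 'default'
      if (result === 'granted') {
        new Notification('Thanks!');
      }
      // A heavy script might one day do something similar: request a CPU/JIT
      // budget up front and fall back to a throttled code path if refused.
    }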
Don’t know why, but Microsoft’s site runs some scripts that lock up my browser and make it respond only occasionally, and very slowly.
I had trouble getting the damn Meltdown update because they make my browser melt down.
Probably mining cryptocurrency eh…