There’s not one, but two side-channel attacks to talk about this week. Up first is Pacman, a bypass for ARM’s Pointer Authentication Code (PAC). PAC is a protection built into certain ARM processors, where a cryptographic hash value must be set correctly when pointers are updated. If the hash isn’t set correctly, the program simply crashes. The idea is that most exploits use pointer manipulation to achieve code execution, and correctly setting the PAC requires an explicit instruction call. The PAC itself is stored in the unused bits of the pointer. The AArch64 architecture uses 64-bit values for addressing, but the actual address space is much smaller than 64 bits, usually 53 bits or less, which leaves 11 bits for the PAC value. Keep in mind that the application doesn’t hold the keys and doesn’t calculate this value. 11 bits may not seem like enough to make this secure, but remember that every failed attempt crashes the program, and every application restart regenerates the keys.
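To make the bit layout concrete, here’s a minimal C sketch, assuming the 53-bit address space and 11-bit PAC field described above. It only illustrates where the PAC lives in the pointer; on real hardware the value is a keyed MAC computed by dedicated instructions (PACIA/AUTIA and friends), never by the application.

```c
#include <stdint.h>
#include <stdio.h>

#define ADDR_BITS 53                      /* usable virtual address bits (assumption from the article) */
#define PAC_BITS  (64 - ADDR_BITS)        /* 11 bits left over for the PAC */
#define ADDR_MASK ((1ULL << ADDR_BITS) - 1)

/* Illustrative only: the real PAC is a keyed cryptographic MAC computed in
 * hardware, not something the application can calculate itself. */
static uint64_t sign_pointer(uint64_t ptr, uint64_t fake_pac)
{
    return (ptr & ADDR_MASK) | ((fake_pac & ((1ULL << PAC_BITS) - 1)) << ADDR_BITS);
}

static uint64_t strip_pac(uint64_t signed_ptr)
{
    return signed_ptr & ADDR_MASK;
}

int main(void)
{
    uint64_t p = 0x0000ffffdeadbeefULL;
    uint64_t s = sign_pointer(p, 0x5a5);
    printf("signed: %016llx  stripped: %016llx\n",
           (unsigned long long)s, (unsigned long long)strip_pac(s));
    return 0;
}
```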
What Pacman introduces is an oracle, a method to gain insight into data the attacker shouldn’t be able to see. In this case, the oracle works via speculation attacks, very similar to Meltdown and Spectre. The key is to attempt a protected pointer dereference speculatively, and then observe the change in system state as a result. What you may notice is that this requires an attacker to already be running code on the target system in order to use the PAC oracle technique. Pacman is not a Remote Code Execution flaw, nor is it useful in gaining RCE.
One more important note is that an application has to have PAC support compiled in to benefit from this protection. The platform that has made wide use of PAC is macOS, as it’s a feature baked into Apple’s M1 processor. The attack chain would likely start with a remote execution bug in an application missing PAC support. Once a foothold is established in unprivileged userspace, Pacman would be used as part of an exploit against the kernel. See the PDF paper for all the details.
Hertzbleed
The other side-channel technique is a new take on an old idea. Hertzbleed is based on the observation that it’s possible to detect the difference between a CPU running at its base frequency and that same CPU running at a boost frequency. The difference between those two states can actually leak information about what the CPU is doing. There’s a pre-release PDF of their paper to check out for the details. The biggest result is that the standard safeguard against timing attacks, constant-time programming, is not always a reliable security measure.
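A quick aside on what constant-time programming means: the runtime of the code must not depend on secret data. Here’s a minimal, illustrative C comparison routine done both ways (not taken from any of the libraries discussed here):

```c
#include <stddef.h>
#include <stdint.h>

/* Naive comparison: returns as soon as a byte differs, so the runtime
 * leaks how many leading bytes of the secret the attacker guessed right. */
int leaky_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time comparison: always touches every byte, so the runtime
 * is the same no matter where (or whether) the inputs differ. */
int ct_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

Hertzbleed’s insight is that even the constant-time version isn’t safe, because the power drawn by those operations, and therefore the clock frequency, still depends on the data being processed.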
It works because max frequency is dependent on the processor’s Thermal Design Power (TDP), the maximum amount of power a CPU is designed to draw and heat it is designed to dissipate. Different instructions use different amounts of power and generate more or less heat as a result. More heat means earlier throttling. And throttling can be detected in response times. The details of this are quite fascinating. Did you know that even running the same instructions, with different register values, results in slightly different power draw? They picked a single cryptographic algorithm, SIKE, a quantum-safe key exchange technique, and attempted to extract a server’s secret key through timing attacks.
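As a rough way to convince yourself the effect is real (a sketch, not the paper’s methodology), you can time a long run of identical arithmetic with a “light” operand versus a “heavy” one and watch for a difference as the CPU heats up and drops off boost. Whether you see anything measurable depends heavily on the CPU, cooling, and frequency-scaling settings.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Time a long run of identical multiply-add work with a given operand.
 * On a boosting CPU, heavier operand values draw more power, heat the chip
 * faster, and can push it off boost clocks sooner -- which shows up as time. */
static double time_loop(uint64_t operand)
{
    volatile uint64_t acc = 1;            /* volatile so the loop isn't optimized away */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < 200000000ULL; i++)
        acc = acc * operand + i;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    printf("all-zero operand:  %.3f s\n", time_loop(0));
    printf("all-ones operand:  %.3f s\n", time_loop(0xFFFFFFFFFFFFFFFFULL));
    return 0;
}
```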
There is a quirk in SIKE, also discovered and disclosed in this research: it’s possible to short-circuit part of the algorithm, such that a series of internal, intermediate steps results in a value of zero. If you know multiple consecutive bits of the static key, it’s possible to construct a challenge that hits this quirk. By extension, you can take a guess at the next unknown bit, and the challenge will only fall into the quirk if you guessed correctly. SIKE uses constant-time programming, so this odd behavior shouldn’t matter. And here the Hertzbleed observation factors in. The SIKE algorithm consumes less power when doing a run containing this cascading-zero behavior. Consuming less power means that the processor can stay at full boost clocks for longer, which means that the key exchange completes slightly more quickly. Enough so that it can be detected even over a network connection. They tested against Cloudflare’s CIRCL library and Microsoft’s PQCrypto-SIDH, and were able to recover secret keys from both implementations, in 36 and 89 hours respectively.
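The recovery loop itself is conceptually simple. Here’s a hedged sketch of the bit-by-bit attack as described above; build_challenge(), measure_handshake(), and THRESHOLD are hypothetical stand-ins for the real machinery in the paper, and a real attack averages many noisy measurements per bit rather than trusting a single one.

```c
#include <stdint.h>
#include <stdio.h>

#define KEY_BITS  256
#define THRESHOLD 0.002   /* placeholder: "faster than usual" cutoff, in seconds */

/* Placeholder: craft a challenge that triggers the cascading-zero quirk only
 * if the guessed prefix (known bits plus one guessed bit) matches the real key. */
static void build_challenge(const uint8_t *known_bits, int n_known, int guess,
                            uint8_t *challenge_out)
{
    (void)known_bits; (void)n_known; (void)guess; (void)challenge_out;
}

/* Placeholder: send the challenge to the server and time the key exchange. */
static double measure_handshake(const uint8_t *challenge)
{
    (void)challenge;
    return 0.0;
}

int main(void)
{
    uint8_t key_bits[KEY_BITS] = {0};
    uint8_t challenge[64];

    for (int i = 0; i < KEY_BITS; i++) {
        /* Guess the next bit is 1; if the quirk fires, the run uses less power,
         * stays boosted longer, and completes measurably faster. */
        build_challenge(key_bits, i, 1, challenge);
        double t = measure_handshake(challenge);
        key_bits[i] = (t < THRESHOLD) ? 1 : 0;
        printf("bit %3d = %d (%.6f s)\n", i, key_bits[i], t);
    }
    return 0;
}
```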
There is a mitigation against this particular flaw, where it’s possible to detect a challenge value that could trigger the cascading zeros, and block that value before any processing happens. It will be interesting to see if quirks in other algorithms can be discovered and weaponized using this same technique. Unfortunately, on the processor side, the only real mitigation is to disable boost clocks altogether, which has a significant negative effect on processor performance.
Defeating Nest Secure Boot
[Frédéric Basse] has a Google Nest Hub, and he really wanted to run his own Linux distro on it. There’s a problem, though. The Nest uses secure boot, and there’s no official way to unlock the bootloader. Since when would a dedicated hacker let that stop him? The first step was finding a UART interface, hidden away on some unterminated channels of a ribbon cable. A custom breakout board later, and he had a U-Boot log. Next was to run through the bootup button combinations, and see what U-Boot tried to do with each. One of those combinations allows booting from a recovery.img, which would be ideal, if not for secure boot.
The great thing about U-Boot is that it’s Open Source under the GPL, which means that the source code is available for perusal. Find a bug in that source, and you have your secure boot bypass. Open Source also allows some fun approaches, like running portions of the U-Boot code in userspace and exercising it with a fuzzer. That’s the approach that found a bug: a block size greater than 512 bytes triggers a buffer overflow. The 512-byte assumption is a generally safe one, as there aren’t really any real-world USB storage devices with a larger block size.
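The bug class is the familiar trust-the-device pattern. Here’s a minimal sketch of the sort of code involved, and to be clear, this is illustrative, not the actual U-Boot source: a fixed 512-byte buffer, filled using a block size that the USB device itself reports.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only -- not the actual U-Boot code. */
struct usb_block_dev {
    uint32_t block_size;                 /* reported by the device itself */
    int (*read_block)(uint64_t lba, void *dst, uint32_t len);
};

static int read_first_block(struct usb_block_dev *dev)
{
    uint8_t buf[512];                    /* assumes no block is ever bigger */
    /* If the device reports a block size over 512, this smashes the stack,
     * including the saved return address -- exactly the class of bug found. */
    return dev->read_block(0, buf, dev->block_size);
}

/* Benign stand-in device so the sketch actually runs: honest 512-byte blocks. */
static int fake_read(uint64_t lba, void *dst, uint32_t len)
{
    (void)lba;
    memset(dst, 0xAA, len);
    return 0;
}

int main(void)
{
    struct usb_block_dev dev = { .block_size = 512, .read_block = fake_read };
    printf("read returned %d\n", read_first_block(&dev));
    return 0;
}
```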
Never fear, a device like the Raspberry Pi Pico can run TinyUSB, which allows emulating a USB device with whatever block size you specify. A test determined that this approach did result in a repeatable crash on the real device. The code execution is fairly straightforward: write a bunch of instructions that are essentially noop codes pointing to a payload, and then overwrite the return pointer. With code execution in the can, all that remained was to overwrite the command list and execute a custom U-Boot script. A thing of beauty.
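On the emulation side, the relevant piece is surprisingly small. This is a rough fragment assuming TinyUSB’s mass storage class callbacks (the descriptors and read/write plumbing come from the standard TinyUSB MSC example): the emulated device simply reports an oversized block size when the host asks for its capacity.

```c
#include "tusb.h"

/* Mass-storage capacity callback: this is where the emulated device tells
 * the host its block count and block size. Reporting a block size larger
 * than 512 bytes is what triggers the U-Boot overflow described above. */
void tud_msc_capacity_cb(uint8_t lun, uint32_t *block_count, uint16_t *block_size)
{
    (void)lun;
    *block_count = 16;
    *block_size  = 2048;   /* larger than the 512 bytes U-Boot assumed */
}
```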
PING
The lowly ping command. How much can a single pair of packets tell us about a network and remote host? According to [HD Moore], quite a bit. For example, take the time given for a ping response, and calculate a distance based on 186 miles per millisecond. That’s the absolute maximum distance away that host is, though a quarter and half of that amount are reasonable lower and upper limits for a distance estimate. The TTL very likely started at 64, 128, or 255, so you can take a really good guess at the hops encountered along the way. Oh, and if that TTL started at 64, it’s likely a Linux machine, 128 suggests Windows, and 255 usually indicates a BSD-derived OS.
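Here’s that arithmetic as a tiny C sketch, using the 186-miles-per-millisecond figure and the usual initial TTL values; the sample numbers are made up for illustration.

```c
#include <stdio.h>

#define MILES_PER_MS 186.0   /* roughly the speed of light */

int main(void)
{
    double rtt_ms = 23.0;    /* example ping time */
    int ttl = 52;            /* example TTL from the ping reply */

    double max_miles = rtt_ms * MILES_PER_MS;
    printf("distance: at most %.0f miles, likely %.0f-%.0f miles\n",
           max_miles, max_miles / 4.0, max_miles / 2.0);

    /* Guess the initial TTL (64, 128, or 255), and from that the hop count and OS. */
    int initial = (ttl <= 64) ? 64 : (ttl <= 128) ? 128 : 255;
    const char *os = (initial == 64) ? "likely Linux"
                   : (initial == 128) ? "likely Windows" : "likely BSD-derived";
    printf("TTL %d -> about %d hops, %s\n", ttl, initial - ttl, os);
    return 0;
}
```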
Receiving a “destination host unreachable” message is interesting in itself, and tells you about the router that should be able to reach the given IP. Then there’s the broadcast IP, which sends the message to every IP in the subnet. Using something like Wireshark for packet capture is enlightening here: the command itself may only show one response, even though multiple devices may have responded. Each of those responses has a MAC address that can be looked up to figure out the vendor. Another interesting trick is to spoof the source IP address of a ping packet, using a machine you control with a public IP address. Ping every device on the network, and many of them will send the response via their default gateway. You might find an Internet connection or VPN that isn’t supposed to be there. Who knew you could learn so much from the humble ping.
Bits and Bytes
Internet Explorer is Really, Truly, Dead. If you were under the impression, as I was, that Internet Explorer was retired years ago, then it may come as a surprise that it was only finally done in this past week. This month’s Patch Tuesday was the last day IE was officially supported; from now on it’s totally unsupported, and it’s slated to eventually be automatically uninstalled from Windows 10 machines. Also arriving in this month’s patch drop was, at last, the fix for Follina, as well as a few other important fixes.
There’s a new record for HTTPS DDoS attacks, set last week: Cloudflare mitigated an attack consisting of 26 million requests per second. HTTPS DDoS attacks are a one-two punch of raw data saturation and server resource exhaustion. The attack came from a botnet of VMs and servers, with the largest slice coming from Indonesia.
Running the free tier of Travis CI? Did you know that your logs are accessible to the whole world via a Travis API call? And on top of that, the whole history of runs since 2013 seems to be available. It might be time to go revoke some access keys. Travis makes an attempt to censor access tokens, but quite a few of them make it through the sieve anyways.
Ever wonder what the risk matrix looks like for TPM key sniffing on boot? It’s not pretty. Researchers at Secura looked at six popular encryption and secure boot applications, and none of them used the parameter encryption features that would encrypt keys on the wire. The ironic conclusion? Discrete TPM chips are less secure than those built into the motherboard’s firmware.
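For the curious, parameter encryption is exposed through the TPM 2.0 enhanced system API. Here’s a rough sketch assuming the tpm2-tss ESAPI and an already-loaded sealed object; handles and algorithm choices are placeholders, and in practice the session should also be salted with a trusted TPM key so a bus sniffer can’t derive the session secret.

```c
#include <tss2/tss2_esys.h>

/* Sketch: unseal a TPM object with parameter encryption enabled, so the
 * unsealed secret travels encrypted on the bus instead of being readable
 * by a logic analyzer. 'sealed_obj' is assumed to be a loaded ESYS_TR. */
TSS2_RC unseal_encrypted(ESYS_CONTEXT *ctx, ESYS_TR sealed_obj,
                         TPM2B_SENSITIVE_DATA **secret)
{
    ESYS_TR session = ESYS_TR_NONE;
    TPMT_SYM_DEF sym = {
        .algorithm = TPM2_ALG_AES,
        .keyBits   = { .aes = 128 },
        .mode      = { .aes = TPM2_ALG_CFB },
    };

    /* HMAC session that will also encrypt command and response parameters.
     * For real interposer resistance, pass a trusted TPM key as tpmKey
     * instead of ESYS_TR_NONE (kept short here for the sketch). */
    TSS2_RC rc = Esys_StartAuthSession(ctx, ESYS_TR_NONE, ESYS_TR_NONE,
                                       ESYS_TR_NONE, ESYS_TR_NONE, ESYS_TR_NONE,
                                       NULL, TPM2_SE_HMAC, &sym,
                                       TPM2_ALG_SHA256, &session);
    if (rc != TSS2_RC_SUCCESS)
        return rc;

    rc = Esys_TRSess_SetAttributes(ctx, session,
                                   TPMA_SESSION_DECRYPT | TPMA_SESSION_ENCRYPT |
                                   TPMA_SESSION_CONTINUESESSION, 0xFF);
    if (rc != TSS2_RC_SUCCESS)
        return rc;

    /* The unsealed data now comes back encrypted under the session. */
    return Esys_Unseal(ctx, sealed_obj, session, ESYS_TR_NONE, ESYS_TR_NONE, secret);
}
```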
“… and is slated to eventually be automatically uninstalled from Windows 10 machines.” … While a couple of buggy spyware get installed at the same time.
Really, stopping support of it is ok, but uninstalling things from people’s machines is not.
If you’re surprised by software suddenly vanishing or appearing on your Windows device, I don’t know what to tell you. They’ve been doing this sort of thing since Win7. It is bad PR when a bunch of people running known-vulnerable software get exploited. They’ll blame Microsoft for not fixing the vulnerability (i.e. removing IE) and leave MS trying to put out the fires. From a company PoV, it makes more sense to remove IE and deal with the relatively small PR hit from people technical enough to care about it but not technical enough to quit using MS products.
If you’re okay with that, cool. If you’re going to complain about it, don’t. Just stop using Windows. Linux is pretty user-friendly these days so there’s very little reason for anyone to use Windows unless they’re stuck to closed-source software that doesn’t support Linux and they can’t run it in WINE or a VM.
Understood. However, simply notifying users that they are ending support for the software is more than enough, or at least ask the user if they want the software removed… If not, and they are worried about the PR or implications of the old software, then how do they handle cases where some are using software even older than XP, Win7, etc.? Does it give them the right to uninstall or cripple this old software… that someone bought and paid for? MS is a big boy and can handle PR, but to unilaterally make decisions because they “know” better than you is simply unacceptable.
To your point about simply abandoning MS… If one could, one would. However, there are a lot of applications that are not created for Linux and/or cannot be run in WINE and/or are cripplingly slow in a VM. Overall it is not as simple as you suggest… at least not currently. In the future it may be different, but it is not as easy as you suggest… at least not for everyone.
> To your point about simply abandoning MS… If one could, one would.
Very true, although only to a point. I observed that in 2022 a huge load of users still aren’t aware of alternatives. Some of them could be shown those alternatives and be given enough knowledge to make a choice, but for others it’s too late. The Microsoft ecosystem is carefully thought out with corporate profit in mind, which of course translates into tying users to MS applications that talk only to other MS applications in a way that leaves out any chance of migration to alternatives. Once a business is built around Microsoft (or Apple for instance, the magic word is “proprietary+closed”), the migration to a free ecosystem usually becomes painful, short of ditching everything and restarting from scratch.
Open Source should be embraced by schools, so that kids can develop their idea of what the FOSS ecosystem offers well before they’re exposed to proprietary environments at work.
>>> Open Source should be embraced by schools, so that kids can develop their idea of what the FOSS ecosystem offers well before they’re exposed to proprietary environments at work
You’re right, it should. But schools are offered cheap licences and free training for closed source products, and they are grateful for it. As for the students, they grow up knowing only Microsoft, Adobe, and Autocad. And if they go to a rich school, Apple stuff too.
It’s a real shame. Where I worked, we weren’t even allowed to *use* open source software unless it was on a special whitelist. So we weren’t allowed to use Inkscape instead of Illustrator, Freecad instead of Rhino, etc etc etc.
Not surprised, because the kind of people MS hires can’t be considered good company.
But simply and silently removing software from someone’s computer is wrong. I decide about my configuration or the software I want to have. If they want to inform me it is not secure, fine, that is nice and right. But the decision on using it and having it on the computer I paid for is mine.
Also, ok, it is possible to change to Linux. Where do I download this “Linux”? There are some 248 different distributions. Let’s just filter the best ones. Remove those that use systemd. Remove those that do not support some hardware people use normally. And so on. And so on.
When evaluating software for a couple of machines for work, Linux could be used, but it was put aside because of too many opinionated configurations. It doesn’t help if some obscure configuration can only be done at the terminal. One doesn’t want to (and sometimes can’t) keep a list of obscure post-install configuration steps to do.
One example? The computer’s date and time should be those set in the BIOS. Maybe some newer distro changes this, but the Ubuntu 12 through 18 versions would insist on doing it their own way, and we would need to remember to change some option through the terminal, instead of having a simple option in the control panel.
Another? In an updated Devuan installation, I cannot open graphical applications from the terminal when running as root, even if they need to be run as root (run under sudo they will not save their settings). I’ve already tried the lot of configurations found on SO. Sometimes it works a couple of times, then stops working altogether.
More? Too many applications to do any one thing, but each with some shortcoming.
Basically, it’s not that one could not change to Linux. It’s that it would involve losing a lot, and spending too much time fixing and configuring the computer instead of *using* the computer.
If you are coming from being a Windon’t admin/user you probably want a systemd distro, less culture shock I would suggest. Can’t say I am a fan, but at the same time it does function rather well now, and is more like the M$ way of doing things IMO.
And for almost everyone, these days even including gamers, you could install almost any Linux distro you liked and get a system that just works (ideally stick to the provided package manager for all your installs). It’s only those rare folks that need something esoteric who actually need to understand how to administer a Linux system, or figure out how to make application X play nicely with Wine – the same as they would for an odd setup in M$ land, the one caveat being that the hardware/software that creates this odd setup likely has better manufacturer support for Windows (for a while, anyway).
Obviously there WILL be some learning curve; there is a learning curve to any change, even a really minor one. So if you spent the last several decades grumbling at learning the new hoops M$ created for you to jump through with each new version, while mostly able to just use your existing knowledge, this will feel rather mountainous, as it’s not Windoze at all!
When 90% of your existing knowledge on administering a system is no longer of any use at all, even if it’s a simpler new system to learn, that is a great deal to catch up on. And I’m not convinced Linux is simpler – it’s certainly more configurable and controllable, to get the setup you actually want. Having not touched a native Windoze computer with any regularity in at least 10 years, I know I can’t remember anything of how to make Windoze work, whereas Linux now feels pretty comfortable and familiar – plus I can actually find useful documentation to figure out what is happening when something does need tinkering…
“From a company PoV, it makes more sense to remove IE and deal with the relatively small PR hit from people technical enough to care about it but not technical enough to quit using MS products.”
If they’re not technical enough to quit MS, then they’re not technical enough to use Linux.
Well, the other option is that Microsoft leaves it there, fully integrated into the OS, and any future malware has very easy access for owning your computer and adding it to one or more remote botnets.
I guess Microsoft is removing it to remove any legal liability they might have for gross incompetence. It is a bit like having a steel door in the wall of your house that, 6 months to a year later, decays into a sheet of cardboard held in place with one thumb tack, some sellotape, and a bit of string. Microsoft’s options are to remove it now and brick that hole up, hence removing any future legal liability, or leave it there as-is knowing full well the eventual consequences.
“I guess Microsoft is removing it to remove any legal liability they might have for gross incompetence.”
What liability? EULAs and disclaimers. Even open-source has those, and any attempts at making programmers responsible parties fall on deaf ears (but we’re not engineers…).
“automatically uninstalled from Windows 10” … That sounds like just another reason not to load Windows on your machines, to me. It should be up to the user to keep/not keep software or load software on ‘their’ machines. To me, this means M$ is treating IE as part of their OS (wasn’t there a lawsuit a few years back on just that premise?), not as an application installed on the OS….. Sad. That sounds like you could buy (for those that do) Office for Windows machines and one day find it ‘gone’ because the old version isn’t supported. Or have Win10 uninstalled as no longer supported…. Not a good precedent going forward. Glad I am Windows-free at home and simply use Firefox for my web browsing.
Some of us are scrambling at work, as some vendor web applications won’t run on Edge/Chromium/Firefox. Our IT department (Windows centric) will be installing ‘patches’ that scrub IE from our desktops next week….. Yikes.
I think for most people, Windows comes preloaded.
Don’t leave a sentence dangling like that!
“Windows comes preloaded with vulnerabilities!”
B^)
Just about every OS comes preloaded with vulnerabilities, doesn’t it? Unless it updates while installing?
Major freak out over nothing here.
If you’ve got something that relies on IE, then Edge has IE mode, and that’s supported until 2029 at the earliest.
And yes. IE is effectively part of the OS. That’s pretty obvious. It’s heavily tied to the OS version and always has been (or at least has for a very long time). And parts of the OS relied on it.
“And yes. IE is effectively part of the OS.”
That was Microsoft’s argument during the antitrust case. The woes of being a monopoly.
Hetzbleed > typo in the title of the article
>For example, take the time given for a ping response, and calculate a distance based on 186 miles per millisecond. That’s the absolute maximum distance away that host is, though a quarter and half of that amount are reasonable lower and upper limits for a distance estimate. TTL very likely started at 64, 128, or 255, and you can take a really good guess at the hops encountered along the way.
I think you have it backwards; that defines the minimum distance. The maximum can be a lot more.
No, you have it backwards. The distance is limited by that of a photon traveling in a vacuum for (half of?) the received ping time. It could have travelled a shorter distance, while experiencing delays that add up to the ping time, but it couldn’t have travelled a longer distance without breaking the laws of physics.
The worst part is that I sat and thought about that for a minute, trying to make sure I got it right.
*sits and thinks again* No, the speed of light is the speed limit. That defines the maximum possible distance. The packets actually travel slower than lightspeed, so the host is actually 50%-75% closer. I still think I got it right the first time.
But light moves through glass (the inner core of fibre optic cables is mostly glass, with a refractive index of around 1.5) about 50% slower compared to air (refractive index 1.000293) or vacuum (refractive index 1).
So unless the backhaul links are all microwave or other parts of the RF spectrum, the maximum distance estimate is about 50% higher than it should be, if you ASSume that most long-distance links go through fibre optic cables.
Couldn’t you turn boost clocks on/off in some cryptographically random way, while still getting close to the same performance as leaving them on, thus masking the actual timing?
Wait, so everybody is wetting the bed, so to speak, and not using TPM 2 parameter encryption? The ESAPI spec’s been around for years and people should have picked up on it. It’s not that discrete TPMs are that much less secure; this is a core feature that nobody is using because, I guess, they’re still targeting TPM 1.2 feature equivalence? Still, fTPM is probably a tiny bit more secure because the commands themselves become inaccessible, instead of just the secure parameters.
Remember, people here see TPM as a means for the man to control them: DRM, and “I’ll run whatever OS I want”.