The fun part of security audits is that everybody knows they’re a good thing, and also that they’re rarely performed before yet another range of products is shoved onto the market. This would definitely seem to be the case with the fingerprint sensors found on a range of laptops that are advertised as being compatible with Windows Hello. It all began when Microsoft’s Offensive Research and Security Engineering (MORSE) asked the friendly people over at Blackwing Intelligence to take a poke at a few of these laptops, only for them to subsequently blow gaping holes in the security of all three laptops they examined.
In the article by [Jesse D’Aguanno] and [Timo Teräs], the basic system and the steps they took to defeat it are described. The primary components are the fingerprint sensor and Microsoft’s Secure Device Connection Protocol (SDCP), with the latter tasked with securing the (USB) connection between the sensor and the host. Theoretically, the sensitive fingerprint-related data stays on the sensor, with all matching performed there (Match on Chip, MoC) as required by the Windows Hello standard, and with SDCP keeping prying eyes at bay.
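In rough outline, that protection amounts to the host challenging the sensor with a fresh nonce and accepting a match verdict only if it is cryptographically bound to that nonce. The Python sketch below illustrates just that idea; it is not the actual SDCP wire format, and the session secret (which real SDCP derives via key agreement with an attested device key) is simply assumed here.

```python
import hashlib
import hmac
import os

# Illustrative only: real SDCP derives the session secret via key
# agreement with an attested device key; here we just assume one exists.
session_secret = os.urandom(32)  # shared by host and sensor after pairing

def sensor_respond(nonce: bytes, match_ok: bool, secret: bytes):
    """Sensor side: matching happens on-chip (MoC); only the verdict
    leaves the chip, bound to the host's nonce with a MAC."""
    verdict = b"\x01" if match_ok else b"\x00"
    tag = hmac.new(secret, nonce + verdict, hashlib.sha256).digest()
    return verdict, tag

def host_verify(nonce: bytes, verdict: bytes, tag: bytes, secret: bytes) -> bool:
    """Host side: accept the verdict only if the MAC checks out for
    *this* nonce, so a replayed or spoofed 'match' is rejected."""
    expected = hmac.new(secret, nonce + verdict, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

nonce = os.urandom(16)  # fresh challenge per operation
verdict, tag = sensor_respond(nonce, True, session_secret)
assert host_verify(nonce, verdict, tag, session_secret)
```

The attacks described below all boil down to some piece of this check being absent, optional, or bypassable.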
Interestingly, the three laptops examined (Dell Inspiron 15, Lenovo ThinkPad T14 and Microsoft Surface Pro X) all featured different sensor brands (Goodix, Synaptics and ELAN), with different security implementations. The first used MoC with SDCP, but its security was much weaker under Linux, which allowed a fake user to be enrolled. The Synaptics implementation relied on a supposedly secure TLS connection whose key was derived from part of the information on the laptop’s model sticker, and the ELAN version didn’t bother with security at all, merrily responding to basic USB queries.
To say that this is a humiliating result for these companies is an understatement, and it demonstrates that nobody in their right mind should use fingerprint scanners or similar devices like these for access to personal or business information.
What happened to Hackaday Links this week? Taking a week off?
I think he took it off last year too.
I guess ‘Offensive Research’ doesn’t parse quite the same in USA-land.
“The fun part of security audits is that everybody knows that they’re a good thing…”
Not in my experience.
I’d say 60% of the time I’ve encountered hostility and another 35% of the time people literally refuse to believe such a thing exists outside the movies/tv.
I’ve been detained by police for “confessing to a crime” when I’ve stopped by to let them know I was hired to audit physical security, and to provide a copy of the signed permission letter. (Getting caught and handed over to the police would be a good thing. And having prearranged paperwork at the station helps smooth that interaction out.)
Didn’t you know, if everyone stopped talking about problems, all the problems would go away!
Now it makes good sense why Apple had the T1/T2 chips for Touch ID on Intel Macs. I don’t think anyone has broken them yet…
Exactly. I wonder if the researchers even tried testing an Apple TouchID device. :-)
Hat tip to Microsloth for successfully whiffing their own security audit.
I’m gonna guess you didn’t read the linked article, where they explain that it’s a hardware MITM attack combined with fingerprint readers and manufacturer-supplied drivers that have terrible security, plus an almost complete lack of security in the communication protocol between host and reader under Linux.
Still, MS bad ‘mmkay?
IMHO, kudos to MS for doing this and releasing the details which clearly prove the manufacturers of the readers and laptops are the ones to blame here.
Except that the worst performer of the hardware manufacturers is Microsoft! Any kudos they get for commissioning the research is completely nullified by failing to implement their own security measures.
Hopefully one day all computers will be sold free of that cursed legacy OS.
They didn’t manufacture the fingerprint sensor.
No, but they are the OEM in one of the cases, so I would hold them ultimately responsible for ensuring a secure implementation.
“To say that this is a humiliating result for these companies is an understatement, and demonstrates that nobody in his right mind should use fingerprint- or similar scanners like this for access to personal or business information.”
Convenience. The rest is locked behind a gesture lock that resembles a Konami code entered in 25 seconds else the phone blows up. No pressure.
But seriously NFC security key.
Remember that any biometric authentication is authentication with a plain-view password.
The only difference from the password sticker under the keyboard is that it is a complex password.
But still, use a long, cryptic, badly written cursive password on the sticker and you get almost the same security strength.
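To put rough numbers on that (illustrative figures, not measured data): the entropy of a random sticker password is simple arithmetic, and the commonly cited fingerprint false-accept rate of about 1 in 50,000 is far weaker in equivalent terms.

```python
import math

# Entropy of a random password drawn from printable ASCII.
alphabet = 94   # printable ASCII characters
length = 16     # a "long cryptic" sticker password
print(math.log2(alphabet ** length))  # ~104.9 bits

# A commonly cited fingerprint false-accept rate of about 1 in 50,000
# corresponds to only ~15.6 bits of equivalent guessing resistance.
print(math.log2(50_000))
```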
Last I checked in the US, the jury is still out on whether or not one can be compelled to unlock a biometric lock purely because of this concept. IIRC this is why phones for example cannot be unlocked with fingerprint on initial boot, it must be the passcode that unlocks it.
> IIRC this is why phones for example cannot be unlocked with fingerprint on initial boot,
> it must be the passcode that unlocks it.
Thanks, that rationale had never occurred to me.
You can be compelled to provide access to a device. You cannot be compelled to disclose a password. You can “forget” a password, but you WILL be compelled to use your biometrics to unlock devices.
You also don’t need to be alive to provide biometrics. Think about that for a second.
Remember also that the keyboard connection itself is unsafe, so tucking a simple keylogger inside your laptop is still an easy task.
i.e.: if your laptop is unsecured, it’s insecure.
Yup, if a ‘bad actor’ has physical access to a machine then you can assume it’s been compromised.
Really secure machines are signed out, signed in and then locked away at the end of the working day.
The difference here is that if I confiscate your laptop a keylogger won’t help me, but if a fingerprint reader holds the password and it’s poorly secured that’s jackpot! See?
I’ve got two instances of Windows 10 installed on my TPM-enabled laptop. Boot one Windows, register fingerprints, and it works fine. Boot the other and the registered fingerprints get promptly deleted. Why? To prevent someone from booting a second OS, registering their own fingerprint and then booting the first OS with it. Clever protection, isn’t it? The only question is why in the bleeping heck would the second OS be allowed to change the fingerprints without having to authenticate in some way (i.e. providing a valid matching fingerprint)? The design doesn’t seem too thoroughly thought out.
> (i.e. providing a valid matching fingerprint)?
For THIS one it’s easy: to allow the machine to be used after the user’s fingerprint is altered/destroyed (like an accident while opening a can, which happened to me).
The fingerprint data and verification stay in the sensor? But that’s a guarantee of a man-in-the-middle attack working. Just like when you have a door access system with the verification done in the pad on the outside…
I guess they did that so that fingerprints don’t get transmitted to the operating system and then potentially leaked… it is difficult to change your fingerprints if they’ve been leaked online.
Seems strange to complain about an obvious inherent security flaw though. I bet every engineer involved flagged this up the chain.
Because if you do the auth in the OS, then you need a plaintext password on the disk! That design is not the problem. The password needs to be encrypted in the reader and only accessible after auth. On the other side, the reader and OS need to be cryptographically paired to avoid leaks and spoofs.
Public key cryptography solves that problem. As long as the private key in the device stays private, a MITM can’t impersonate it.
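As a minimal sketch of that point (using Ed25519 via the Python cryptography package; the pairing step where the host learns the sensor’s public key is assumed, not shown):

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The sensor holds the private key; the host pinned the matching
# public key at pairing time (pairing step assumed, not shown).
device_key = Ed25519PrivateKey.generate()
pinned_public_key = device_key.public_key()

challenge = os.urandom(16)              # fresh host challenge per authentication
signature = device_key.sign(challenge)  # only the real sensor can produce this

try:
    pinned_public_key.verify(signature, challenge)  # raises if forged
    print("genuine sensor")
except InvalidSignature:
    print("impostor in the middle")
```

A MITM can relay or replay traffic, but without the private key it can’t produce a valid signature over a fresh challenge.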
Direct physical access to the hardware has always been a trump card. And biometric in my mind has always been about convenience vs maximum security against motivated adversaries who don’t mind chopping off your thumb if necessary. For one thing, if it’s fingerprints you can only have 10 of them, and if they’re all compromised you’re just out for life. These are sad mistakes, but if I had to choose which mistake to make I’d make the one where the biometric data is safe but the computer is unsafe, rather than vice versa. If you don’t have the really deathly critical stuff protected by something other than your windows login… why not? It’s probably also important enough to reside somewhere other than your laptop’s disk anyway.
I’d rather someone with physical access get into the device rather than get my biometric data. Still bad, but if you have anything critical enough that long term physical access can’t be allowed to get into it, maybe it’s also critical enough not to store it on one single laptop and instead to put it behind a separate layer of protection and accessible in multiple places for safety against loss.
Oh, hey, my original comment finally loaded. XD
Interesting read actually.
Why doesn’t it surprise me that Microsoft would create this framework, then have the worst implementation of it?!
The article also reinforces the importance of physical security.
I used to have so many customers forget their passwords that I started to carry a custom USB key that would boot a modified version of the chntpasswd tool and automatically erase the admin password from the system HD. Plug. Boot. Beep. Beep Beep. “Please remove USB key and press any key”. Beep. Reboot. Log in as admin with no password.
Occasionally I used it to convince customers of the importance of physical security when they pooh-poohed the threat as being overblown. It’s an especially effective demo if you can hide the USB stick from them until after you’ve logged back in, as they only see you hit one key and you’re in.
Showing them the USB key AFTER you’ve gotten in has, for some psychological reason, more impact.
Anyway, this article only reinforces the old IT Security saying “If I can touch it, I can own it”