A large chunk of the global economy now rests on public key cryptography. We generally agree that with long enough keys, it is infeasible to crack things encrypted that way. Until such time as it isn’t, that is. Researchers published a paper a few years ago in which they cracked a large number of keys in a very short amount of time. It doesn’t work on just any key, as you’ll see in a bit, but here’s the interesting part: they used an undisclosed algorithm and a single-core computer to do the cracking. This piqued [William Kuszmaul’s] interest, and he found some follow-up papers that revealed the algorithms in question. You can read his analysis and decide for yourself how badly this compromises common algorithms.
The basis for public key cryptography is that you multiply two large prime numbers to form a product and post it publicly. Because it is computationally difficult to find the prime factors of large numbers, the scheme is reasonably secure as long as the two primes are large and chosen at random.
However, that random selection opens the door to an unusual attack. Public keys, by their very nature, are available all over the Internet. Most of them were generated with the same algorithms, and random number generation isn’t always as random as it should be. That means some keys share a prime factor, and finding a common factor between two numbers isn’t nearly as difficult as factoring.
In fact, that’s the heart of the problem. Factoring a 232-digit number took about 1,500 years of computing time, but finding a common factor between two numbers is much easier. Even so, the original research paper mentions they found common factors across 36 trillion pairs of keys in a matter of hours, which was faster than [William] expected.
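To see why a shared factor is so devastating, here’s a minimal Python sketch (an illustration with made-up, toy-sized primes, not the researchers’ code) in which a single gcd() call between two moduli that happen to reuse a prime breaks both keys at once:

```python
from math import gcd

def shared_factor_attack(n1, n2):
    """If two RSA moduli share a prime, one gcd() call exposes it."""
    p = gcd(n1, n2)
    if p in (1, n1, n2):
        return None                       # no usable common factor
    # Both moduli now factor completely, so both private keys fall.
    return (p, n1 // p), (p, n2 // p)

# Toy-sized demo: two "independent" keys that accidentally reused 104729.
n1 = 104729 * 1299709
n2 = 104729 * 15485863
print(shared_factor_attack(n1, n2))
# ((104729, 1299709), (104729, 15485863))
```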
Before you get too alarmed, the researchers looked at 6.2 million keys and were able to crack fewer than 13,000. Not exactly a gaping security hole, unless you are one of the 13,000, of course. Still, the post highlights two things: first, speeding up algorithms is usually more effective than building faster computers, and second, you never know when your carefully crafted encryption is going to be rendered worthless by a better algorithm.
It is widely thought that quantum computing will significantly change what you’ll have to do to be secure. We’ve seen before where an implementation of an algorithm is a weak point.
“computersm”
Is that the part when you whip the chips? :P
You gotta bond them first.
(Yah, silly, but I just couldn’t resist)
Does this mean you have a high probability of cracking any RSA key if you are able to cause either end to re-generate its private random primes? Or is it part of the spec to only ever do that once?
Once per key pair, but not once ever. A system could be used to generate countless key pairs, each needing its own set of random primes. But yes, both random primes must be kept secret, as that is what the private key is derived from.
As nearly all common computers work with pseudo-random numbers, usually lacking hardware to generate true randomness, the real trial by fire is how close (or not) to random those pseudo-random numbers are.
If a given system tends to favor certain numbers over others when generating pseudo-random values, you can put the more favored numbers at the top of the list when brute-forcing, and of course have a better chance of guessing right sooner.
As a simple example, take the number 10,399,403,993
What two 6-digit prime numbers did I multiply together to get that? It would take you a while to try all of them, as there are tens of thousands of 6-digit primes to multiply together and check against that value.
But say you knew my random number generator picked 100001 far more often than it should. Now you can just divide by it, verify the result is prime, and you have my private key in a single math operation instead of tens of thousands of them.
The factors of 10,399,403,993 are 11, 9091, and 103993.
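For anyone playing along at home, a few throwaway lines of Python with plain trial division confirm that factorization and show where the 6-digit prime went (this check is an aside, not part of the original comment):

```python
def factorize(n):
    """Plain trial division -- fine for toy numbers like these."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factorize(10_399_403_993))   # [11, 9091, 103993]
print(100001 * 103993)             # 10399403993 -- and 100001 == 11 * 9091
```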
Yup, and as the number of digits is known (6 digits in this case), that last one is the right factor.
Of course I used tiny numbers just to help keep the scale down and understandable, but the effect is the same.
With an RSA key you tend to find the sizes are 1024-, 2048-, 4096-, or 8192-bit numbers.
You can use nearly any size of course, but the larger the numbers, the longer it takes to work with them. It’s a pretty big time difference on a 40 MHz processor, but as tech marches on it takes far less time on current processors, and in the near future it will only take seconds.
That’s why RSA had to be retired a few years back. That’s roughly the point where brute-forcing it became fast enough to be unacceptable.
Not to mention we now have pre-computed lists of prime numbers that are instantly searchable, and plenty of optimization attacks like the one in the article here.
Just like with safes, no one ever claims encryption is “never crackable”. Safes specify the time it takes to cut through them with a blowtorch, while encryption algorithms specify how long Moore’s law figures a brute-force attack would take within a person’s lifetime, but they rarely can take into account optimization tricks until after those are discovered. (Though 6-digit numbers were never actually acceptable :P )
What?
To generate an RSA-4096 key, you need to find two primes in the range of 2^2048. There are about 2^2037 primes in that range. The Prime Number Theorem says that there are roughly N/ln(N) primes less than N; for N = 2^2048, that’s (2^2048)/(ln 2^2048) = (2^2048)/(2^11 ln 2) = (2^2037)/ln 2. RSA can’t use all of those primes, but it can use a large fraction of them. Even if it can only use 1 in a million, that still leaves more than 2^2000 primes suitable for RSA-4096.
Roughly speaking, that’s far more than a googol primes for every fundamental particle (electrons, quarks, neutrinos, and photons) in the observable universe.
You can’t brute-force that. If you want to break it, you have to use other methods. You can’t pre-compute that list.
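As a back-of-the-envelope check on those figures (a quick sketch of the Prime Number Theorem arithmetic, not taken from the comment above), Python’s math module gets there in a few lines:

```python
import math

# Prime Number Theorem estimate: pi(N) is roughly N / ln(N).
bits = 2048
ln_N = bits * math.log(2)                    # ln(2^2048)
log2_count = bits - math.log2(ln_N)          # log2( 2^2048 / ln(2^2048) )
print(f"roughly 2^{log2_count:.1f} primes below 2^{bits}")
# -> roughly 2^2037.5 primes below 2^2048

# Even if RSA could only use 1 in a million of them:
print(f"roughly 2^{log2_count - math.log2(1_000_000):.1f} usable primes")
# -> roughly 2^2017.6
```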
In this case, they are looking at pairs of keys, pq and rs, and looking for common factors — that is, when p=r, p=s, q=r, or q=s. There are enough primes in the suitable range that there shouldn’t be easily-found common factors, if the primes are chosen with a halfway decent random method.
That they found so many pairs so quickly implies that the primes aren’t being chosen with a halfway decent random method. Some primes are showing up more often than they should (like, more than once). That means the tools generating those keys aren’t using a halfway decent random method for selecting primes. One of the comments on the linked article suggests that the original paper hinted that the cracked keys were from embedded devices which generated a key on initial boot, thus having little or no entropy to generate good random numbers.
I made a sieve that can probably solve that pretty quickly. It just has a bit of a problem with large numbers. However, the algorithm is sound. I do need to remake it to get rid of all of the castings and tighten it up but it was part of a college assignment with certain constraints.
https://hackaday.io/project/8137-prime-factor-reduction-sieve
“The basis for public key cryptography is that you multiply two large prime numbers to form a product and post it publicly.”
Well, that’s the basis for RSA. RSA is just one of many possible public key systems. It’s also been deprecated for a while, for example NIST removed it from Suite B about 3-4 years ago, IIRC. The reasons given were that it didn’t scale very well (in terms of the size of keys needed to cope with the expected growth in cracking power) and isn’t quantum resistant (which may matter at some future date).
Key exchange based on elliptic curves is still recommended for some applications though.
BTW, NIST is running a crypto competition for post-quantum key exchange and digital signature algorithms.
Wikipedia: https://en.wikipedia.org/wiki/Post-Quantum_Cryptography_Standardization
Expect one or more of these to start hitting your favourite network stack / operating system in a couple of years.
… but even these will fail if the random number source isn’t any good.
>… but even these will fail if the random number source isn’t any good.
you probably are using intel, then. intel is not very good, as of late, in most things they try.
MY random number sources are quite ok.
I do worry about the embedded systems (and even servers) that lack a hardware entropy source and will generate key pairs at startup, well before enough entropy has been gathered to make good keys. They are then stuck with a possibly easy-to-guess key pair until the next boot.
Last I heard AES256 was not crackable by a quantum computer in this universe. Has this been revised?
Maybe in the next universe over?
*gasp*; so /that’s/ how it works!
There are no indications a quantum computer could crack it faster than a conventional however AES* isn’t a public key algorithm.
(* Rijndael is a much cooler name IMHO)
I thought that quantum computing gave a quadratic speedup in this sort of application. As in, it could check 2^256 keys in 2^128 steps. Given that quantum computers are small and slow compared to classical computers these days, those 2^128 steps still means “practically secure”.
@blaisepascal2014
Quantum algorithms and P vs NP come into play here. I’ll give my poor explanation: some problems don’t get solved faster just because you throw more horsepower at them.
For symmetric ciphers, you generally double the number of bits to get equivalent key strength under the assumption of quantum attacks (using presently known means).
For asymmetric, things are much worse, hence the panic in the streets.
AES256 is just a block cipher. To be useful for actual communications, a block cipher needs to be run in a “mode”. Some of the AEAD modes (that are meant to provide authentication as well as privacy) such as GCM are looking pretty weak.
https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
https://en.wikipedia.org/wiki/Galois/Counter_Mode
Well, these days there are ways of recovering AES keys using biclique attacks and other advanced attacks, and these are faster than brute-forcing an AES-enciphered text.
While the link goes to Kuszmaul’s current blog, that entry is clearly based on two earlier entries on his previous blog. Part 2 has an alternate explanation of the algorithm with tree graphs:
https://mathandmaking.wordpress.com/2015/12/15/breaking-an-unbreakable-code-part-1-the-hack/
https://mathandmaking.wordpress.com/2015/12/21/breaking-an-unbreakable-code-part-2-the-algorithm/
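For the curious, here’s a rough Python sketch of the product-tree/remainder-tree idea those posts describe (a simplified reconstruction, not the authors’ code): multiply all the moduli together once, push remainders back down the tree, and each key then needs only one final gcd instead of a gcd against every other key.

```python
from math import gcd

def product_tree(nums):
    """Bottom-up tree of partial products; nums are the leaves."""
    tree = [list(nums)]
    while len(tree[-1]) > 1:
        prev = tree[-1]
        tree.append([prev[i] * prev[i + 1] if i + 1 < len(prev) else prev[i]
                     for i in range(0, len(prev), 2)])
    return tree

def batch_gcd(moduli):
    """For each modulus, the gcd with the product of all the others."""
    tree = product_tree(moduli)
    rems = tree.pop()                        # root: product of every modulus
    while tree:
        level = tree.pop()
        rems = [rems[i // 2] % (n * n) for i, n in enumerate(level)]
    return [gcd(r // n, n) for r, n in zip(rems, moduli)]

# Toy demo: the middle two moduli secretly share the prime 104729.
moduli = [101 * 103, 104729 * 1299709, 104729 * 15485863, 107 * 109]
print(batch_gcd(moduli))   # [1, 104729, 104729, 1]
```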
Moore’s law is supposed to plateau due to limitations in neighboring molecular resistance and tolerance, where the size of a transistor leads to misfires that connect random gates. We are pushing 10 nm MOSFETs now; below 2 nm and smaller you are at the scale of the electrostatic effects of DNA codons, an acid-base chemical cellular operation.
Theoretically we could continue Moore’s law by increasing die size, just use wafer scale designs and increase the wafer size.
Stepping way back from the algorithm, it seems that the method is good, but the systemic failure is in people using the same keys over and over again from the same sources, and having them circulating so that pairs can be found.
Can we hear some applause for passwords “login”, “admin”, and “password”?
The systemic failure here isn’t re-use of keys, it’s the use of a low-quality RNG to create keys. The same bad RNG distributed across a zillion keys creates a weakness (shared factors) that makes other keys potentially breakable.
It’s too bad there likely won’t be an analysis of which OS produced these weaker keys. While I would be prone to suggest it was Windows, there was a problem found with Linux RNGs a few years ago that could apply just as easily.
The comments to the linked article suggest that the original paper hinted that the weak keys were mostly from embedded systems that generated RSA key pairs on initial boot. They don’t have a lot of initial entropy.
If you have a pseudo-random number generator with only 10 bits of entropy in its initial state, you are going to get about 1,000 different “random” sequences coming out of it. If that PRNG is used to generate a key pair, that’s only about 1,000 different key pairs.
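As a toy illustration of that failure mode (a hypothetical sketch using Python’s non-cryptographic random module and deliberately tiny primes so it runs in seconds), give each “device” only 10 bits of boot-time entropy and collisions between supposedly independent keys show up immediately:

```python
import random
from math import gcd

def tiny_prime(rng):
    """Pick a small 'random' prime by trial division (toy sizes only)."""
    while True:
        candidate = rng.randrange(10_000, 100_000) | 1
        if all(candidate % d for d in range(3, int(candidate ** 0.5) + 1, 2)):
            return candidate

# Each "device" boots with only 10 bits of entropy to seed its PRNG,
# so there are at most 1024 distinct key pairs among all of them.
moduli = []
for device in range(2000):
    rng = random.Random(random.getrandbits(10))
    moduli.append(tiny_prime(rng) * tiny_prime(rng))

# Count pairs of "independent" keys that share at least one factor
# (often the shared factor is the entire key).
shared = sum(1 for i in range(len(moduli)) for j in range(i + 1, len(moduli))
             if gcd(moduli[i], moduli[j]) > 1)
print(f"{shared} key pairs share a factor among {len(moduli)} devices")
```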
Reminds me of the refrain from “Secrets from the Future” by MC Frontalot
Slightly off topic,
Since generating random numbers is hard to do, could it be possible to build a really slick random number generator using some fancy and semi-expensive process (i.e. more than what most people would be willing to pay for their very own RNG machine) and create a random number server? Sort of like how we have time servers. Just ask for a random number of a smallish size for free, and pay a really small fee for a bigger and better random number. Of course, it should ensure that no one is likely to get the same big number as anyone else.
I have no idea what the drawback of such a thing would be.
trust
The drawback is that you have so many RNG servers to choose from now, getting traction on a new one is difficult. Wikipedia provides a non-exhaustive list at https://en.wikipedia.org/wiki/List_of_random_number_generators#Random_number_servers
In reality, most modern operating systems can provide pretty good random number services themselves.
Linux provides two random “devices”, /dev/random and /dev/urandom. Both are based off the same kernel-level random number generator. That generator collects entropy from random system events — keyboard and disk timing, network jitter, hardware RNG sources, etc — and manages it in a pool. A keystroke might add a few bits of entropy to the pool, reading a byte from the random devices will remove 8 bits from the pool, etc. The difference between the two is that /dev/random will block if the system believes there is too little entropy in the pool, while /dev/urandom won’t.
I believe that Windows and OS X provide similar services, with similar guarantees.
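In Python, for example, the usual way to tap that kernel-level source is os.urandom() or the secrets module (a general note, not something from the comment above); both draw from the OS CSPRNG rather than a user-space generator you have to seed yourself:

```python
import os
import secrets

# Both of these pull from the kernel's CSPRNG (/dev/urandom on Linux,
# the equivalent facilities on Windows and macOS).
raw = os.urandom(32)                    # 32 bytes of OS-provided randomness
token = secrets.token_hex(32)           # convenience wrapper, same source
candidate = secrets.randbits(2048) | 1  # e.g. an odd starting point for a prime search

print(raw.hex())
print(token)
```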
The “exploit” here is not because of a bad PRNG per se, but because the keys are being generated by identical PRNGs with too little starting entropy, and by poorly-designed algorithms for turning that data into primes.