[Sam Greydanus] created a neural network that can encode and decode messages just as Enigma did. For those who don't know, the Enigma machine was most famously used by the Germans during World War II to encrypt and decrypt messages. Give the neural network some encrypted text, called the ciphertext, along with the three-letter key that was used to encrypt it, and the network predicts the original text, or plaintext, with around 96-97% accuracy.
The type of neural network he used was a Long Short-Term Memory (LSTM) network, a type of Recurrent Neural Network (RNN) that we talked about in our article covering many of the different types of neural networks developed over the years. RNNs are Turing-complete, meaning they can simulate any Turing machine and thus compute anything that is computable. [Sam] noticed the irony in this, namely that Alan Turing both came up with the concept underlying Turing-completeness and played a big part in breaking the Enigma used in World War II.
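To make the setup concrete, here is a minimal sketch of how such a network could be wired up. This is not [Sam]'s actual code or architecture; the framework choice (Keras), the layer width, the message length, and the one-hot character encoding are all illustrative assumptions.

```python
# A rough sketch (not [Sam]'s implementation) of a character-level LSTM
# that reads the 3-letter key followed by the ciphertext and predicts the
# plaintext. All sizes below are illustrative assumptions.
import tensorflow as tf
from tensorflow import keras

VOCAB = 27     # 26 letters plus one padding symbol (assumption)
KEY_LEN = 3    # the three-letter Enigma key
MSG_LEN = 14   # example message length, chosen arbitrarily

# Input: key characters followed by ciphertext characters, one-hot encoded.
inputs = keras.Input(shape=(KEY_LEN + MSG_LEN, VOCAB))

# A single LSTM layer emits one hidden state per time step.
hidden = keras.layers.LSTM(512, return_sequences=True)(inputs)

# Predict a distribution over the alphabet at every time step; during
# training, the target plaintext would be padded so each step has a label
# (one plausible training setup, not necessarily the one [Sam] used).
outputs = keras.layers.Dense(VOCAB, activation="softmax")(hidden)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The key idea this sketch captures is that the network never sees the Enigma's rotors or wiring; it only sees (key, ciphertext) pairs as input and plaintext as the training target, and must learn the cipher's behavior from examples alone.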
How did [Sam] do it?