Google AI: No one knows how it works

[51CTO.com Quick Translation] Google Brain has created two AIs that developed their own encryption scheme to protect their messages from the prying eyes of a third AI, which itself evolves to try to crack the encryption the pair generates. The results were a clear success: the first two networks learned to communicate with each other securely.

The encryption system is set up as follows: P is the input plaintext, K the shared key, C the ciphertext, and P_Eve and P_Bob are the plaintexts recovered by Eve and Bob, respectively.

The Google Brain team (based in Mountain View, and distinct from the DeepMind team in London) began by building three fairly ordinary neural networks, which they named Alice, Bob, and Eve.

Each network has a very specific goal: Alice must send a secure message to Bob; Bob must decrypt that message; Eve must intercept the message and try to decrypt it. Alice and Bob have one big advantage over Eve: they start out with a shared key (that is, this is symmetric encryption).

Importantly, the AIs are not taught how to encrypt information or which encryption techniques to use: they are simply given a loss function (in effect, a definition of failure) and must work everything else out from there.

For Eve, the loss function is very simple: the distance, measured in correct and incorrect bits, between Alice's original plaintext and Eve's guess. The loss for Alice and Bob is more complicated: Bob fails if his guess (again, measured in bits) is too far from the original plaintext, and Alice additionally fails if Eve's guesses are more accurate than random guessing. Together, this arrangement forms a generative adversarial network (GAN).
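The loss structure described above can be sketched as follows. This is a simplified illustration, not the paper's exact formulation: bits are taken as 0/1 and errors are counted per bit, and the penalty term assumes that a random guesser gets half the bits wrong on average.

```python
import numpy as np

def bit_errors(p, guess):
    # Number of mismatched bits per example; plaintext bits are 0/1,
    # guesses may be floats and are rounded before comparison.
    return np.abs(p - np.round(guess)).sum(axis=-1)

def eve_loss(p, p_eve):
    # Eve's only goal: reconstruct the original plaintext.
    return bit_errors(p, p_eve).mean()

def alice_bob_loss(p, p_bob, p_eve, n_bits):
    # Bob must reconstruct P; on top of that, Eve should do no better
    # than random guessing, which gets n_bits / 2 bits wrong on average.
    bob_term = bit_errors(p, p_bob).mean()
    eve_gap = n_bits / 2 - bit_errors(p, p_eve).mean()
    return bob_term + eve_gap ** 2 / (n_bits / 2) ** 2
```

Note that Alice and Bob's loss is minimized when Bob decodes perfectly and Eve is exactly at chance level: Eve doing *too badly* would also leak information, so the penalty is symmetric around random guessing.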

Alice, Bob, and Eve all share the same "mix and transform" neural-network architecture, but each is initialized independently, and the only link between them is the key shared by Alice and Bob. For Alice, the key and the plaintext are fed into the first layer of the network; for Bob, the inputs are the key and the ciphertext; Eve receives only the ciphertext. The first layer is fully connected, so the plaintext and key bits can mix. It is followed by several convolutional layers, which learn to apply some function to the bits delivered by the layer before them; nothing tells them what that function should be, so they must learn it step by step. For Alice, the final layer emits the ciphertext; for Bob and Eve, it emits what each believes to be the correct plaintext.
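A minimal forward-pass sketch of that "mix and transform" idea, using plain numpy. The layer sizes, kernel size, and stride here are illustrative assumptions, not the paper's actual hyperparameters; the point is the shape of the computation: a fully connected layer fuses key and plaintext, then convolutions transform the result.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(x, w, b):
    # Fully connected "mix" layer: lets every key bit interact
    # with every plaintext bit.
    return np.tanh(x @ w + b)

def conv1d(x, kernel, stride=1):
    # Minimal 1-D convolution with tanh activation ("transform" layers).
    k = len(kernel)
    out_len = (len(x) - k) // stride + 1
    return np.array([np.tanh(np.dot(x[i * stride:i * stride + k], kernel))
                     for i in range(out_len)])

# Hypothetical sizes: 16-bit plaintext and 16-bit key, encoded as -1/+1.
n = 16
plaintext = rng.choice([-1.0, 1.0], size=n)
key = rng.choice([-1.0, 1.0], size=n)

# Alice's pass: concatenate plaintext and key, mix with the FC layer,
# then apply a convolutional transform; the output is her ciphertext.
x = np.concatenate([plaintext, key])
w, b = rng.normal(size=(2 * n, 2 * n)), np.zeros(2 * n)
h = fc(x, w, b)
ciphertext = conv1d(h, kernel=rng.normal(size=4), stride=2)
```

Because the tanh nonlinearity is used throughout, the ciphertext values are continuous numbers in (-1, 1) rather than clean bits, which matches the researchers' later observation that the learned scheme is not purely binary.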

Bob's and Eve's reconstruction errors are tracked throughout training. Eve can be seen steadily improving, but Alice and Bob keep changing their encryption method and repeatedly thwart Eve's "conspiracy".

The results are... complicated. Some runs failed completely, with Bob unable to reconstruct Alice's messages at all. But in most cases Alice and Bob built an evolving system that let the two communicate with a very low error rate.

In some tests, Eve did manage reconstructions better than random guessing, but Alice and Bob would then typically improve their encryption until Eve fell behind again (see the chart for details).

The researchers were unable to analyze in detail the encryption method Alice and Bob devised, but from specific training runs they found that it depends on both the key and the plaintext content. "It is not a simple XOR, however. In particular, the output values are often floating-point values other than 0 and 1," they explained.
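For contrast, the XOR scheme the researchers are ruling out looks like this: a classical one-time-pad-style cipher, where applying the same bitwise operation with the key both encrypts and decrypts, and every value stays a clean 0 or 1.

```python
def xor_bits(bits, key):
    # Classical XOR with a shared key: the same operation
    # encrypts and decrypts, since (b ^ k) ^ k == b.
    return [b ^ k for b, k in zip(bits, key)]

plaintext  = [1, 0, 1, 1, 0, 0, 1, 0]
key        = [0, 1, 1, 0, 1, 0, 0, 1]
ciphertext = xor_bits(plaintext, key)
recovered  = xor_bits(ciphertext, key)
```

The learned scheme evidently does something messier than this, mixing key and plaintext through continuous-valued transformations rather than a clean per-bit operation.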

In summary, researchers Martín Abadi and David G. Andersen say that neural networks can indeed learn to protect the content of their communications, simply by telling Alice to value secrecy above all else. More importantly, this secrecy is achieved without prescribing any particular set of encryption algorithms.

Of course, there are plenty of other techniques available besides symmetric encryption of data, and the researchers note that future work will look at steganography (hiding data within other media) and asymmetric (public key) encryption.

As for whether Eve will be a strong security challenge, the researchers said: "While it seems unlikely that neural networks will play a significant role in cryptanalysis, they have great potential for metadata and traffic analysis."

Original title: Google AI invents its own cryptographic algorithm; no one knows how it works

Original author: Sebastian Anthony

[Translated by 51CTO. Please indicate the original translator and source as 51CTO.com when reprinting on partner sites]
