First, I just wanted to thank the folks at Cornell University Library for hosting and supporting open access to ArXiv. I have read a ton of articles from that repository. The good folks over at Google chose this open access repository to publish a new paper, which means the paper is free and easy to access.
https://arxiv.org/pdf/1610.06918v1.pdf
I read the recently published October 21, 2016 paper by Martín Abadi and David G. Andersen, “Learning to Protect Communications with Adversarial Neural Cryptography”. It was a surprisingly easy read: the language is accessible, with large sections written without being overly technical, and the paper is only 15 pages long. The abstract opens with, “We ask whether neural networks can learn to use secret keys to protect information from other neural networks. Specifically, we focus on ensuring confidentiality properties in a multiagent system, and we specify those properties in terms of an Adversary.” I ran into a lot of coverage of this paper on various tech news sites, which prompted me to go out and read it myself. It is an academic-style, moderator-accepted paper that really did receive a lot of mainstream coverage, and we have to appreciate when highly technical prose breaks through to the news like that.
The premise is very interesting. The team at Google Brain set up three neural networks. The first two, Alice and Bob, were designed to communicate with each other securely. The third network, Eve, was designed to intercept the communications between Alice and Bob. All the networks were aware of each other, but only Alice and Bob were working toward the common goal of securing their communications. The most interesting part of the paper is the conclusion that neural networks can learn to protect communications: Alice and Bob were successful, while Eve could not reliably recover the secured messages. The experiment was done in TensorFlow. I’m hopeful the code really does get dropped publicly for replication; the paper notes, “We plan to release the source code for the experiments”. I would like to set this up on my computer running Ubuntu Studio and TensorFlow for fun.
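Since that source code isn’t out yet, here is a rough sketch of what the three-network adversarial setup might look like in TensorFlow. To be clear, this is my own guess at the training loop, not the paper’s code: I stand in simple fully connected networks for the paper’s dense-plus-convolutional architecture, use a simplified Alice/Bob loss, and train Alice/Bob and Eve once each per step instead of the paper’s alternating schedule. The -1/+1 bit encoding and the idea that Eve should end up no better than random guessing do come from the paper.

```python
import tensorflow as tf

N_BITS = 16  # length of plaintext and key; the paper's examples use 16-bit values

def make_net(name):
    # Small fully connected stand-in for the paper's "mix & transform" architecture
    # (the real models use a dense layer followed by 1-D convolutions).
    # tanh keeps outputs in [-1, 1] to match the -1/+1 bit encoding.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(2 * N_BITS, activation="relu"),
        tf.keras.layers.Dense(N_BITS, activation="tanh"),
    ], name=name)

alice = make_net("alice")  # (plaintext, key) -> ciphertext
bob = make_net("bob")      # (ciphertext, key) -> plaintext estimate
eve = make_net("eve")      # (ciphertext)      -> plaintext estimate, with no key

ab_opt = tf.keras.optimizers.Adam()
eve_opt = tf.keras.optimizers.Adam()

def random_bits(batch):
    # Random -1/+1 vectors standing in for plaintexts and shared keys.
    bits = tf.random.uniform((batch, N_BITS), 0, 2, dtype=tf.int32)
    return tf.cast(bits, tf.float32) * 2.0 - 1.0

def train_step(batch=256):
    p, k = random_bits(batch), random_bits(batch)

    # Eve's turn: minimize her reconstruction error given only the ciphertext.
    with tf.GradientTape() as tape:
        c = alice(tf.concat([p, k], axis=1))
        eve_err = tf.reduce_mean(tf.abs(p - eve(c)))
    grads = tape.gradient(eve_err, eve.trainable_variables)
    eve_opt.apply_gradients(zip(grads, eve.trainable_variables))

    # Alice and Bob's turn: Bob should recover p, while Eve should do no better
    # than chance (a per-component error of about 1.0 under the -1/+1 encoding).
    with tf.GradientTape() as tape:
        c = alice(tf.concat([p, k], axis=1))
        bob_err = tf.reduce_mean(tf.abs(p - bob(tf.concat([c, k], axis=1))))
        eve_err = tf.reduce_mean(tf.abs(p - eve(c)))
        ab_loss = bob_err + tf.square(1.0 - eve_err)
    ab_vars = alice.trainable_variables + bob.trainable_variables
    grads = tape.gradient(ab_loss, ab_vars)
    ab_opt.apply_gradients(zip(grads, ab_vars))
    return bob_err, eve_err

for step in range(10000):
    bob_err, eve_err = train_step()
    if step % 1000 == 0:
        print(f"step {step}: bob_err={float(bob_err):.3f} eve_err={float(eve_err):.3f}")
```

The neat design choice, which the paper spells out, is in the Alice/Bob loss: instead of simply maximizing Eve’s error, it pushes Eve toward the error a random guesser would make, so Alice and Bob are not rewarded for producing ciphertexts that Eve could just invert.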