Profiling Google DeepMind Security

Thank you for tuning in to this audio-only podcast presentation. This is week 127 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “Profiling Google DeepMind Security.”

Things have been happening with Google DeepMind. A lot of news has dropped since I started crafting this post, but I don’t want to write about the news of the moment here. The context and analysis within this post aims to go deeper than a base reaction to recent headlines. The Verge (and a ton of other outlets) reported that, “Google’s big AI push will combine Brain and DeepMind into one team” [1]. Apparently, some drama surrounds those two organizations being combined. We will almost certainly see books about this from the principal players. Let’s hope they turn out as good as the 2020 memoir “Uncanny Valley” by Anna Wiener. Some of them won’t be that dynamic, but they should be.

I spent some time reading content from DeepMind. They have a solid research page [2]. It highlights some interesting work, including AlphaFold, WaveNet, and AlphaGo [3][4][5]. They also maintain over 190 repositories of code on GitHub that you could go take a look at [6]. You may even notice that they have signed the “Lethal Autonomous Weapons Pledge” [7][8].

Some good conversation is occurring about what happens when AI hallucinates, bias bounties, new bug bounties, and how red teams work [9]. You could also find out a little more about how the GitLab AI-based security feature helps scan codebases for vulnerabilities [10]. You could even go check out the Google Cloud Security Podcast episode 52, “Securing AI with DeepMind CISO Vijay Bolina,” from February 14, 2022 [11].

You really have to consider that even the best pair programming with assistive technology could introduce security vulnerabilities [12]. We could probably spend an entire week looking at Med-PaLM 2, the medical artificial intelligence system that was recently released [13]. They also shared a method for discovering faster matrix multiplication algorithms, which I thought was rather interesting [14].
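To give a concrete sense of what “a faster matrix multiplication algorithm” means, here is a short sketch of the classic Strassen scheme for 2x2 matrices, which gets by with 7 scalar multiplications instead of the naive 8. To be clear, this is not the DeepMind method itself; their work searches for schemes of exactly this kind (and found new ones for larger matrix sizes), while this is just the well-known textbook example for illustration.

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices A and B (lists of lists) using
    Strassen's 7 products instead of the naive 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine the 7 products into the 4 entries of the result.
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

def naive_2x2(A, B):
    """Standard definition of the product: 8 multiplications."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0],
             A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0],
             A[1][0]*B[0][1] + A[1][1]*B[1][1]]]
```

Saving one multiplication looks trivial at this size, but applied recursively to large matrices it lowers the asymptotic cost, which is why automated searches for even better schemes are a big deal.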

Don’t worry, I did go out and grab a couple (really four) scholarly papers to share with you this week:

Beattie, C., Leibo, J. Z., Teplyashin, D., Ward, T., Wainwright, M., Küttler, H., … & Petersen, S. (2016). DeepMind Lab. arXiv preprint arXiv:1612.03801.

Powles, J., & Hodson, H. (2017). Google DeepMind and healthcare in an age of algorithms. Health and Technology, 7(4), 351-367.

Tassa, Y., Doron, Y., Muldal, A., Erez, T., Li, Y., Casas, D. D. L., … & Riedmiller, M. (2018). DeepMind Control Suite. arXiv preprint arXiv:1801.00690.

Evans, R., & Gao, J. (2016). DeepMind AI reduces Google data centre cooling bill by 40%. DeepMind Blog, 20, 158.

Links and thoughts:

Top 5 Tweets of the week:

What’s next for The Lindahl Letter? 

  • Week 128: Democratizing AI system security
  • Week 129: Snapchat Security
  • Week 130: Generative model security
  • Week 131: Profiling Microsoft Azure security
  • Week 132: Engineering discipline security

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
