Profiling OpenAI Security

Thank you for tuning in to this audio-only podcast presentation. This is week 125 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is "Profiling OpenAI security."

Security professionals the world over are concerned about what people like some of Samsung’s engineers might be doing with OpenAI’s ChatGPT services [1]. Let’s step back from that very real breaking security news and set the stage. Instead of starting with a complete corporate history of OpenAI, you knew that I would go out and search Google Scholar to find some of the key OpenAI-related scholarly works [2]. Alternatively, you could surf over to the OpenAI website, where they host a section with over 100 papers, and download the PDFs straight from the source [3]. You could download papers like the much-maligned “GPT-4 Technical Report,” which was lengthy at over 100 pages but did not go into the model mechanics people were interested in understanding [4]. Other papers hosted on that page include the DALL·E 2 paper, “Hierarchical Text-Conditional Image Generation with CLIP Latents” [5]. It is a lot of great content, and you will get a sense that at one point OpenAI was sharing and building a great legacy of published work that helped people really understand where artificial intelligence was going. They also did some really awesome work building agents that competed in pretty complex video games against world-class players. That part of the equation is what caught my attention and really pulled me into trying to understand OpenAI.

Here are five interesting papers, each with a solid number of citations:

Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130.

Berner, C., Brockman, G., Chan, B., Cheung, V., Dębiak, P., Dennison, C., … & Zhang, S. (2019). Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680. 

Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., & Zaremba, W. (2016). OpenAI Gym. arXiv preprint arXiv:1606.01540.

Gawłowicz, P., & Zubow, A. (2019, November). ns-3 meets OpenAI Gym: The playground for machine learning in networking research. In Proceedings of the 22nd International ACM Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (pp. 113-120).

Zamora, I., Lopez, N. G., Vilches, V. M., & Cordero, A. H. (2016). Extending the OpenAI Gym for robotics: a toolkit for reinforcement learning using ROS and Gazebo. arXiv preprint arXiv:1608.05742.

Now that we have covered some of the decently cited scholarly content, it is probably good to zoom out from that space into the broader media coverage of OpenAI. Most of it is about the risk, reward, and fear that this type of generative AI fosters. You will see that security is not at the forefront of these discussions.

From The Verge: “OpenAI co-founder on company’s past approach to openly sharing research: ‘We were wrong’”. Ok. Where do you go from there…

You can hear Greg Brockman, an OpenAI co-founder, talk about the astonishing potential during a 30-minute TED Talk.

OpenAI CEO Sam Altman sat down with ABC News for a 20-minute interview here:

You can watch a long-form interview between Lex Fridman and Sam Altman here:

OpenAI also has a YouTube channel that gets a lot of views. They used to share some pretty interesting videos there.

Ok, at this point in our journey we have covered OpenAI in general and looked at what is being created. Now it’s probably best to try to explain the drama that surrounds the company as quickly as possible. Oddly, the drama is not focused on the technology being created, which has certainly entered the public mind. Elon Musk was an initial board member of OpenAI back in 2015 [6]. The company used to be a non-profit, and things have changed; that is where the real drama about the company exists. Microsoft now has a multi-billion dollar investment in OpenAI [7]. You can read the official Microsoft blog, where they detail that they invested in OpenAI three times, in 2019, 2021, and 2023, and that Microsoft is now OpenAI’s exclusive cloud provider [7]. You might be wondering how Elon Musk invested, or more accurately donated, $100 million to an OpenAI that is now clearly partnered with Microsoft [9]. It’s a confusing plot twist for sure, and a lot of media coverage exists trying to explain how exactly that happened [10]. I’m sure we will learn more about what exactly happened in the next couple of years.

Let’s circle back to the security realities of using things like OpenAI’s ChatGPT system. People are going to type things into the prompt, and some of that input might very well be intellectual property distinctly owned by somebody. You could ask a model to create more content in an author’s style or to extend the story of their characters. It’s also entirely possible that somebody might feed in proprietary code for review or ask the model to produce code.
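One practical mitigation teams have adopted for exactly this Samsung-style scenario is to scrub prompts before they ever leave the building. The sketch below is a minimal, illustrative pre-filter, not a complete data loss prevention solution; the patterns and placeholder names are my own assumptions for demonstration, and a real deployment would use a proper secrets scanner in front of whatever LLM API is in use.

```python
import re

# Illustrative patterns only (an assumption, not a complete secret taxonomy):
# replace likely secrets with placeholder tokens before the text is sent
# to a third-party LLM API.
SECRET_PATTERNS = [
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED_API_KEY]"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
                r"-----END [A-Z ]*PRIVATE KEY-----"), "[REDACTED_PRIVATE_KEY]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),  # US SSN shape
]

def scrub_prompt(text: str) -> str:
    """Return the text with recognizable secrets replaced by placeholders."""
    for pattern, placeholder in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Please review this config: api_key = sk-abc123 and SSN 123-45-6789"
print(scrub_prompt(prompt))
```

The point of the sketch is where the filter sits: on the sending side, before the network call, so that even a well-meaning engineer pasting a config file for review never transmits the raw secret.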

What’s next for The Lindahl Letter? 

  • Week 126: Profiling Hugging Face Security
  • Week 127: Profiling DeepMind Security
  • Week 128: Democratizing AI system security
  • Week 129: Snapchat Security
  • Week 130: Generative model security

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
