How does confidential computing work?

Thank you for tuning in to this audio-only podcast presentation. This is week 114 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “How does confidential computing work?”

Imagine you move part of your workload to an environment that you don’t fully own. That used to mean a datacenter; more recently it is probably a cloud environment. Sooner or later you start thinking about how to secure that environment, whether through hardware-based security or other means. Most of the promotional pages on the internet covering this topic lead with the idea of securing code and data while in use [1]. Seriously, the idea of confidential computing is everywhere on these cloud pages, but it is not something you hear about all the time more generally [2][3][4]. It is a topic that receives a lot of industry-specific coverage related to managing and operating cloud environments.
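To make that idea of securing code and data a bit more concrete, here is a purely illustrative Python sketch of the core confidential computing pattern: a client verifies a measurement (a hash) of the code it expects to run before releasing any key that unlocks its data. Real trusted execution environments such as Intel SGX, AMD SEV, or Arm CCA perform this measurement and attestation in hardware; the function names and the toy `ENCLAVE_CODE` below are my own assumptions for the sake of the sketch, not any vendor's API.

```python
import hashlib
import hmac
import os
from typing import Optional

# Toy stand-in for the code a hardware enclave would load and measure.
ENCLAVE_CODE = b"def process(data): return sorted(data)"

def measure(code: bytes) -> str:
    """Hash the enclave's code -- stands in for a hardware measurement."""
    return hashlib.sha256(code).hexdigest()

def attest_and_provision(expected: str, reported: str) -> Optional[bytes]:
    """Release a data key only if the reported measurement matches
    the known-good one; otherwise the data never leaves the client."""
    if hmac.compare_digest(expected, reported):
        return os.urandom(32)  # key the enclave may use to decrypt data
    return None

expected = measure(ENCLAVE_CODE)   # client's known-good measurement
reported = measure(ENCLAVE_CODE)   # what the "enclave" reports back
key = attest_and_provision(expected, reported)
print(key is not None)  # True: workload verified before data is released
```

The design point the sketch tries to capture is that trust flows from verifying what code is running, not from trusting the cloud operator: if the reported measurement differs at all, no key is provisioned and the sensitive data stays encrypted.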

The low number of highly cited “confidential computing” articles over at Google Scholar is probably indicative of the relative interest in this topic [5]. Very few of the available academic works in this area are highly cited. It is actually an area where a research opportunity exists: a survey of security practices and the outcomes of confidential computing in practice.

A few of the more interesting ones with a higher level of citation ended up being:

Mulligan, D. P., Petri, G., Spinale, N., Stockwell, G., & Vincent, H. J. (2021, September). Confidential Computing—a brave new world. In 2021 international symposium on secure and private execution environment design (SEED) (pp. 132-138). IEEE.

Rashid, F. Y. (2020). The rise of confidential computing: Big tech companies are adopting a new security model to protect data while it’s in use [News]. IEEE Spectrum, 57(6), 8-9.

Zhu, J., Hou, R., Wang, X., Wang, W., Cao, J., Zhao, B., … & Meng, D. (2020, May). Enabling rack-scale confidential computing using heterogeneous trusted execution environment. In 2020 IEEE Symposium on Security and Privacy (SP) (pp. 1450-1465). IEEE.

The central question here seems to be: if you are going to keep your data and run your workloads in the cloud, how do you really make sure that effort remains confidential? At this point in my analysis, I thought it might be interesting to see what ChatGPT would produce on this topic.

OpenAI ChatGPT Prompt: “scrap that and rewrite it with ethics included”

Sure, here’s a rewritten version of the substack post about AI and confidential computing with a focus on ethics:

The intersection of AI and confidential computing has become increasingly important as we realize the importance of privacy and security in computing. Confidential computing is a field that seeks to protect sensitive data and computations by ensuring that they remain encrypted and secure throughout the computing process. Meanwhile, AI relies heavily on large amounts of data to make accurate predictions and decisions, but this data is often sensitive and needs to be kept confidential to protect individuals’ privacy. This is where confidential computing can play an important role, as it can ensure that sensitive information remains private, even when being processed by AI algorithms.

The integration of confidential computing into AI has several benefits. First, it ensures that personal data is kept secure and private, which is crucial for industries such as healthcare and finance. Second, it enables collaboration between organizations without risking the leak of sensitive information. Lastly, it can improve the accuracy and effectiveness of AI models as more data can be used for training without compromising privacy.

However, there are also ethical considerations when it comes to the intersection of AI and confidential computing. For example, there is a risk that confidential computing could be used to conceal unethical or discriminatory practices. It is important to consider how confidential computing is used and to ensure that it is not used to violate ethical principles. Additionally, there is a need to ensure that confidential computing is not used to perpetuate biases that already exist in datasets.

To address these ethical concerns, it is important to ensure that the development and deployment of confidential computing technologies are guided by ethical principles. Transparency and accountability should be central to the development of these technologies, and there should be a focus on ensuring that they are used in a way that promotes fairness, accountability, and transparency. This requires collaboration between computer scientists, ethicists, and policymakers to ensure that these principles are integrated into the design and use of confidential computing technologies.

In conclusion, the intersection of AI and confidential computing has promising implications for privacy and security. However, it is important to ensure that these technologies are developed and used ethically. By integrating ethical considerations into the design and use of confidential computing technologies, we can ensure that these technologies are used in a way that promotes fairness, accountability, and transparency.

Alright, so you can get a sense of what I produced related to confidential computing, and then a sense of what the ChatGPT model produced, and the comparison is interesting. I spent a couple of hours researching what exactly people are doing with confidential computing, trying to figure out what information was available from scholarly sources about how that methodology works in practice, and then wrote a Substack post. It took ChatGPT a few seconds to spit out what it produced. The depth of reference to specific sources, and the contrast between scholarly output and how the technology is actually being used, none of that occurred within the ChatGPT response. It is definitely very surface level in terms of how it made the argument, but it did produce it very quickly.

Links and thoughts:

Top 7 Tweets of the week:

What’s next for The Lindahl Letter? 

  • Week 115: A literature review of modern polling methodology
  • Week 116: A literature study of mail polling methodology
  • Week 117: A literature study of non-mail polling methodology
  • Week 118: A paper on political debt as a concept vs. technical debt
  • Week 119: All that bad data abounds 

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the year ahead.
