My thoughts on ChatGPT

Thank you for tuning in to this audio-only podcast presentation. This is week 98 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “My thoughts on ChatGPT.”

We are going to break out of the planned programming and go where everybody else in the machine learning space is going this week. Something new arrived this week and has captured the public’s attention. Seriously, I thought Stable Diffusion would be the big thing for 2022, but something else arrived in the intellectual space that might be more influential in the long run. The contents of this post were written right after the release, and I have been tinkering with that content throughout the week.

Staying current while staying well-grounded in the field of machine learning has become increasingly difficult; those two things are ultimately very hard to achieve at the same time. This is an area with a great deal of breadth and depth. I say that after writing 98 consecutively published weekly installments of a Substack machine learning newsletter. The main example this week is the new interactive, session-based chat framework (bot) that OpenAI released this week. Like most of the people actively curious about how bots have improved these days, I went out to try it, which the team over at OpenAI blogged about. All you need to do is create an account, and then you can sign in and chat with the bot. Each new session is tabula rasa, a reset of the model without the additional layer of your previous interactions. This is the interesting part of the equation: building a model and then having the model keep context within a conversation or across a series of conversations. That ability to keep conversational context across multiple conversations is not part of the current deployment.

The research preview they are sharing right now does not have access to the internet. It was also trained on data from about a year ago. Given the size of the language model they are invoking, I would have expected it to include a very large knowledge graph, but that does not really appear to be the case. I gave it the following series of prompts to see what it would generate. The answers are shown in screenshots to help identify that the content was not created by me as a part of my normal writing output.

My prompt: “write 10 words about machine learning”

My prompt: “write 25 words about machine learning”

My prompt: “write 50 words about machine learning”

My prompt: “write 100 words about machine learning”

The last prompt I used asked it to “write a paper about machine learning with citations,” which caused it to spit out five paragraphs that were pretty good.
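The escalating word-count experiment above can also be scripted. ChatGPT itself was web-only at launch with no public API, so this is only a sketch against OpenAI’s then-available text completions endpoint; the model name, the helper functions, and the use of the `openai` Python package are my assumptions here, not part of the experiment described above.

```python
# Sketch: reproduce the escalating word-count prompts programmatically.
# Assumes the `openai` Python package (pip install openai) and an API key;
# ChatGPT had no API at launch, so this targets the plain completions
# endpoint with an assumed model name instead.

WORD_COUNTS = [10, 25, 50, 100]

def build_prompts(counts=WORD_COUNTS):
    """Return the series of prompts used in the experiment above."""
    return [f"write {n} words about machine learning" for n in counts]

def ask_model(prompt):
    """Hypothetical call to the completions API (needs network + key)."""
    import openai  # requires OPENAI_API_KEY in the environment
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model; not ChatGPT itself
        prompt=prompt,
        max_tokens=256,
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    for p in build_prompts():
        print(p)  # swap in ask_model(p) once credentials are configured
```

Running it offline just prints the four prompts; wiring in `ask_model` would let you compare how the responses change as the requested word count grows.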

This topic is way too early for academic articles [1]. A lot of news articles have been written about OpenAI’s recently shared chatbot, ChatGPT. Here are three of them that came out this week:

“OpenAI’s new chatbot can explain code and write sitcom scripts but is still easily tricked”

“OpenAI’s new ChatGPT is scary-good, crazy-fun, and—so far—not particularly evil.” 

“OpenAI invites everyone to test new AI-powered chatbot—with amusing results”

Links and thoughts:

“E106: SBF’s media strategy, FTX culpability, ChatGPT, SaaS slowdown & more”

“Why Do I Keep Getting Called Out – WAN Show December 2, 2022”

Top 6 “ChatGPT” Tweets of the week:



What’s next for The Lindahl Letter?

  • Week 99: Deep generative models
  • Week 100: Overcrowding and ML
  • Week 101: Back to the ROI for ML 
  • Week 102: ML pracademics
  • Week 103: Rethinking the future of ML
  • Week 104: That 2nd year of posting recap

I’ll try to keep the what’s next list forward-looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
