Thank you for tuning in to this audio-only podcast presentation. This is week 123 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “We are wholesale oversubscribed on AI related content.”
You may have noticed that the topic for the post this week changed. I called an audible and wrote a different piece of content.
AI content has hit a real saturation point. We are wholesale oversubscribed on AI-related content, and we have not even crossed into the scenario that concerns me most, where people use these tools to flood the open internet with synthetic content. Given the current level of AI excitement, along with some fear, apprehension, and concern, a lot of people are writing and commenting, and all of that output has pushed AI content to the point where it is everywhere. Sam Altman of OpenAI recently noted in a Wired article that “the Age of Giant AI Models Is Already Over.” The initial release date of ChatGPT was November 30, 2022, so the cycle on this one was no more than a six-month explosion of hype content. You can go out to Google Trends and pretty quickly get a sense of just how fast ChatGPT spun up in December of 2022. If you add a comparison term within Google Trends of auto-gpt or AutoGPT, you get a sense of just how much more popular ChatGPT happens to be in the wild. Keep in mind that since April 11, 2023, searches for AutoGPT have taken off exponentially.
Major things are happening in terms of ChatGPT, AutoGPT, and people expanding how agents are able to cooperate. A paper from Park et al., published on April 7, 2023, shows some very interesting use cases for simulating behavior using generative agents.
Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative Agents: Interactive Simulacra of Human Behavior. arXiv preprint arXiv:2304.03442. https://arxiv.org/pdf/2304.03442.pdf
Rewinding a little bit: I had thought the key Stanford University paper would be that 214-page, multi-author one on foundation models.
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., … & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258. https://arxiv.org/pdf/2108.07258.pdf
You can get a sense of just how fast the field is changing from the demonstrations of capability and use in that generative agents paper, published only a couple of years after scholars circled the wagons on foundation models. Things are moving so fast that, per Sam Altman’s comment above, the age of giant models may have both arrived and concluded between the publication of those two papers. That makes conducting research a very interesting thing to contemplate. Content at the bleeding edge of AI technology today could very well encounter a context shift, a vibe shift, or even a change in meaning within months. Taking that into consideration, I recognize that we are approaching the halfway point in the production of this year’s Substack content. To that end, I’m starting to ponder how to improve things and really make the content being produced better.
What’s next for The Lindahl Letter?
- Week 124: Profiling OpenAI
- Week 125: Profiling Hugging Face (open and collaborative machine learning)
- Week 126: Profiling DeepMind
- Week 127: Democratizing AI systems
- Week 128: Building dreaming into AI systems
If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.