All that AI knowledge remains
So many people have professed expertise in the AI space. The hype alone is a cottage industry that has netted consulting groups billions of dollars. That is not a typo or an exaggeration. My initial writings focused on defining an ROI or figuring out how to take action in the space and make money. None of those missives was hugely popular at the time. That was not the message of promise and hype that people wanted to read, or to be invested in, as things moved along. Right now, I’m watching the first period of Game 6 of the Stanley Cup Finals on HBO’s streaming service via the TNT Sports feed. I added the HBO service just to watch the Finals. It turns out that viewership for this Finals series is not spectacular. It would have been better if the broadcast had been over-the-air instead of hidden behind the paywall of cable or streaming. It’s a lot like the way the reality of AI is hidden behind the paywall of hype, which just extracts investment dollars from people.
I’m sure we will eventually get to the point where the promise of the technology delivers results that are definable and repeatable. In the end, I think it will come down to the use cases and the ability of models to deliver value within a given workflow. A gap between expectation and reality exists. It’s one of those mostly unspoken things that people really know exist, but sometimes it is just easier to bask in the bright lights of the hype. I shared a link to a paper from Machine Learning Research at Apple that questioned the effectiveness of reasoning models [1].1 Maybe we will see more of these types of articles that really question what is going on and in some ways explain where the hype has gone wrong. It’s entirely possible that even an article from Apple won’t be enough to make a dent in the hype.
We are being overloaded by an onslaught of AI slop online. We are even starting to see a few articles covering this reality [2].2 So much slop exists these days that it is hard to figure out what is real and what is just synthetic nonsense. This writing project, however, is totally organic, for better or worse. I spend some of my evening writing each and every day now, powered by Substack instead of WordPress. I sit down with the keyboard and try to produce 500 or more words. It would take one of the models just a few seconds to make that happen. It takes me about an hour of writing and editing to achieve what a model spits out in a matter of seconds. Obviously, I’m partial to organic prose, but a lot of synthetic content is being produced at a far more prodigious rate than I could ever manage.
This is the first post in this new series where footnotes are present. I elected not to use the footnote feature built into Substack in favor of my normal footnote structure, which involves inline citation numbers with the footnotes at the end. You are probably used to seeing this particular footnote method. I’m going to stick with it even after moving over to Substack. I know they want writers to use their built-in feature, which links each note to its content at the end. Actually, for this one post, I’m going to use both so you can visually see the difference. Maybe I should give in and use the standard Substack feature. I’m sure they put a lot of thought into how to surface footnotes. My method is tried and true and has been used for a lot longer than Substack has existed, but I suppose either method is valid.
Footnotes:
[1] https://machinelearning.apple.com/research/illusion-of-thinking
[2] https://futurism.com/chatgpt-polluted-ruined-ai-development