Earlier this week, I sat right by the beach in Florida for six hours. The umbrella and chair helped. I did not bring my Chromebook to spend the time writing. That will probably happen again tomorrow. Oddly enough, my efforts at productive writing have been a wash during this trip. That is what I expected would happen. In preparing for this degradation in writing time, I had worked a couple of weeks ahead on the writing plan.
Ok, so on the Twitter front I’m still running my tweets in protected status, and I gave up on paying for Twitter Blue. They almost got me to come back yesterday with the annual prepayment discount. I’m more likely to commit to something for a year than on a monthly basis. Naturally, I’ll turn off any auto renewal so that I can make a decision on renewal at the proper time. Perhaps that preference for annual over monthly billing is a strange conundrum. It’s probably a contrarian opinion about Twitter, but I think it got worse for a bit and then it got better. In any event, my feed has turned into a better read over the last couple of months.
News feeds overall are highly broken at the moment. As a side effect of the news-based media landscape fundamentally breaking, the feeds are awash with poorly crafted content. Newsrooms, while an imperfect gatekeeping system, provided a perspective and some degree of continuity. It’s a first-in-the-pool free-for-all right now, and just like in academia at large, the publish-or-perish mindset overshadows everything.
Thank you for tuning in to this audio only podcast presentation. This is week 122 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “AIaaS: Will AI be a platform or a service? Auto-GPT will disrupt both.”
I spent some time over the last couple of weeks looking for scholarly articles that reference AI as a service (AIaaS) [1]. Then I realized that things are changing so quickly in the AI landscape these days that none of those articles even consider Auto-GPT [2]. Unless you have been on vacation and hiding from AI news recently, you have been hearing about the advent of this plugin-based, multiagent, task-targeted implementation of a GPT model called Auto-GPT. People are using it to do things. You can see that the GitHub repository already has 50,000 stars, and the demo was from March 30, 2023. In terms of open source code projects, this one has taken off with rocket boosters. Right now the project has been forked almost 8,000 times. I’m trying to find a version that can be run in a Jupyter notebook on Colab. This space is rapidly changing and will probably change a ton in the next 6 months.
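At its core, the task-targeted behavior people associate with Auto-GPT is a loop: the model proposes a next task toward a goal, the runner executes it (via a tool or plugin), and the results feed back into the next planning step. Here is a minimal sketch of that loop that runs in a plain notebook cell; the `fake_model` function is a stand-in for a real GPT API call, and the task names are made up for illustration.

```python
# Minimal sketch of an Auto-GPT style task loop. A real implementation
# would replace fake_model() with an LLM call and execute each task with
# a tool or plugin; here everything is stubbed so the loop itself is clear.

def fake_model(goal, completed):
    """Placeholder planner: returns the next task, or None when done."""
    plan = ["search the web", "summarize findings", "write report"]
    remaining = [t for t in plan if t not in completed]
    return remaining[0] if remaining else None

def run_agent(goal, max_steps=10):
    completed = []
    for _ in range(max_steps):
        task = fake_model(goal, completed)
        if task is None:  # the planner decided the goal is met
            break
        # In Auto-GPT this is where a tool/plugin would execute the task
        # and its output would be fed back to the model on the next turn.
        completed.append(task)
    return completed

print(run_agent("research AIaaS"))
```

The `max_steps` cap matters in practice: an autonomous loop with no step budget can spin indefinitely, which is one of the most common complaints about early Auto-GPT runs.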
What did the OpenAI ChatGPT 4.0 model spit out on this one when prompted?
AI can be both a platform and a service, depending on how it is implemented and used. The distinction between the two is not mutually exclusive, as AI applications often span across various domains and industries. Let’s explore each concept:
AI as a Platform: An AI platform provides the infrastructure, tools, and libraries necessary for developing and deploying AI applications. These platforms enable developers and data scientists to build, train, and manage AI models without having to start from scratch. Examples of AI platforms include TensorFlow, PyTorch, and OpenAI’s GPT series. In this context, AI serves as a foundational technology upon which various solutions can be built.
AI as a Service (AIaaS): AIaaS refers to the delivery of AI capabilities as a service, usually through APIs or web services. Companies that offer AIaaS provide access to pre-built AI models, which clients can use without having to develop their own AI capabilities. This allows organizations to leverage AI technology without investing in infrastructure or expertise. Examples of AIaaS include Google Cloud AI, IBM Watson, and Amazon Web Services’ machine learning services.
In conclusion, AI can function both as a platform and a service, depending on its implementation and usage. Many companies and developers use AI platforms to create AI services, which can then be integrated into various applications and business processes.
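The AIaaS delivery model described above usually amounts to posting a JSON payload to a hosted model endpoint and reading back a JSON result. The sketch below shows the general shape of such a request using only the standard library; the URL and field names are hypothetical placeholders, since each provider (Google Cloud AI, AWS, IBM Watson) defines its own schema and authentication scheme.

```python
# Sketch of a typical AIaaS request: JSON in, JSON out, API key in the
# Authorization header. The endpoint URL and payload fields below are
# invented for illustration; consult your provider's API reference.
import json
from urllib import request

def build_sentiment_request(text, api_key):
    payload = {"document": {"type": "PLAIN_TEXT", "content": text}}
    return request.Request(
        "https://api.example.com/v1/analyzeSentiment",  # placeholder URL
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_sentiment_request("AIaaS makes this easy.", "demo-key")
print(req.full_url)
```

The request is built but never sent here; calling `urllib.request.urlopen(req)` against a real endpoint would return the provider's JSON response. The appeal of AIaaS is exactly this surface area: a few lines of HTTP instead of training and hosting a model yourself.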
I thought it might be fun to follow up and prompt to see who was the best at AIaaS…
It is difficult to definitively say who the “best” AI-as-a-Service (AIaaS) provider is, as different providers excel in different areas and offer varying services that cater to specific use cases. However, some of the most prominent and reputable AIaaS providers include:
Google Cloud AI: Google offers a comprehensive suite of AI services, including natural language processing, computer vision, and machine learning. Its offerings, such as Google Cloud Vision API, Dialogflow, and AutoML, are widely used and respected for their accuracy and ease of use.
Amazon Web Services (AWS): AWS provides a range of AI services, including machine learning, computer vision, and natural language processing. Some popular offerings are Amazon Rekognition, Amazon Lex, and Amazon SageMaker.
IBM Watson: IBM’s AIaaS platform, Watson, offers various services, including natural language processing, machine learning, and chatbot development. Watson Assistant and Watson Discovery are among its popular services.
Microsoft Azure: Azure offers a suite of AI services, including machine learning, natural language processing, and computer vision. Key offerings include Azure Cognitive Services, Azure Machine Learning, and Azure Bot Service.
The “best” AIaaS provider for your specific needs will depend on several factors, such as the required AI capabilities, pricing, ease of integration, and the level of customization needed. It’s essential to research and evaluate each provider based on your unique requirements to make an informed decision.
Week 125: Profiling Hugging Face (open and collaborative machine learning)
Week 126: Profiling Deep Mind
Week 127: Democratizing AI systems
If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
A multitude of paths forward exist. We elect to use the time we have in different ways.
Keep producing weekly research notes
Build independent study literature reviews
Consider what a return to teaching some online classes would require
Conduct quantitative research aimed at journal article publication
Refine my yearly manuscript creation process
All of that consolidated effort could flow together. Nothing within that packaging would conflict. Right now I’m sitting on content staged until the end of June. Working ahead was a good strategy, allowing me to closely review where I’m going and what I’m doing within the framework being used to push things forward. All of that output and effort has to be geared toward building something. It’s part of an overall research trajectory that leads toward something. Outside of that, it would be no more than a sustained effort to muddle through the habit of writing. Output would be achieved, but collectively it would have no momentum toward anything. Objectives have to stand just outside the reach of the moment, and some even beyond the next range of possible achievements.
Thank you for tuning in to this audio only podcast presentation. This is week 121 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “Considering an independent study applied AI syllabus.”
My initial take on writing an independent study based syllabus for applied AI was to find the best collection of freely available scholarly papers that somebody could read as an onramp to understanding the field. That, I think, is a solid approach to helping somebody get going in a space that is very complex and full of content. It’s a space that is perpetually adding more content than any one person could possibly read or consume. Before you take that approach, it is important to understand that one definitive textbook does exist. You certainly could go out and read it.
Russell, S. J., &amp; Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed.). Pearson Education.
You could find the first edition, second edition, or third edition for sale on eBay or somewhere else if you wanted a physical copy of the book. The book is currently in a 4th edition run, but I don’t have a copy of that edition yet. It’s used by over 1,500 schools, so a lot of copies exist out in the wild [1]. The authors Stuart Russell and Peter Norvig have shared a PDF of the bibliography for that weighty tome of AI insights as well [2]. Even with 35 pages of bibliography, nobody with the name Lindahl made the cut. On a side note, you can find the name Schmidhuber included twice if that sort of thing is important to you.
Let’s reset for a second here. If you are brand new to the field of AI or want a textbook-based introduction, then you should seriously consider buying a copy of the aforementioned textbook. That is a really great way to start, and it has worked for tens of thousands of people. My approach here is going to be a little unorthodox, but it works for me. My last run at this type of effort was “An independent study based introduction to machine learning syllabus for 2022,” which you can find on Google Scholar [3]. This outline will be the basis of a similar work that will end up getting crafted in Overleaf and shared with the world.
Searching for pure introductions to artificial intelligence is really hit or miss. A lot of introductions to various subfields exist. In this case, I’m trying to zoom out a little more into a larger evaluation of content instead of focusing on any one field. Nothing I ran into during my search had the number of citations or the impact of the Russell and Norvig textbook. I’m going to endeavor to structure and organize 70+ articles into a syllabus. To give you an idea of the kind of things that are going to get pulled together, here are 5 different papers.
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., … & Wang, Y. (2017). Artificial intelligence in healthcare: past, present and future. Stroke and vascular neurology, 2(4). https://svn.bmj.com/content/svnbmj/2/4/230.full.pdf
Obviously, working out of a textbook format helps provide a better structured introduction, which is why I plan on mentioning that in the paper and have noted it here. On a scholarly basis, I understand the need to dive directly into source papers and really get a sense of what has happened on the applied side of AI research over the last 10 years. That is why I want to spend some time preparing the independent study based syllabus for others; it is also a solid way for me to learn more rapidly with an extreme degree of focus.
What did the ChatGPT 4.0 model come up with?
An independent study applied AI syllabus will provide you with the necessary knowledge and skills to develop practical AI solutions. Below is a suggested 14-week syllabus to guide you through the essential topics and resources.
Week 1: Introduction to AI and Machine Learning
History of AI
Overview of machine learning, deep learning, and reinforcement learning
Common machine learning algorithms
Week 2: Python for AI and Machine Learning
Basic Python programming
NumPy, pandas, and matplotlib
Setting up a Python environment for machine learning
Week 3: Data Preprocessing and Exploration
Data cleaning and transformation
Handling missing data and outliers
Feature engineering and selection
Week 4: Supervised Learning – Regression
Linear regression
Ridge and Lasso regression
Decision trees and random forests for regression
Week 5: Supervised Learning – Classification
Logistic regression
k-Nearest Neighbors (k-NN)
Support vector machines (SVM)
Week 6: Unsupervised Learning
k-Means clustering
Hierarchical clustering
Principal component analysis (PCA)
Week 7: Neural Networks and Deep Learning
Introduction to neural networks
Activation functions and backpropagation
Convolutional neural networks (CNNs)
Week 8: Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM)
Introduction to RNNs and LSTMs
Time series forecasting
Natural language processing (NLP) applications
Week 9: Reinforcement Learning
Introduction to reinforcement learning
Q-learning and Deep Q-Networks (DQNs)
Policy gradients and actor-critic methods
Week 10: Transfer Learning and Pretrained Models
Introduction to transfer learning
Fine-tuning pretrained models
Pretrained models for NLP and computer vision
Week 11: Natural Language Processing
Word embeddings and Word2Vec
Text classification and sentiment analysis
Text summarization and translation
Week 12: Computer Vision
Object detection and recognition
Semantic segmentation
Image-to-image translation
Week 13: Generative Models
Introduction to generative models
Variational autoencoders (VAEs)
Generative adversarial networks (GANs)
Week 14: Ethics and AI Deployment
Ethical considerations in AI
Bias and fairness in machine learning
Deploying AI models
Key resources to study these topics include popular textbooks, online courses, blogs, and tutorials. Some notable resources include:
“Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow” by Aurélien Géron
“Deep Learning” by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
“Reinforcement Learning: An Introduction” by Richard S. Sutton and Andrew G. Barto
Online courses like Coursera’s “Machine Learning” by Andrew Ng or fast.ai’s “Practical Deep Learning for Coders”
AI research papers from sources like arXiv.org, NeurIPS, and ICML
By following this syllabus, you will gain a solid foundation in applied AI and be well-equipped to pursue more advanced topics or specific application domains.
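To make the syllabus above a little more concrete, here is a tiny worked example in the spirit of the Week 4 material: simple linear regression fit by the closed-form ordinary least squares solution. It uses only the Python standard library, so it runs as-is in a Colab or Jupyter cell; the data is synthetic and generated for this sketch.

```python
# Week 4 style example: fit y = slope * x + intercept by ordinary least
# squares using the closed-form solution for a single feature.
import random

random.seed(0)
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [3.0 * x + 2.0 + random.gauss(0, 0.5) for x in xs]  # y = 3x + 2 + noise

def fit_line(xs, ys):
    """OLS for one feature: slope = cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # recovered values should be near 3.0 and 2.0
```

Working through a closed-form fit like this once by hand is good preparation for Weeks 4 and 5, where libraries such as scikit-learn do the same computation behind a one-line `fit()` call.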
Week 125: Profiling Hugging Face (open and collaborative machine learning)
Week 126: Profiling Deep Mind
If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
I sat down to do a little bit of writing before going to sleep tonight. The next twenty minutes are going to be devoted to putting a few thoughts down in writing. For better or worse, that is the plan at the moment. You could just as easily ask ChatGPT to produce 1,000 words of nonsense, but that would be less novel than my musing will probably end up being. It would be a good idea to spend some time with the new models trying to produce some code. It is entirely possible that the newer code generating models would allow me to work out a couple of the things that drew my interest twenty years ago. Some of that would be as easy as turning the pseudocode into actual code and seeing what happens. Maybe I’ll even turn some of that into Android applications that people could download.
This weekend I spent a few minutes trying to figure out what to do with all the old data that resides on my computer. Most of it is backed up to the cloud. I need to spend some time just deleting blocks of data that are no longer required. I’m probably not the only person in the boat of having stored so much data that is not needed or useful. At this point in time, I imagine that a huge amount of unwieldy data has been stored and forgotten by a multitude of people. It’s a mind-boggling number to consider how many photographs Google has backed up over the years from devices all over the world.
Apparently, during the course of sailing around the ocean it is a good practice to keep a captain’s log for navigation and maintenance reasons. It’s entirely possible that I have been keeping a functional journal about my writing practices for both navigation and maintenance reasons. None of my journaling has been about the ocean in any way, shape, or form. I don’t really even use analogies or metaphors that are sea inspired. I guess that covers that, and we are ready to move on to something else during this writing session.
My PSA of the day is to give blood if you are able to complete a donation. I try to give blood several times a year. They don’t have a method to make synthetic blood at this time. Donations are an important part of keeping the system running.
Interesting observation after a few days of flipping my tweets to private mode… it turns out that setting has not really changed my Twitter application usage. A few people won’t get my responses, but that is fine in the long run. We could follow each other if some type of actual communication on Twitter was warranted.
My recent writing efforts have involved sitting down with the Pixelbook Go on a more regular basis and just writing whatever comes to mind at the time. That is not always a recipe to yield the productive generation of prose, but it has been good in terms of general writing productivity. Right now I’m watching the Dallas Stars and Seattle Kraken play hockey and just typing away.
Thank you for tuning in to this audio only podcast presentation. This is week 120 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “that one with an obligatory AI trends post.”
Right now at the start of 2023, I would probably highlight 3 AI trends: generative models, automation, and legislation. Before we get into those specific topics, let’s zoom out for just a second and look at two different reports you could read to get a sense of what is going on right now. One of the great places to start would be with the recently released 2023 AI Index report from the Institute for Human-Centered AI.
Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023. https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf
You can look around the website they have set up here:
If the 386-page PDF seems like a lot of content to consume, then feel free to read the slimmed-down “2023 State of AI in 14 Charts” that they also produced.
The other interesting report that I read recently was the Google Cloud 2023 Data and AI Trends Report, which you can download by giving them your email. That step seemed annoying, so I’m going to leave the link to the report below, but it might default back to the information page.
Those two reports aside, let’s jump into the 3 topics that will comprise the bulk of this missive.
Generative models – Generative models are probably the biggest thing happening at the start of 2023. We are seeing chat-based and image-based models like GPT-4 and DALL-E 2 from OpenAI [1]. One of the things spinning out of the generative models is the low code, no code, and copilot efforts. This is one of the areas where you will find a lot of media coverage, as the generative models are being used as search engines and the image generating tools are so easy to use that people have gone thumbnail-happy with them recently [2].
Automation – We are going to see a tremendous amount of better automation come out of the new technology that is available [3]. I’m tempted to mention that I consider a lot of the copilot efforts to be part of early automation. Those copilot efforts will move from being inline assistive to probably a more prompt-based delivery. That type of request-and-receive copiloting is what I would consider a form of automation.
Legislation – Countries are wholesale banning ChatGPT [4]. A lot of legislation will probably come into being related to trying to ensure safety [5]. I’m not going to spend a ton of time on this trend as it’s going to unfold in a pretty predictable way.
You might find that this week the large language model from OpenAI has produced a better post than what I crafted. To that end, let’s find out now what the ChatGPT model GPT-4 produces after being prompted about this one…
As of April 2023, the top trends in AI include the following:
Federated Learning: This approach allows AI models to be trained on decentralized data while maintaining privacy. It has gained prominence as a way to harness the potential of data from various sources without compromising user privacy.
AI Ethics and Regulation: As AI systems become more integrated into everyday life, there is an increasing focus on the ethical implications and regulatory measures required to ensure AI is used responsibly.
Explainable AI (XAI): With the growing complexity of AI models, there is a rising demand for models that provide interpretable and understandable explanations for their predictions or decisions.
AI in Healthcare: AI applications in healthcare are expanding rapidly, from diagnosis and treatment recommendations to drug discovery and personalized medicine.
AI for Climate Change: AI techniques are being used to tackle climate change by optimizing energy consumption, forecasting extreme weather events, and modeling climate patterns.
Edge AI: As IoT devices proliferate, there is a growing interest in running AI models on-device (at the edge) rather than relying on cloud-based processing. This approach can improve latency, privacy, and energy efficiency.
AI for Cybersecurity: AI is being deployed to enhance cybersecurity measures by detecting and preventing potential threats and attacks using advanced pattern recognition and anomaly detection.
Reinforcement Learning (RL): RL, a type of machine learning that focuses on decision-making, continues to gain traction as researchers explore its potential applications in various fields, including robotics, finance, and gaming.
Natural Language Processing (NLP): NLP techniques continue to advance, enabling more sophisticated AI applications in areas like language translation, sentiment analysis, and chatbot development.
AI and Robotics: AI is playing a key role in the development of advanced robotics systems, allowing for improved autonomy, decision-making, and adaptability in various domains, including manufacturing, logistics, and personal assistance.
These trends are expected to evolve as AI research and development continues to advance, addressing the opportunities and challenges associated with the integration of AI into various aspects of human life.
Week 121: Considering an independent study applied AI syllabus
Week 122: Will AI be a platform or a service?
Week 123: Considering open source AI
Week 124: Profiling OpenAI
Week 125: Profiling Hugging Face (open and collaborative machine learning)
If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
My Twitter account has been flipped over to “Protect your Tweets” status. That is basically the privacy version of lockdown. I’m letting my blue checkmark expire on May 12 by canceling any renewal mechanism. Generally, that action is being taken because the benefits of the program are not worth the expense at the moment. Outside of that scenario, I can confirm that the idea of completing that series of steps was harder to consider than the consequences of actually taking the actions. Participating in the great public commons that Twitter might have represented was a grand idea. Social media is not a shining city on any degree of hill. Mostly it was a promise of something that maybe could never have really existed. I’m going back to blogging and just sharing thoughts in written form to an audience that mostly shows up from search engines.
I’m going to try to avoid spending large amounts of time on Twitter. My profile will remain and my 10,000 or more tweets still exist, but they are in that protected status.
It really feels like the delivery methods of modern news media have become broken. It’s somewhat surprising that 15 million people use Feedly as an RSS reader. Google Reader has been gone for a while now (since 2013). I have spent some time over the last couple of days thinking about how people consume news and where and when that attention is applied.
I’m now using a Google Home Max speaker with my desktop setup. It’s been great so far. Apparently, back in 2020 the team over at Google discontinued making this particular speaker product. I have had this one for a long time. This is the first time that I’m using the product with the auxiliary input running straight from the computer without a digital to analog converter (DAC). That device might get put back in the loop at some point. This morning I started working on the last block of content to fill out the month of June. Things are moving right along.