We have a National Artificial Intelligence Advisory Committee

Published on October 28, 2022: Thank you for tuning in to this audio-only podcast presentation. This is week 92 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “We have a National Artificial Intelligence Advisory Committee.”

It seems like having a national artificial intelligence initiative is popular these days. Back on February 18, 2022, I shared my week 56 Substack post, “Comparative analysis of national AI strategies.” That missive continues to get a good bit of traffic, so I thought now would be a good time to revisit national AI strategies, advisory committees, institutes, legislation, and the myriad research institutes and think tanks jumping into this area. This is an area where I think some solid academic contributions could be made. Instead of digging into all of those areas, my attention really focused on one advisory committee. That will become clear in the next couple of sections.

This week I considered shifting The Lindahl Letter over to being an AI strategy advisory committee after spending a bunch of time reading about them. I’m not going to do that, as it would limit my creative output to just one area, and that sounds intellectually exhausting. One committee you can read about is the National AI Advisory Committee (NAIAC) [1]. The next committee meeting was about to happen as I was writing this post; I had plans to listen live and was totally signed up for everything [2]. Going forward, I’m fully registered and signed up for alerts from the NAIAC. I would be happy to provide them guidance on effective national AI strategies from a comparative perspective, but that has not happened so far. This topic is an interesting space to consider at length. We are seeing a huge amount of academic work, and companies like Hugging Face are democratizing AI through community. Consider for a moment just how fast Stable Diffusion showed up and then was actively built into things and deployed. We are seeing massive changes within the ML/AI space, and the deployment cycle is super fast based on how interconnected the community happens to be worldwide. That has huge ramifications for any advisory committee considering AI strategy at the national level. Adapting to the rate of change and the decentralized nature of things requires a different type of national AI strategy. I’ll be listening to the NAIAC in October to see how things are going. You can find the sessions on YouTube by searching for “NAIAC” pretty easily.

“National Artificial Intelligence Advisory Committee (NAIAC) Meeting”

“National Artificial Intelligence Advisory Committee (NAIAC) Field Hearing”

You could read the meeting minutes from May 4, 2022.

https://www.ai.gov/wp-content/uploads/2022/07/NAIAC-Minutes-05042022.pdf

I went out to Google Scholar to see if anybody had published or shared anything referencing this advisory committee [3]. Nothing really came up except the above-mentioned meeting minutes from May 4, 2022. Nothing showed up during a search of arXiv either [4]. It’s possible that in about 6 months more content will show up reacting to the hours of meetings linked above. Right now, we appear to be a little bit ahead of things in terms of reactions to the work being done by this advisory committee. I’m going to keep an eye out for more content related to NAIAC. It’s possible that sometime next year will be the right time to dig back into this one.

Links and thoughts:

“5 Practical Machine Learning Lessons You’re NOT Taught in School”

“This Has Never Happened Before – WAN Show October 14, 2022”

“Stanford CS330: Deep Multi-Task & Meta Learning I 2021 I Lecture 4”

“Confusing new Apple products, Netflix password sharing, and NFT cults”

https://podcasts.apple.com/us/podcast/confusing-new-apple-products-netflix-password-sharing/id430333725?i=1000583390836

Top 5 Tweets of the week:

Footnotes:

[1] https://www.ai.gov/naiac/ 

[2] https://events.nist.gov/profile/form/index.cfm?PKformID=0x17861abcd 

[3] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C6&q=National+Artificial+Intelligence+Advisory+Committee&btnG= 

[4] https://search.arxiv.org/?in=&query=%22National%20Artificial%20Intelligence%20Advisory%20Committee%22 

What’s next for The Lindahl Letter?

  • Week 93: Papers critical of ML
  • Week 94: AI hardware (RISC-V AI Chips)
  • Week 95: Quantum machine learning 
  • Week 96: Where are large language models going?
  • Week 97: MIT’s Twist Quantum programming language

I’ll try to keep the what’s next list forward looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.

What is probabilistic machine learning?

Thank you for tuning in to this audio-only podcast presentation. This is week 90 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “What is probabilistic machine learning?”

The post this week is going to be on the shorter side of things. I think that is in part due to the very straightforward nature of the topic under consideration. It really could have just been a link to a single book on the subject with a polite note that reading it would help you understand pretty much everything you need to know. To that end, it looks like the book on probabilistic machine learning from Kevin Patrick Murphy has been downloaded 168 thousand times [1]. That is pretty darn good for something in the machine learning space, where the inflection point for interest in a topic generally sits around 10,000. It appears that Kevin surpassed that ceiling by a ton of downloads. The book is very easy to get to, and the search engines really seem to algorithmically love it as well. Given that this topic has a lot of references to Bayesian decision theory, you probably could predict that it would get my full attention. What generally grounds all of my efforts in the machine learning space is my background in statistics and my enjoyment of working with Bayesian pooling. Breaking down the idea of probabilistic machine learning involves understanding two general steps. First, you must accept that you want to explain observed data with your machine learning models. Second, those explanations are going to need to come from inferring plausible models to aid you in that explanation. Together those two steps help you begin to evaluate data in a probabilistic way, which means you are aided by the power of statistical probability, grounding you in a rational approach. To me this spells out an approach that is not based on randomness or anything particularly chaotic.
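Those two steps can be sketched with a toy Bayesian model. The coin-flip setup, function name, and numbers below are my own illustration rather than anything from Murphy’s book; the inference itself is a standard conjugate Beta-Binomial update, which is about the simplest example of explaining observed data by inferring a plausible model:

```python
# Step 1: accept that the observed data (coin flips) needs explaining.
# Step 2: infer a plausible model -- here, a posterior distribution over
# the coin's unknown bias -- rather than a single point guess.

def posterior_bias(heads, tails, alpha=1.0, beta=1.0):
    """Conjugate Beta-Binomial update under a Beta(alpha, beta) prior.

    Returns the posterior mean of the coin's bias and the updated
    (alpha, beta) parameters of the posterior Beta distribution.
    """
    a = alpha + heads          # prior pseudo-counts plus observed heads
    b = beta + tails           # prior pseudo-counts plus observed tails
    mean = a / (a + b)         # posterior mean of the bias
    return mean, (a, b)

# Observed data: 7 heads and 3 tails.
mean, (a, b) = posterior_bias(heads=7, tails=3)
print(round(mean, 3))  # posterior mean shifts from the 0.5 prior toward 0.7
```

The point of the sketch is that the output is a distribution over models, not a single answer, which is exactly the “grounded by statistical probability” idea rather than anything random or chaotic.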

Murphy, K. P. (2012). Machine learning: a probabilistic perspective. MIT press. https://research.google/pubs/pub38136.pdf 

Probabilistic machine learning papers

Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521(7553), 452-459. https://www.repository.cam.ac.uk/bitstream/handle/1810/248538/Ghahramani%25202015%2520Nature.pdf?sequence=1 

Rain, C. (2013). Sentiment analysis in amazon reviews using probabilistic machine learning. Swarthmore College. https://www.sccs.swarthmore.edu/users/15/crain1/files/NLP_Final_Project.pdf 

Probabilistic deep learning papers

Nie, S., Zheng, M., & Ji, Q. (2018). The deep regression bayesian network and its applications: Probabilistic deep learning for computer vision. IEEE Signal Processing Magazine, 35(1), 101-111. https://sites.ecse.rpi.edu/~cvrl/Publication/pdf/Nie2018.pdf 

Peharz, R., Vergari, A., Stelzner, K., Molina, A., Shao, X., Trapp, M., … & Ghahramani, Z. (2020, August). Random sum-product networks: A simple and effective approach to probabilistic deep learning. In Uncertainty in Artificial Intelligence (pp. 334-344). PMLR. http://proceedings.mlr.press/v115/peharz20a/peharz20a.pdf 

Andersson, T. R., Hosking, J. S., Pérez-Ortiz, M., Paige, B., Elliott, A., Russell, C., … & Shuckburgh, E. (2021). Seasonal Arctic sea ice forecasting with probabilistic deep learning. Nature Communications, 12(1), 1-12. https://www.nature.com/articles/s41467-021-25257-4?tpcc=nleyeonai 

Links and thoughts:

“How Arm conquered the chip market without making a single chip, with CEO Rene Haas”

https://podcasts.apple.com/us/podcast/how-arm-conquered-the-chip-market-without-making/id1011668648?i=1000580774253

Top 5 Tweets of the week:

Footnotes:

[1] https://probml.github.io/pml-book/book1.html 

What’s next for The Lindahl Letter?

  • Week 91: What are ensemble ML models?
  • Week 92: National AI strategies revisited
  • Week 93: Papers critical of ML
  • Week 94: AI hardware (RISC-V AI Chips)
  • Week 95: Quantum machine learning 

I’ll try to keep the what’s next list forward looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.

That ML model is not an AGI

Thank you for tuning in to this audio-only podcast presentation. This is week 89 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “That ML model is not an AGI.”

A lot of people talk about deploying AI in the business world, and almost all of that conjecture is really about deploying a machine learning model into a production environment or some interesting POC. When those same people deploy an actual AI product into production, they will hopefully see the difference. They are not the same. A lot of the AI hype is underpinned by advances in machine learning. Artificial general intelligence, more commonly abbreviated as AGI, represents an interesting summation of possibility contained in a name. You have seen representations of AGIs in books, movies, comics, and all sorts of works of fiction. At the moment, machine learning models are generally trained to do one thing well and cannot generally pick up and learn tasking like a person would.

That is why most of those fiction writers do not bother to include a machine learning model as the antagonist in their stories. The expectation is that a person (or villain for that matter) would be able to generally pick up and learn tasking for a wide variety of purposes. That expectation gets rolled up into what an AGI would be expected to achieve in practice. Generally, the expectation would be that the AGI could complete a mix of tasking just like a person would be able to handle. You would need a large number of machine learning models to complete the tasking that a person does in a single day. You could test this as a practical exercise with a sheet of paper and a pen throughout the day. As you built up the list of machine learning models you would need to accomplish all the various tasking, it would become very obvious that your ML model is not an AGI, because your list would be much longer than a single model, or even a small collection of models being sorted out by a piece of software up front. To be fair to the idea contained within that point, we don’t even have a good method to switch between a collection of ML models to complete a variety of tasks.
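That paper-and-pen exercise can be made concrete with a toy sketch. The task names and model labels below are invented for illustration; the point is that a collection of narrow models needs an upfront router, the mapping only covers tasks someone anticipated, and the list grows with every new task, which is exactly where it stops resembling an AGI:

```python
# Each task a person handles in a day maps to its own narrow model, and a
# naive piece of upfront software has to pick between them. All names here
# are hypothetical placeholders, not real models.

NARROW_MODELS = {
    "transcribe speech": "speech-to-text model",
    "read a street sign": "scene-text recognition model",
    "sort the mail": "document classification model",
    "plan a route": "routing/optimization model",
}

def dispatch(task):
    """Naive upfront router: look up the one model trained for this task."""
    model = NARROW_MODELS.get(task)
    if model is None:
        # An AGI would generalize to the unseen task; a fixed
        # collection of narrow models cannot.
        raise LookupError(f"no narrow model available for: {task!r}")
    return model

print(dispatch("sort the mail"))  # document classification model
```

Any task outside the dictionary simply fails, and the only remedy is adding yet another model to the list, which is the whole argument in miniature.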

Artificial general intelligence – Let’s begin by digging into a few books and papers related to AGI before introducing the ML part of the equation. This will help create a foundation for the concept and scholarly evaluation of AGI without spending as much time on ML. You will find a theme in the literature here where Goertzel is prominently featured.

Goertzel, B., Orseau, L., & Snaider, J. (2015). Artificial general intelligence. Scholarpedia, 10(11), 31847. http://var.scholarpedia.org/article/Artificial_General_Intelligence 

Goertzel, B. (2007). Artificial general intelligence (Vol. 2). C. Pennachin (Ed.). New York: Springer. https://www.researchgate.net/profile/Prof_Dr_Hugo_De_GARIS/publication/226000160_Artificial_Brains/links/55d1e55308ae2496ee658634/Artificial-Brains.pdf 

Goertzel, B. (2014). Artificial general intelligence: concept, state of the art, and future prospects. Journal of Artificial General Intelligence, 5(1), 1. https://sciendo.com/abstract/journals/jagi/5/1/article-p1.xml 

I did discover along the way that Dr. Ben Goertzel, whose papers are referenced above, has made a lot of content on YouTube. You may remember some of the Sophia the robot content (Hanson Robotics) from 2016 to 2018, as it was fairly prevalent in the media. You can read an article from The Verge about this one [1]. If you wanted to dig into a more video-based set of content, then feel free to check out the 7-video playlist on the general theory of general intelligence.

Machine learning – This next set of research will consider both ML and AGI together. 

Pei, J., Deng, L., Song, S., Zhao, M., Zhang, Y., Wu, S., … & Shi, L. (2019). Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature, 572(7767), 106-111. https://aiichironakano.github.io/cs653/Pei-ArtificialGeneralIntelligenceChip-Nature19.pdf 

Silver, D. L. (2011, August). Machine lifelong learning: Challenges and benefits for artificial general intelligence. In International conference on artificial general intelligence (pp. 370-375). Springer, Berlin, Heidelberg. https://www.researchgate.net/profile/Daniel-Silver-3/publication/221328970_Machine_Lifelong_Learning_Challenges_and_Benefits_for_Artificial_General_Intelligence/links/00463515d5bc70ed5c000000/Machine-Lifelong-Learning-Challenges-and-Benefits-for-Artificial-General-Intelligence.pdf 

Conclusion – Back during week 62, I started to question how close we were to touching the singularity, and that question aligns somewhat with when we will see a true AGI. A well-referenced paper was mentioned titled “Future Progress in Artificial Intelligence: A Survey of Expert Opinion,” published in 2016 by Vincent C. Müller and Nick Bostrom [2].

Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555-572). Springer, Cham. https://philpapers.org/rec/MLLFPI

Within that paper they note that expert opinion put a 50/50 chance of a general artificial intelligence, or AGI, being created somewhere between 2040 and 2050. Keep in mind that debating when it will happen does not judge the ethics of creating it or what purpose it would have. Arguments can be made, and are being made, about whether the singularity is inherently good or bad for civil society and civility in general. That is not a consideration I’m working with at the moment. My consideration of this is as an event, or more to the point, right before the event occurs. I did go back and read an article from a 2015 issue of The New Yorker online called, “The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?” [3]. That article is principally about Nick Bostrom and does consider utopia and destruction if you want to go give that a read.

Links and thoughts:

This was a really solid conversation between Kara and Chris. The discussions of journalistic ethics and making choices is what caught my attention. “Chris Cuomo’s Comeback”

https://podcasts.apple.com/us/podcast/chris-cuomos-comeback/id1643307527?i=1000580637440

Top 5 Tweets of the week:

Footnotes:

[1] https://www.theverge.com/2017/11/10/16617092/sophia-the-robot-citizen-ai-hanson-robotics-ben-goertzel 

[2] https://philpapers.org/rec/MLLFPI

[3] https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom

What’s next for The Lindahl Letter?

  • Week 90: What is probabilistic machine learning?
  • Week 91: What are ensemble ML models?
  • Week 92: National AI strategies revisited
  • Week 93: Papers critical of ML
  • Week 94: AI hardware (RISC-V AI Chips)
  • Week 95: Quantum machine learning 
  • Week 96: Where are large language models going?
  • Week 97: MIT’s Twist Quantum programming language
  • Week 98: Deep generative models
  • Week 99: Overcrowding and ML
  • Week 100: Back to ML ROI
  • Week 101: Revisiting my MLOps paper
  • Week 102: ML pracademics
  • Week 103: Rethinking the future of ML
  • Week 104: That 2nd year of posting recap

I’ll try to keep the what’s next list forward looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.