As it stands right now, week 113 of The Lindahl Letter is going to be a long post. Recording it will probably take a good 30 minutes, but it should be interesting. Right now in the evenings I’m using some of the time normally reserved for winding down to push ahead and build back up that 5 week backlog of content. Generally speaking, a longer editing and review process should produce better content. I’m sure a good single session post sometimes works out well enough, but in the aggregate a little bit of editing and review will improve things.
Like most people I cannot fully edit something that I just wrote. Some time has to pass between the initial bit of writing and the editing pass. Sure, some people can switch back and forth with ease, but I miss things. My grammarian tendencies are certainly stronger when a gap exists between writing and editing. Right at the start of the day, when nothing else is happening and no distractions are present, I have some golden time that can be spent writing. For the most part this is about an hour to an hour and a half that includes no disruptions whatsoever. Nobody else in the house is awake and nothing else is going to come into focus. It’s just a question of how that golden time is going to be used.
For the most part I use the first little bit of that time to engage in some stream of consciousness style writing. A blank word processing document gets opened up and I start to type. Sometimes this yields a page or so of prose that could go in any direction. Whatever is top of mind bubbles up and gets my focus and attention until a shift occurs. At some point it always happens: I stop being in the moment, focused on just writing, and begin to work on some other targeted project. Rarely, I will be so consumed by an effort that the moment I wake up I’ll start working on that project and won’t spend the time to clear and focus my thoughts beforehand.
Some people believe in meditation to get to a relaxed state of calm and focus. Based on my needs, I can get to that moment of zerospace just by letting my mind wander until the stream of thoughts slows down. Part of that is being in the practice of having a daily writing routine. I imagine that sitting down only every once in a while would mean it takes a long time to get all the lines of thought down and to reach that point of calmness and reflection. That is one of the reasons that protecting my golden time for writing is so important. It sets up the day and puts me in the right position to work on the hardest things first. That is what works best for me and is a tried and true pattern of habit that I ruthlessly support.
My new Oura ring showed up. The data presentation layer built for that product is different from what Fitbit uses by a good margin. I’m still adjusting and trying to better understand the metrics. The packaging information noted that it might take 2 weeks for the tracking to become more personalized. So far it appears that I ordered the correct size ring and that things are working well enough after 2 nights of sleep.
This weekend was a very productive one. At the start of the weekend, I had no content in the backlog for my Substack series The Lindahl Letter. Now I have three weeks of posts ready to go, which means content is ready to publish until Friday, March 17, 2023. A little bit of productivity this upcoming weekend will help push things back to a backlog of 5 weeks of content in review, which is a good place to be overall. Writing posts for 112 straight weeks is a pretty good pattern or routine.
Recently I have considered letting the writing projects fall off my to do list, but that feeling is normally superseded by the need to get back to work. For the most part that need to get back to work will sustain the momentum needed to keep things on the right track. Part of having that 5 year writing plan is knowing where to focus attention and energy over time. A setback would be problematic, but it would be something that a recovery plan could remedy. I used to write a lot more content every day than I sit down and create now.
Earlier this year I wanted to make this blog a more personal place to drop thoughts and pull things together. That writing objective was not really achieved. For the last couple of decades this particular writing format has been more stream of consciousness than anything else, and it has fundamentally lived up to its name as a functional journal. I can sit down with the intent to journal some thoughts and that writing inherently becomes more functional in nature. Last week was the first week in a long time that I sat down and wrote a missive every weekday. This week could make it two weeks in a row. That would be a good start to getting back into the groove of daily writing for the purposes of a blog.
I did log in to the OpenAI ChatGPT instance yesterday and got it to write an introductory AI ethics book one chapter at a time. The language model spit out a working title of, “AI Ethics: Navigating the Challenges and Opportunities of Artificial Intelligence.” That work product ended up being roughly 11 chapters and 25,000 words. The thought did occur to me to include it in the upcoming Substack post for week 113, “Structuring an introduction to AI ethics.” I probably will have enough time on Saturday morning to edit that content down enough to record it for the podcast. That could be 30 or more minutes of content, but it might be interesting. It would have to be heavily caveated at the start that the content was edited by me but wholesale generated by the OpenAI large language model in response to a series of prompts. Releasing that content directly was my first consideration, but it is probably more responsible to edit the content and provide commentary where the model was right or wrong.
Thank you for tuning in to this audio only podcast presentation. This is week 109 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “Robots in the house.”
Right before sitting down to work on this post I recorded the audio for week 108. My voice has almost recovered all the way from catching that pandemic thing in December. It’s interesting that vocal quality would be one of the last things to stabilize from that distinctly unwanted experience. You have probably been able to hear it within the last few recorded podcasts; I just did not have total vocal control. I might go back and record that audio segment again, but at this point it is entirely possible that won’t happen based on time and the weight of other commitments this week. I was also a little concerned when Audacity, the software I use for this recording, required an update before the recording started, but it was apparently a small one and I was able to start the recording process within a couple of minutes. It seemed very possible that this missive would go out without an audio edition. You, however, know that is not the case.
It turned out that I was able to sit down and record the audio version of this Substack post. Right now I’m working without my backlog due to some unavoidable things that disrupted my writing routine a bit. I’m hopeful that given a little bit of time a few weeks of backlog will build back up, but right now each post is as fresh as it will ever be as things are being written directly before being recorded as part of my weekend writing routine. Typically it is better to have the posts created within the 5 week planning and review cycle. At this point, that is not possible. Welcome to the now and being in the moment as the very freshest words are brought your way within this missive of consideration for Substack.
For the most part households adopted some machines pretty quickly and in a sustained way. Depending on where you are, microwaves, refrigerators, freezers, dishwashers, washing machines, and dryers are a part of the technology footprint in households. None of those devices need to be connected to either WiFi or Bluetooth to be operational. They existed for many years without that type of connectivity. This essay happens to be about robots in the house and the aforementioned appliances are not really what people talk about in terms of modern robotics in the household. It’s something more mobile that gets a lot of attention. To be fair a lot of robot vacuum brands and companies now exist. They roam and clean, get stuck, and have to be rescued and maintained.
I went out to find some papers that referenced the Roomba and they seem to have peaked between 2006 and 2007, which I thought was a very interesting element to see within this literature review. They just sort of stopped in frequency. Here are three of them that were well referenced.
None of these robots in the house are equipped with any conversational subroutines. Within the worlds created by science fiction writers it is not uncommon for the robots in the house to talk back and demonstrate some degree of personality. We currently have no legitimate AGI that would facilitate that type of exchange. It’s probably up next at some point. Now that both Microsoft (powered by OpenAI) and Google are trying to create chat-like interactions I’m guessing that chatting with the robots in the house will arrive as a feature. Right now in Google Scholar searches you can find almost 5,000 results for ChatGPT [1].
Links and thoughts:
Top 6 Tweets of the week:
My proposal for an architecture that can reason, plan, and learn models of reality.
A broad survey of published methods to "augment" Language Models so they can reason, plan, and use tools to elaborate their answers. Tools such as search engines, calculators, code interpreters, database queries, etc, can help LLMs produce factual answers. By @MetaAI – FAIR. https://t.co/SPwVNSRPqY
It's Hard Fork Friday! This week on the show, Bing tries to break up Kevin's marriage, and @zoeschiffer makes her Hard Fork debut https://t.co/B4sR3I4BKR
Week 110: Chatbots and understanding knowledge graphs
Week 111: Natural language processing
Week 112: Autonomous vehicles
Week 113: Structuring an introduction to AI ethics
Week 114: How does confidential computing work?
If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the year ahead.
This week has featured a return to daily stream of consciousness content creation at the very start of the day. Tomorrow I’ll pivot over to working on more academic content per my normal weekend writing strategy. I’m still working from a precarious position where I need to keep up with the publishing schedule in real time. I have drafted up week 110 and with a little bit of editing it will be ready for recording tomorrow morning. That will involve setting up the audio shield and microphone. Recording is not a big part of the process; it does not add that much time to the cycle. For the most part the research and review process takes care of itself. Searching, reading, and thinking about a topic drives the content creation part of the process.
Sometimes the act of collecting all of your thoughts is enough to jump on the springboard of creativity. Other times it is just a good way to start the day. Focused and ready for the next adventure is a great place to be as things start to move around. This missive will finish up a week where things have been published on the blog each day. That is exciting. It has been some time since that has happened. My ability to really utilize the first hour of my day on this type of writing has diminished in the last year. My finest and most productive hour is generally spent on whatever is the highest value. That unfortunately has not been this exercise, but I think that might have been a mistake on my part.
It is very rare that people take time to really try to collect all their thoughts at one time. Trying to bring focus during meditation is usually about clearing all your thoughts away. This would be the opposite of that intent. I’m trying to crash everything together to see what happens as I refine the multitude into a stream of consciousness. It’s a good thing to attempt. For me it helps focus my thoughts into a block of prose like the one you just read. Within that focus, it is the things that jump out or grab my attention that are generally the most important. Things so important that they can leap to the forefront of the mind are generally the ones that need attention. That rule is not always true. I frequently will jump to considering my next meal or whether I should make more shots of espresso. Those jumps are not the intellectually significant ones that help push things forward.
Writing random missives on Twitter generates more views than a blog post gets at the moment. It is not even close as a comparison. I just got done looking at the data and it made me wonder a little bit about the nature of the open internet. A lot of walled off gardens exist where people go to engage and become users of the platform. Sometimes parts of the internet get extended to those gardens, but it is becoming more and more a landscape of paywalled content. More and more publications that write stories within the worlds of entertainment, information, or news open the door to a few free stories and then try to get users to cross over into the paywalled version of things. It’s a world of online gardens of paywalled content that is fundamentally different from a bunch of content interconnected by really simple syndication (RSS) feeds.
This blog, for example, has a large bulk of content from before 2014 that is set to private. Even my collection of online missives housed in this blog format has a section that is essentially paywalled. That wall has been built for an audience of one, but that does not make it any less real as a segmentation against the open internet. That is perhaps the point I’m getting around to as we round out the second paragraph of this thought. Building out a functional search engine for the open internet seems like something that might be a good use of my time. Instead of focusing on all the online gardens of paywalled content that users cannot really access without being subscribers, it might be easier to just help people get to other sources of content. Right now the picture I have in my mind is of delivering two sets of results from curated sets of sources while denoting that one list is essentially paywalled and the other is fundamentally not.
You might be asking yourself if that is essentially my effort at building a really large RSS feed reader that is curated. I think that might very well be what I’m talking about. Building out my own personal really large RSS feed portal is probably a project that I could knock out this summer. Adding the searchable component to it would be a pretty lightweight extension. Overall the value inherent to the project would be my personal curation of the content. That would not be easily replicated as it is essentially a personally tailored news feed with some search extensibility. I may go look to see if I have a parked domain that could be used to house this project.
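As a sketch of what that curated feed portal might look like, here is a minimal Python version using only the standard library. The feed data, titles, and URLs are illustrative placeholders, not real sources; a full version would fetch live feeds and carry a paywalled/open flag per source.

```python
import xml.etree.ElementTree as ET

# Illustrative RSS 2.0 document standing in for a fetched feed.
SAMPLE_RSS = """<rss version="2.0"><channel>
<title>Example Feed</title>
<item><title>Notes on RSS curation</title><link>https://example.com/rss-notes</link></item>
<item><title>Weekend writing routine</title><link>https://example.com/routine</link></item>
</channel></rss>"""

def parse_items(rss_text):
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def search(items, query):
    """The lightweight search extension: case-insensitive title match."""
    q = query.lower()
    return [(title, link) for title, link in items
            if q in (title or "").lower()]

items = parse_items(SAMPLE_RSS)
print(search(items, "rss"))  # matches the curation post only
```

The curation itself would live in the source list rather than the code, which is why the search component stays this lightweight.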
At the start of my writing routine the very first thing that happens is that I put the date at the top of the document. That happens within most mediums that I have used to produce content. The part of that routine that still creates the most reflection on my part is the tick of the year at the start of that YYYY-MM-DD stamp, used explicitly as YYYYMMDD. Other date formats exist, but they are never going to work for me as well as that cherished time based identifier. Within that routine the first little bit is the part that really gets to me during the start of the writing process. I have reached the point in my lifecycle where I look at the year and wonder how we got so far from 1999 on the timeline. For some reason the anchor point in how I put a context to the timeline is squarely placed on 1999 and the changeover to the year 2000. Apparently, that is how I start to unpack the context of how I relate to bringing my experience in line with the now.
All of that consideration in the last paragraph showed up today as I pondered whether it really was 2023 and how we managed to get all the way to that point in the timeline. Without question the simple act of sitting down and typing on this keyboard does not require any particular year or date. In general, I could complete that action without knowing the current date. The two things don’t have to be related in any way, shape, or form. It would be pretty hard within modern society to give up using the calendar. Even if I was devoted to dropping that construct from my routine, it would show back up pretty quickly throughout the day as people made plans and shared upcoming events. I’m not planning to even try to give up grounding my days in the context of what day it happens to be. I am still thinking about how 2023 showed up, and yet here it is.
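As a small aside, the date stamp ritual described above maps cleanly onto code. A minimal Python sketch (the specific date shown is illustrative, not tied to any post):

```python
from datetime import date

d = date(2023, 2, 24)  # an illustrative date

print(d.strftime("%Y-%m-%d"))  # 2023-02-24, the ISO 8601 form
print(d.strftime("%Y%m%d"))    # 20230224, the compact document-header form
```

One quiet virtue of the YYYYMMDD identifier is that dated documents sort chronologically as plain strings, with no date parsing required.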
This will be the third day in a row of posting some content to the blog. I’m sure WordPress will send me a note about being on a streak of some sort. My writing pattern used to be really consistent: creating stream of consciousness prose during the week and spending my weekend time on more academic style writing. That type of writing pattern was really about focusing my thoughts via the act of writing for a block of time and trying to bring my attention and focus to the thing that needed the most support in terms of time and energy. Part of that is about recognizing that we only have so much time and energy to spend and I want to focus it on the right things.
My social media strategy for the blog happens to be allowing WordPress to post a link to each missive directly on Twitter as a single Tweet. Yesterday it seemed like a good idea to post the nearly 4,000 characters of that post into a tweet right after it went out. A total of 3 out of 24 viewers of the tweet expanded it to look at the longer post. I have tried to post a couple of longer tweets using that feature and the engagement has never been better simply because of having thousands of characters. Testing out new features is always a fun thing to do and I’m hopeful that Twitter comes up with a bunch of them in the next few months.
Right now at the start of my day I sat down to write another page of prose and to deeply consider the nature of what is being produced. My Substack efforts are essentially being packaged up as a manuscript at the end of the year. For better or worse that writing effort is essentially a weekly way to turn out a focused chapter of content. We are in the third year of that effort and it has worked out well enough. So far 104 weeks of content have been packaged up on the topic of machine learning. This year the topics under consideration have broadened from machine learning to artificial intelligence (AI) in general. Next year at this time the manuscript that goes out will have a distinct AI focus with a few other topics mixed into that series.
Some weeks will have topics that are adjacent to central topics in the AI series. A few things need to be fleshed out to really dig deeper into what can be done with AI and how it will intersect with modernity. It’s that interaction of AI and modernity that deeply concerns me, enough that I feel compelled to try to write about it on a regular basis. This might very well be the year that I complete my writing effort on producing a book about the intersection of technology and modernity. Getting that writing project produced and on the shelf next to me would be a true achievement of something on my writing plan.
Overall you can tell that today is a bit about dialing in some focus on that writing plan and making sure that things are going down the right path. Getting to the point where each day of writing builds toward something as part of that writing plan is an important piece of the puzzle. The idea that, with a little prompt engineering, this post could be compiled in seconds where it took me dozens of minutes does give me pause. Today marks the second day in a row of producing a good amount of prose at the start of the day that was not really tied to anything special. I just sat down and tried to collect my thoughts. To that end this writing session was successful and posted. Fun times.
People who are paying Twitter right now for access to Twitter Blue would be able to post the content of a weblog into a single tweet for the most part. Right now they are allowing some longer 4,000 character tweets to exist. Only the first 280 characters display in the feed, but the content exists on Twitter and could be displayed if somebody clicked on the Tweet. At one point, in the not so distant past, that would have been a real measure of permanence. Something posted on Twitter could linger for years and even be read into the congressional record. Right now that reality has changed a bit, as the permanence of Twitter is less a resolute consideration of fact. Things are shaping up in ways that make Twitter as a company seem more ephemeral. At any moment, it’s entirely possible something that was valued at 44 billion dollars could become MySpace or Pets.com.
My guess overall is that it will be more like Yahoo: something else will show up and claim the attention of the audience. Right now the Supreme Court is debating the very underpinning of the internet in terms of Section 230 of the Communications Decency Act of 1996, codified in Title 47 of the United States Code. That is perhaps the biggest potential change to how people communicate online since the advent of the internet as a mass communication platform. That is not hyperbole in any way, shape, or form. Removing Section 230 would change the way people utilize and interact with online platforms. Things may well get very interesting at some point in the not so distant future. It made me think a bit about what I should do with my online content shared on websites right now. That thought made me sit back and give permanence and the blog a bit more thought than it deserves.
Right now my oldest musings can be found on the Internet Archive Wayback Machine. For better or worse those musings were scraped and live on within that framework of online archiving. Currently, the bulk of my weblog is set to a private mode, with 1,160 posts walled off from easy access. Within that collection of walled off content is pretty much everything written before 2014. I have considered retiring things older than a year, two years, and five years as a natural cycle of my blog content. One thing that happens, or has happened so far, is that I consider putting all that blog content into manuscript form and the idea makes me cringe. It would be a huge amount of work for a very limited payoff. Generally, just thinking about taking that course of action is enough to stop me from ever really doing it. I’ll admit that a couple of documents do exist with what would have been the corpus of content, but they never got edited. All that ended up happening was that in my enthusiasm at the moment I started the process of backing up the content.
This blog is actually backed up in a couple of different ways right now. Offline copies of the content are exported from WordPress and kept in a few locations. The whole website and database are also backed up and could be restored via a snapshot method. I’m confident that any actual effort to do a restoration would be difficult and ultimately frustrating. Every time the weblog itself has been lost before, it was the images linked from posts that ended up getting destroyed. Backing up words is easier than backing up the totality of the word, image, video, and formatting structure.
It turned out that this blog post was not actually 4,000 characters long. I went over to Twitter to post it and the character counter was not all the way full. My hope for this work this morning was to produce a completely full tweet. Ultimately, I was very close to achieving that goal without building out this last little bit of filler at the very end. If you got to this part of the content, then you probably get it and know just why this last little bit exists.
Thank you for tuning in to this audio only podcast presentation. This is week 108 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “Twitter as a company probably would not happen today.”
Part of what made Twitter so interesting is the diversity of argument and its townhall nature as the first place things show up in the feed. I’m not sure any other company or social platform could attract the same number of hyperactive content creators geared at news and coverage of the moment. People are arguing, and I’m sure papers will soon arrive to describe the end of social media. This weekend I’m going to spend a bit of time reading “The Upswing,” a book by Robert Putnam of “Bowling Alone” fame.
Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. Simon and Schuster.
Putnam, R. D. (2020). The upswing: How America came together a century ago and how we can do it again. Simon and Schuster.
It’s probably somewhere in the meta-analysis between social capital and social media that a compelling story exists about why Twitter as a company would not happen today. During the course of this analysis you are going to receive two different lines of inquiry. First, I’ll consider the nature of Twitter and a few books related to it and Silicon Valley in general. Second, we will dig into some of the AI and sentiment analysis scholarly work related to that field of study to help keep the writing trajectory for the year on track.
Books have arrived to tell the stories of what happened in Silicon Valley. A lot of unlikely things happened within the borders of the space described as Silicon Valley. Some of them will be a part of business courses for decades to come. It truly is an interesting thing that so much creativity and output happened in such a relatively small area.
Three of the books that I have enjoyed are listed below.
Bilton, N. (2014). Hatching Twitter: A true story of money, power, friendship, and betrayal. Penguin.
Frier, S. (2021). No filter: The inside story of Instagram. Simon and Schuster.
Wiener, A. (2020). Uncanny valley: A memoir. MCD.
You can zoom out a bit and grab some classic Silicon Valley reading like:
Isaacson, W. (2014). The innovators: How a group of inventors, hackers, geniuses and geeks created the digital revolution. Simon and Schuster.
A lot of scholars over the years have focused their attention on Twitter for a variety of purposes. You can imagine that my interest and the interest of those scholars overlap around the ideas of AI and sentiment analysis. Digital agents abound within the Twitter space and some of them are doing some type of sentiment analysis with what scholars are identifying as artificial intelligence. That second part of the equation makes me a little bit skeptical about the totality of the claims being made. We will jump right into the deep end of Google Scholar on this one anyway [1].
Papers from a search for “Sentiment analysis Twitter artificial intelligence” [2]
Kouloumpis, E., Wilson, T., & Moore, J. (2011). Twitter sentiment analysis: The good the bad and the omg!. In Proceedings of the international AAAI conference on web and social media (Vol. 5, No. 1, pp. 538-541). https://ojs.aaai.org/index.php/ICWSM/article/download/14185/14034
Ghiassi, M., Skinner, J., & Zimbra, D. (2013). Twitter brand sentiment analysis: A hybrid system using n-gram analysis and dynamic artificial neural network. Expert Systems with applications, 40(16), 6266-6282.
Giachanou, A., & Crestani, F. (2016). Like it or not: A survey of twitter sentiment analysis methods. ACM Computing Surveys (CSUR), 49(2), 1-41. https://arxiv.org/pdf/1601.06971.pdf
I had considered some evaluation of searches for both “opinion mining Twitter artificial intelligence” and “artificial intelligence analysis of public attitudes” [3][4]. It’s possible some papers from both of those searches show up later. Generally, all of that argument and content could be broken down into two camps of intelligence gathering related to advertising and general opinion mining geared at understanding sentiment. One divergent thread of research from those two would be some of the efforts to identify fake or astroturf content. You can imagine that flooding either fake or astroturf content could change the dynamic for advertising or sentiment analysis. Advertising to a community of bots is a rather poor use of scarce resources.
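To make concrete what these sentiment analysis systems are doing at the simplest level, here is a toy lexicon-based scorer in Python. This is not what the surveyed papers implement — they use n-gram features, dynamic neural networks, and far larger lexicons — and the word lists below are purely illustrative.

```python
# Tiny illustrative sentiment lexicons; real systems use thousands of
# weighted terms rather than these handfuls of words.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "broken"}

def sentiment(tweet):
    """Classify a tweet as positive, negative, or neutral by lexicon counts."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great feature"))  # positive
print(sentiment("The mentions are broken"))    # negative
```

Even this toy version hints at why astroturf content is a problem: flooding a topic with scripted positive or negative vocabulary shifts the aggregate score without reflecting any real opinion.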
Links and thoughts:
Top 5 Tweets of the week:
Data on the intellectual contribution to AI from various research organizations. Some of those organizations publish knowledge and open-source code for the entire world to use. Others just consume it. https://t.co/BGxTP1lkXB
Elon should focus his team on fixing the mentions so we actually can see people responding to our content instead of trying to engagement farm. I just want to use twitter to talk to people and the main FEATURE of twitter has been broken for months.
Finally, our incremental learning paper is accepted by ICLR 2023: https://t.co/Ptai0xJzsC. I do believe closed loop transcription is the most natural and basic framework for autonomous memory forming in both incremental and unsupervised settings.
It would be interesting if AI destroys Google's search business, and YouTube ends up being the most valuable thing they have left. Stranger things have happened. https://t.co/J7FtatSnDu
Week 113: Structuring an introduction to AI ethics
If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the year ahead.
Thank you for tuning in to this audio only podcast presentation. This is week 107 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “Highly cited AI papers.”
This week it seemed like a good idea to take a look at some of the most highly cited AI papers. Back during week 81 of this journey into writing on Substack, I took a look at some of the most highly cited ML papers [1]. I was expecting a lot more overlap, but was pleasantly surprised at the differences. One of the papers really stood out based on the total number of citations and it’s up first. Intellectually I can accept that a paper has more than 100,000 citations, but in practice that is an awful lot of references for an academic paper to have and a representation of a degree of asynchronous interaction between researchers that helps bring the intellectual space called the academy to life.
Papers with over 100,000 citations:
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. https://arxiv.org/pdf/1412.6980.pdf
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., … & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533. https://daiwk.github.io/assets/dqn.pdf
Ioffe, S., & Szegedy, C. (2015, June). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning (pp. 448-456). PMLR. http://proceedings.mlr.press/v37/ioffe15.pdf
Papers with over 10,000 citations:
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., … & Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489.
Kipf, T. N., & Welling, M. (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. https://arxiv.org/pdf/1609.02907.pdf
Without question, citation counts at these levels indicate that the works are being read and actively referenced within the scholarly community. I’m sharing the numbers here to give you a sense of scale when it comes to the AI community and how many people are researching and considering the things happening in this space. It is a crowded and vibrant corner of the academy, where a great deal of time and effort is going into building very real and deployable technology. Given the sheer volume of people working on these problems, it’s only a matter of time before somebody shouts “Eureka!” and we see practical deployments in production that influence our daily lives.
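To give a sense of why a paper like Adam gets cited so heavily, its core update rule is compact enough to sketch in a few lines. The Python below is a minimal, illustrative version (the function name, the toy objective f(x) = x², and the step count are my own choices, not from the paper), using the default hyperparameters the authors suggest:

```python
# Minimal sketch of the Adam update rule from Kingma & Ba (2014),
# applied to the toy objective f(x) = x^2, whose gradient is f'(x) = 2x.
# Hyperparameter defaults follow the paper's suggestions.

def adam_minimize(grad, x, steps=200, alpha=0.1,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    m, v = 0.0, 0.0  # first and second moment estimates
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g        # biased first moment update
        v = beta2 * v + (1 - beta2) * g * g    # biased second moment update
        m_hat = m / (1 - beta1 ** t)           # bias-corrected estimates
        v_hat = v / (1 - beta2 ** t)
        x -= alpha * m_hat / (v_hat ** 0.5 + eps)
    return x

# Starting far from the minimum at 0, the iterates move steadily toward it.
result = adam_minimize(lambda x: 2 * x, x=5.0)
```

The bias-correction terms are the detail that sets Adam apart from earlier momentum-style methods; without them the moment estimates are skewed toward zero early in training.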
What would ChatGPT create?
If you were wondering what ChatGPT from OpenAI would have generated with the same prompt, then you are in luck. I generated the output over at https://chat.openai.com/chat, and it is reproduced verbatim below, including any inaccuracies in its paper titles and dates.
Highly Cited AI Papers: A Look at the Most Influential Research
Artificial Intelligence (AI) is a rapidly advancing field with a wealth of research being published on a regular basis. But with so many papers to choose from, it can be difficult to know which ones are the most influential and worth reading. To help you navigate the landscape, here is a list of some of the most highly cited AI papers of all time:
“A Few Useful Things to Know About Machine Learning” by Pedro Domingos (2009) – This paper provides a comprehensive overview of the key concepts and techniques used in machine learning, making it a great starting point for anyone new to the field.
“LeNet-5: The First Convolutional Neural Network” by Yann LeCun et al. (1998) – This paper describes the first convolutional neural network (CNN) architecture, which is still widely used today in image recognition and other tasks.
“ImageNet Classification with Deep Convolutional Neural Networks” by Alex Krizhevsky et al. (2012) – This paper describes the architecture and training techniques used in the winning entry of the 2012 ImageNet competition, which sparked a renewed interest in deep learning and convolutional neural networks.
“Human-level control through deep reinforcement learning” by Volodymyr Mnih et al. (2015) – This paper describes the development of the first deep reinforcement learning (RL) agent that could match or surpass human performance on a wide range of Atari games.
“Generative Adversarial Networks” by Ian Goodfellow et al. (2014) – This paper introduces the idea of generative adversarial networks (GANs), which have since become a popular technique for generating images, videos, and other types of data.
“BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” by Jacob Devlin et al. (2018) – This paper describes BERT, a pre-trained transformer-based model that has become the state-of-the-art in many natural language processing (NLP) tasks.
These papers have made a significant impact in the field of AI and are considered as the foundation for many of the current advancements in the field. Reading them can help you understand the key concepts and the latest developments in the field and also helps to keep you updated with the current trends.
It’s worth noting that this list is not exhaustive and there are many other highly cited papers in the field of AI. However, this selection of papers represents some of the most important contributions to the field and can serve as a starting point for further exploration.
Week 108: Twitter as a company probably would not happen today
Week 109: Robots in the house
Week 110: Understanding knowledge graphs
Week 111: Natural language processing
Week 112: Autonomous vehicles
Week 113: Structuring an introduction to AI ethics
If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the year ahead.