Thank you for tuning in to this audio-only podcast presentation. This is week 107 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “Highly cited AI papers.”
This week it seemed like a good idea to take a look at some of the most highly cited AI papers. Back during week 81 of this journey into writing on Substack, I took a look at some of the most highly cited ML papers [1]. I was expecting a lot more overlap, but was pleasantly surprised at the differences. One of the papers really stood out based on its total number of citations, and it’s up first. Intellectually I can accept that a paper has more than 100,000 citations, but in practice that is an enormous number of references for any academic paper to accumulate. It represents a degree of asynchronous interaction between researchers that helps bring the intellectual space called the academy to life.
Papers with over 100,000 citations:
- Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. https://arxiv.org/pdf/1412.6980.pdf
Papers with over 50,000 citations:
- Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28. https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30. https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
Papers with over 20,000 citations:
- Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., … & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533. https://daiwk.github.io/assets/dqn.pdf
- Ioffe, S., & Szegedy, C. (2015, June). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning (pp. 448-456). PMLR. http://proceedings.mlr.press/v37/ioffe15.pdf
Papers with over 10,000 citations:
- Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., … & Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489.
- Arjovsky, M., Chintala, S., & Bottou, L. (2017, July). Wasserstein generative adversarial networks. In International conference on machine learning (pp. 214-223). PMLR. http://proceedings.mlr.press/v70/arjovsky17a/arjovsky17a.pdf
- Kipf, T. N., & Welling, M. (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. https://arxiv.org/pdf/1609.02907.pdf
Without question, citation counts at these levels indicate that the works are being read and actively referenced within the scholarly community. I’m sharing the citation numbers here to give you a sense of scale: how many people are researching and considering the things happening in AI. This is a crowded and vibrant corner of the academy, where a great deal of time and effort is going into moving things along toward very real and deployable technology. Given the sheer volume of people working on these problems, it’s only a matter of time before somebody shouts “Eureka!” and we see practical deployments in production that influence our daily lives.
What would ChatGPT create?
If you were wondering what ChatGPT from OpenAI would generate with the same prompt, then you are in luck. I had that output generated over at https://chat.openai.com/chat by issuing a prompt.
Highly Cited AI Papers: A Look at the Most Influential Research
Artificial Intelligence (AI) is a rapidly advancing field with a wealth of research being published on a regular basis. But with so many papers to choose from, it can be difficult to know which ones are the most influential and worth reading. To help you navigate the landscape, here is a list of some of the most highly cited AI papers of all time:
- “A Few Useful Things to Know About Machine Learning” by Pedro Domingos (2009) – This paper provides a comprehensive overview of the key concepts and techniques used in machine learning, making it a great starting point for anyone new to the field.
- “LeNet-5: The First Convolutional Neural Network” by Yann LeCun et al. (1998) – This paper describes the first convolutional neural network (CNN) architecture, which is still widely used today in image recognition and other tasks.
- “ImageNet Classification with Deep Convolutional Neural Networks” by Alex Krizhevsky et al. (2012) – This paper describes the architecture and training techniques used in the winning entry of the 2012 ImageNet competition, which sparked a renewed interest in deep learning and convolutional neural networks.
- “Human-level control through deep reinforcement learning” by Volodymyr Mnih et al. (2015) – This paper describes the development of the first deep reinforcement learning (RL) agent that could match or surpass human performance on a wide range of Atari games.
- “Generative Adversarial Networks” by Ian Goodfellow et al. (2014) – This paper introduces the idea of generative adversarial networks (GANs), which have since become a popular technique for generating images, videos, and other types of data.
- “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” by Jacob Devlin et al. (2018) – This paper describes BERT, a pre-trained transformer-based model that has become the state-of-the-art in many natural language processing (NLP) tasks.
These papers have made a significant impact in the field of AI and are considered as the foundation for many of the current advancements in the field. Reading them can help you understand the key concepts and the latest developments in the field and also helps to keep you updated with the current trends.
It’s worth noting that this list is not exhaustive and there are many other highly cited papers in the field of AI. However, this selection of papers represents some of the most important contributions to the field and can serve as a starting point for further exploration.
Links and thoughts:
Top 5 Tweets of the week:
Footnotes:
[1] https://nelslindahl.substack.com/p/a-machine-learning-literature-review#details
What’s next for The Lindahl Letter?
- Week 108: Twitter as a company probably would not happen today
- Week 109: Robots in the house
- Week 110: Understanding knowledge graphs
- Week 111: Natural language processing
- Week 112: Autonomous vehicles
If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the year ahead.