The Lindahl Letter

Every Friday (for the foreseeable future) I’ll be publishing a post related to #MachineLearning #ML #ArtificialIntelligence #AI and #BusinessStrategy on #Substack

The week-by-week Lindahl Letter roundup:

  • Week 1: Machine Learning Return On Investment (MLROI)
  • Week 2: Machine Learning Frameworks & Pipelines
  • Week 3: Machine Learning Teams
  • Week 4: Have an ML strategy… revisited
  • Week 5: Let your ROI drive a fact-based decision-making process
  • Week 6: Understand the ongoing cost and success criteria as part of your ML strategy
  • Week 7: Plan to grow based on successful ROI
  • Week 8: Is the ML we need everywhere now? 
  • Week 9: Valuing ML use cases based on scale
  • Week 10: Model extensibility for few shot GPT-2
  • Week 11: What is ML scale? The where and the when of ML usage
  • Week 12: Confounding within multiple ML model deployments
  • Week 13: Building out your ML Ops 
  • Week 14: My Ai4 Healthcare NYC 2019 talk revisited
  • Week 15: What are people really doing with machine learning?
  • Week 16: Ongoing ML cloud costs
  • Week 17: Figuring out ML readiness
  • Week 18: Could ML predict the lottery?
  • Week 19: Fear of missing out on ML
  • Week 20: The big Lindahl Letter recap edition
  • Week 21: Doing machine learning work
  • Week 22: Machine learning graphics
  • Week 23: Fairness and machine learning
  • Week 24: Evaluating machine learning
  • Week 25: Teaching kids ML
  • Week 26: Machine learning as a service
  • Week 27: The future of machine learning
  • Week 28: Machine learning certifications? 
  • Week 29: Machine learning feature selection
  • Week 30: Integrations and your ML layer
  • Week 31: Edge ML integrations
  • Week 32: Federating your ML models
  • Week 33: Where are AI investments coming from?
  • Week 34: Where are the main AI Labs? Google Brain, DeepMind, OpenAI
  • Week 35: Explainability in modern ML
  • Week 36: AIOps/MLOps: Consumption of AI Services vs. operations
  • Week 37: Reverse engineering GPT-2 or GPT-3
  • Week 38: Do most ML projects fail?
  • Week 39: Machine learning security
  • Week 40: Applied machine learning skills
  • Week 41: Machine learning and the metaverse
  • Week 42: Time crystals and machine learning
  • Week 43: Practical machine learning
  • Week 44: Machine learning salaries
  • Week 45: Prompt engineering and machine learning
  • Week 46: Machine learning and deep learning
  • Week 47: Anomaly detection and machine learning
  • Week 48: Machine learning applications revisited 
  • Week 49: Machine learning assets
  • Week 50: Is machine learning the new oil?
  • Week 51: What is scientific machine learning?
  • Week 52: That one with a machine learning post
  • Week 53: Machine learning interview questions
  • Week 54: What is a Chief AI Officer (CAIO)?
  • Week 55: Who is acquiring machine learning patents?
  • Week 56: Comparative analysis of national AI strategies
  • Week 57: How would I compose an ML syllabus?
  • Week 58: Teaching or training machine learning skills
  • Week 59: Multimodal machine learning revisited
  • Week 60: General artificial intelligence
  • Week 61: AI network platforms
  • Week 62: Touching the singularity
  • Week 63: Sentiment and consensus analysis
  • Week 64: Language models revisited
  • Week 65: Ethics in machine learning
  • Week 66: Does a digital divide in machine learning exist?
  • Week 67: Who still does ML tooling by hand?
  • Week 68: Publishing a model or selling the API?
  • Week 69: A machine learning cookbook?
  • Week 70: ML and Web3 (decentralized internet)
  • Week 71: What are the best ML newsletters?
  • Week 72: Open source machine learning security
  • Week 73: Symbolic machine learning
  • Week 74: ML content automation
  • Week 75: Is ML destroying engineering colleges?
  • Week 76: What is post-theory science?
  • Week 77: What is GPT-NeoX-20B?
  • Week 78: A history of machine learning acquisitions
  • Week 79: Bayesian optimization
  • Week 80: Deep learning
  • Week 81: Classic ML algorithms
  • Week 82: Classic neural networks
  • Week 83: Neuroscience
  • Week 84: Reinforcement learning
  • Week 85: Graph neural networks
  • Week 86: Ethics (fairness, bias, privacy)
  • Week 87: Quantum computing revisited
  • Week 88: The future of publishing
  • Week 89: Understanding data quality
  • Week 90: A plumber for ML pipelines?
  • Week 91: MIT’s Twist quantum programming language
  • Week 92: RISC-V AI chips
  • Week 93: What is probabilistic machine learning?
  • Week 94: Deep generative models
  • Week 95: Distribution shift in ML
  • Week 96: Rethinking the future of ML
  • Week 97: Shifting national AI strategies
  • Week 98: Back to ML ROI
  • Week 99: Revisiting my MLOps paper
  • Week 100: Where are large language models going?
  • Week 101: ML pracademics 
  • Week 102: Quantum machine learning
  • Week 103: Overcrowding and ML 
  • Week 104: That 2nd year of posting recap

Phase 2 – Paper topics

  • Week 105: Open source MLOps paper (from talks)
  • Week 106: eGov 50 revisited paper
  • Week 107: Local government technology budget study
  • Week 108: The fall of public space paper (could be a book)
  • Week 109: A paper on the quadrants of doing
  • Week 110: A brief look at my perspective on interns
  • Week 111: Some type of perspective on the audience size of ML and why…
  • Week 112: ML model stacking
  • Week 113: Something on reverse federation
  • Week 114: A hyperbolic look at the conjoined triangles of ML
  • Week 115: A literature review of modern polling methodology
  • Week 116: A literature study of mail vs. non-mail polling methodology in practice and research
  • Week 117: ML mesh
  • Week 118: A paper on political debt as a concept vs. technical debt