Every Friday (for the foreseeable future) I’ll be publishing a post related to #MachineLearning #ML #ArtificialIntelligence #AI and #BusinessStrategy using #Substack
The week-by-week Lindahl Letter roundup:
- Week 1: Machine Learning Return On Investment (MLROI)
- Week 2: Machine Learning Frameworks & Pipelines
- Week 3: Machine Learning Teams
- Week 4: Have an ML strategy… revisited
- Week 5: Let your ROI drive a fact-based decision-making process
- Week 6: Understand the ongoing cost and success criteria as part of your ML strategy
- Week 7: Plan to grow based on successful ROI
- Week 8: Is the ML we need everywhere now?
- Week 9: Valuing ML use cases based on scale
- Week 10: Model extensibility for few shot GPT-2
- Week 11: What is ML scale? The where and the when of ML usage
- Week 12: Confounding within multiple ML model deployments
- Week 13: Building out your ML Ops
- Week 14: My Ai4 Healthcare NYC 2019 talk revisited
- Week 15: What are people really doing with machine learning?
- Week 16: Ongoing ML cloud costs
- Week 17: Figuring out ML readiness
- Week 18: Could ML predict the lottery?
- Week 19: Fear of missing out on ML
- Week 20: The big Lindahl Letter recap edition
- Week 21: Doing machine learning work
- Week 22: Machine learning graphics
- Week 23: Fairness and machine learning
- Week 24: Evaluating machine learning
- Week 25: Teaching kids ML
- Week 26: Machine learning as a service
- Week 27: The future of machine learning
- Week 28: Machine learning certifications?
- Week 29: Machine learning feature selection
- Week 30: Integrations and your ML layer
- Week 31: Edge ML integrations
- Week 32: Federating your ML models
- Week 33: Where are AI investments coming from?
- Week 34: Where are the main AI Labs? Google Brain, DeepMind, OpenAI
- Week 35: Explainability in modern ML
- Week 36: AIOps/MLOps: Consumption of AI Services vs. operations
- Week 37: Reverse engineering GPT-2 or GPT-3
- Week 38: Do most ML projects fail?
- Week 39: Machine learning security
- Week 40: Applied machine learning skills
- Week 41: Machine learning and the metaverse
- Week 42: Time crystals and machine learning
- Week 43: Practical machine learning
- Week 44: Machine learning salaries
- Week 45: Prompt engineering and machine learning
- Week 46: Machine learning and deep learning
- Week 47: Anomaly detection and machine learning
- Week 48: Machine learning applications revisited
- Week 49: Machine learning assets
- Week 50: Is machine learning the new oil?
- Week 51: What is scientific machine learning?
- Week 52: That one with a machine learning post
- Week 53: Machine learning interview questions
- Week 54: What is a Chief AI Officer (CAIO)?
- Week 55: Who is acquiring machine learning patents?
- Week 56: Comparative analysis of national AI strategies
- Week 57: How would I compose an ML syllabus?
- Week 58: Teaching or training machine learning skills
- Week 59: Multimodal machine learning revisited
- Week 60: General artificial intelligence
- Week 61: AI network platforms
- Week 62: Touching the singularity
- Week 63: Sentiment and consensus analysis
- Week 64: Language models revisited
- Week 65: Ethics in machine learning
- Week 66: Does a digital divide in machine learning exist?
- Week 67: Who still does ML tooling by hand? (My thoughts on NFTs bonus issue)
- Week 68: Publishing a model or selling the API?
- Week 69: A machine learning cookbook?
- Week 70: ML and Web3 (decentralized internet)
- Week 71: What are the best ML newsletters? (Machine learning and surveillance bonus issue)
- Week 72: Open source machine learning security
- Week 73: Symbolic machine learning
- Week 74: ML content automation
- Week 75: Is ML destroying engineering colleges?
- Week 76: What is post theory science?
- Week 77: Is quantum machine learning gaining momentum?
- Week 78: Trust and the future of digital photography: A zero trust image paradigm
- Week 79: Why is diffusion so popular?
- Week 80: Bayesian optimization (ML syllabus edition 1/8)
- Week 81: A machine learning literature review (ML syllabus edition 2/8)
- Week 82: ML algorithms (ML syllabus edition 3/8)
- Week 83: Machine learning approaches (ML syllabus edition 4/8)
- Week 84: Neural networks (ML syllabus edition 5/8)
- Week 85: Neuroscience (ML syllabus edition 6/8)
- Week 86: Ethics, fairness, bias, and privacy (ML syllabus edition 7/8)
- Week 87: MLOps (ML syllabus edition 8/8)
- Week 88: The future of publishing
- Week 89: Your ML model is not an AGI
- Week 90: What is probabilistic machine learning?
- Week 91: What are ensemble ML models?
- Week 92: National AI strategies revisited
- Week 93: Papers critical of ML
- Week 94: AI hardware (RISC-V AI Chips)
- Week 95: Quantum machine learning
- Week 96: Where are large language models going?
- Week 97: MIT’s Twist Quantum programming language
- Week 98: Deep generative models
- Week 99: Overcrowding and ML
- Week 100: Back to ML ROI
- Week 101: Revisiting my MLOps paper
- Week 102: ML pracademics
- Week 103: Rethinking the future of ML
- Week 104: That 2nd year of posting recap
Phase 2
- Week 105: Open source MLOps paper (from talks)
- Week 106: eGov 50 revisited paper
- Week 107: Local government technology budget study
- Week 108: The fall of public space paper (could be a book)
- Week 109: A paper on the quadrants of doing
- Week 110: A brief look at my perspective on interns
- Week 111: Some perspective on the audience size of ML and why…
- Week 112: ML model stacking
- Week 113: Something on reverse federation
- Week 114: A hyperbolic look at the conjoined triangles of ML
- Week 115: A literature review of modern polling methodology
- Week 116: A literature study of mail vs. non-mail polling methodology in practice and research
- Week 117: ML mesh
- Week 118: A paper on political debt as a concept vs. technical debt
- Week 119: How does confidential computing work?
- Week 120: What are AI factories?