That first Substack Go event

My initial reaction to the first Substack Go event today was to appreciate that they are trying to bring people together to form communities of interest. I’m curious when they will go old school over at Substack and start keeping a list of authors by topic like a catalog. Maybe we will see the return of the blogroll. Things opened up with a Substack community organizer talking about the programming and giving a rousing welcome to everybody across the globe who was in attendance.

81 other people from all over the globe were at the kickoff of the Substack Go event series with Katie from the community team. I was a part of the very first Tolstoy hour event. Apparently, we are being connected to a squad of other writers to work together and collaborate. I’m pretending this was some type of Hogwarts sorting hat magic. I ended up being sorted into a small group of about 5 other writers. We got dropped into a Zoom breakout room to talk outside of the large group.

We opened by sharing our names and what we write about. Obviously, I shared that The Lindahl Letter just hit one year of weekly publishing. You can find that collection of thoughts here: https://nelslindahl.substack.com/ 

Bringing a collection of writers together will yield some fairly predictable results. We talked about the following topics:

  • Writing routines
  • Editing 
  • Deciding what to write
  • Growth hacking? Building your subscriber base
  • How to keep in touch

During the event I subscribed to several different Substack newsletters. I learned about a Discord group called, “Substack Writers Unite.” Summing up this very first Substack Go event would be as easy as tagging it as an introductory meet and greet. Nothing was recorded and the conversations were decent. I’m looking forward to the next event on Friday morning. 

Trying to refocus

Overall my plan to put the smartphone down and not keep it with me all the time is working out pretty well. That VTech “Connect to Cell” system works well enough as an extended headset for my Google Pixel 5 smartphone that it is almost like a house phone. It is sort of weird to hear phone calls ringing throughout the entire house again like the days of yore when landlines were a common household feature.

The vast majority of the application alerts and notices that I spent all day clearing out are not necessary. A lot of unnecessary attention was going to that smartphone each day and that was easy enough to stop. For the next couple of days I have planned time off that could be spent writing and working on a few things. 

Today I picked back up and worked a little bit on my week 30 Substack post. It needed a little bit of refinement and rework to be ready for Friday. Intellectually I know that I should spend a few minutes on the next few posts and get them into suitably completed drafts. Initially I was able to work ahead a bit further than I am now, but for some reason that process broke down and I am just working on one week at a time. If the content being produced were real time, then that would make sense as an approach. The content, however, is planned out weeks in advance, making it much easier to produce drafts in a queue instead of working in real time to be timely based on the news of the day. Maybe that is the key to unlocking a different type of content at some point in the future. I have considered turning the weekly Substack post into both a YouTube video and a weekly podcast. I’m actually curious what has stopped me from turning the first 30 weeks of content into multimedia formats. It is probably some type of weird nostalgia for the written newsletters of the past.

Getting really focused and locking in to write for a prolonged period of time seems to be elusive. I’m able to focus on topics and complete work, but I’m struggling with really spending hours working on the same thing. That is something that is going to need to be remedied before longer form prose and projects are going to get done. Part of that is just being able to sit and type for a sustained burst of 30 minutes without shifting around and working on different things. Even right now Rocky the dog is trying to distract me with growls at a reflection in the glass of the door. It is way before sunrise right now and nobody is stirring in the house. Right now is the time for me to write and for Rocky the dog to hang out in my office.

Actualizing my stop doing list

Earlier this week I set up a VTech “Connect to Cell” system at the house. It basically connects to my smartphone via Bluetooth pretending to be a headset that is always connected. With that simple connection it is ready to answer calls and it rings at three different base stations throughout the house when I’m home with my smartphone. Setting up this system was pretty simple. It allows me to treat my cell phone like a home phone and leave it on the charging stand in my office. Part of this endeavor is to try to avoid touching my smartphone for a longer period of time during the day. Checking the alerts and notifications on my phone dozens of times a day is not really a productive thing to do with my time and energy. It is something that I’m trying very hard to put on my stop doing list. The power of the stop doing list is in how it frees up your energy and effort to work on the to-do part of the list.

Every morning I wake up before the sunrise and try to focus all of my attention and efforts at the start of the day to the act of writing. Being a writer demands some type of routine that actively directs your energy toward the production of prose. That is how my daily routine works. I focus all of my energy without any distractions on the act of writing. Sometimes a little bit of research or other activities creep into the mix, but for the most part the simple act of dancing on the keyboard is what happens and it really is the essence of what should happen. Thoughts are converted from that present point of view into keystrokes. Ideally that would happen for several pages of prose creation at a time, but for some reason it seems to end up being something that happens in about a single page serving at a time. During the course of writing and focusing on the idea at hand something will inevitably pull me out of the typing and creation process and that shift will cause a breakdown in further prose creation. It’s amazing how powerful shifting your focus can be at any given time.

Generally in the background either YouTube or Pandora is playing something that occupies a little bit of my attention. Just enough of my attention to help keep me in the pocket of writing, but not enough to totally grab my attention away from the task at hand. Last week I pulled apart the Google Doc that houses all my Substack posts from “The Lindahl Letter” and started to convert it into a Microsoft Word document capable of being published as a book. This time around for that effort I landed on using a paper size of A5. That seemed like a good size to format the content into for this journey. Today I just finished work on the content that will go out on Friday titled, “Substack Week 30: Integrations and your ML layer.” I’m going to have to remove the links and Tweets sections of each post to make the content more in line with a traditional paper bound publication. Part of the joy of a newsletter format is that the content can include live links as the delivery mechanism goes to phones and computers where people can interact with the content and open links. A more traditional manuscript is not geared toward that level of interaction. It is something that will generally be read from start to finish without a bunch of outbound links to videos or other content.
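Stripping the links out of each post for the manuscript could be scripted rather than done by hand. Here is a rough sketch in Python; the “Links:” and “Tweets:” header names are assumptions for illustration, not the actual section titles in the newsletter:

```python
import re

def strip_links(post_text):
    """Remove bare URLs and a trailing links/Tweets style section from a
    post so it reads like print-ready prose. The section header names
    here are assumptions for illustration."""
    # Drop everything from a "Links:" or "Tweets:" header onward.
    text = re.split(r"^\s*(Links|Tweets):\s*$", post_text,
                    flags=re.MULTILINE)[0]
    # Remove any inline URLs that remain.
    text = re.sub(r"https?://\S+", "", text)
    # Collapse the double spaces left behind by removed URLs.
    return re.sub(r"[ \t]{2,}", " ", text).strip()

post = "Read more at https://example.com today.\n\nLinks:\nhttps://example.com/a\n"
print(strip_links(post))  # Read more at today.
```

Something like this would still need a manual pass afterward, since sentences written around a link usually need rewording once the link is gone.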

Oh yeah — I need to circle back to the process of writing weekly missives in “The Lindahl Letter” newsletter and how that will end up being a book. I have 52 topics selected and queued up as part of the writing process. That means based on the previous shared information the project currently stands at 30 of 52 chapters being completed. Working with that content to edit, refine, and rework it to be a great start to finish read is going to require moving the content from the Google Doc each week into a more manuscript friendly format. During the course of that process I’m also going to need to really focus on reworking and expanding some of the content to be more academic instead of the purely conversational tone of the weekly newsletter. I’m not going to remove all the personal touches and invocations of personality as that would make the final product less appealing to a reader, but some rework is going to be necessary to make the final product more polished. 

I’m not entirely sure at the moment where that manuscript is going to end up getting published. It is pretty easy to publish an eBook out in the market. I know how to do that without any assistance from a publisher or a literary agent. 

My attention shifted to working on a novel called “Else” that was started back in 2018. Right now the novel is really the length of a short story; it stopped after a couple thousand words. It’s weird to read something that was written years ago and pick up the writing style and tone. You have to be in the right mood to make something like that work out. It will probably be easy enough for somebody to figure out exactly what chapter the previous effort ended on and what chapter I picked up writing today. This is one of those stories that is going to be written from start to finish and then edited.

Substack Week 4: Have an ML strategy… revisited

The post for week 4 is now up and live.

Welcome to the 4th post in this ongoing Substack series. This is the post where I’m going to go back and revisit two very important machine learning questions. First, I’ll take a look back at my answers to the question, “What exactly is an ML strategy?” Second, that will set the foundation to really dig in and answer the question, “Do you even need an ML strategy?” Obviously, the answer to the question is a hard yes and you know that without question or hesitation.

1. What exactly is an ML strategy?

As you start to sit down and begin the adventure that is linking budget line items to your machine learning strategy it will become very clear that some decisions have to be made.[1] That is where you will find that your machine learning strategy has to be clearly defined and based on use cases with solid return on investment. Otherwise your key performance indicators that are directly tied back to those budget line items are going to show performance problems. Being planful helps make sure things work out. 

Over the last couple of weeks this Substack series “The Lindahl Letter” has dug into various topics including machine learning talent, machine learning pipelines, machine learning frameworks, and of course return on investment modeling. Now (like right now) it is time to dig into your ML strategy. Stop reading about it and just start figuring out how to do it. Honestly, I held off on this post until we had some foundational groundwork set up to walk around the idea conceptually and kick the tires on what your strategy might actually look like. No matter where you are in an organization from the bottom to the top you can begin to ideate and visualize what could be possible from a machine learning strategy. Maybe start with something simple like a strategy statement written in a bubble located in the middle of a piece of paper and work outward with your strategy. That can help you focus in on the path to a data-driven machine learning strategy based on a planful decision-making process.[2]

Part of your machine learning strategy must be about purpose, replication, and reuse. That is going to be at the heart of getting value back for the organization. Definable and repeatable results are the groundwork to predictable machine learning engagements. Machine learning is typically applied in production systems as part of a definable and repeatable process. That is how you get quality and speed. You have to have guardrails in place that keep things within the confines of what is possible for that model. Outside of that you must be clear on the purpose of using machine learning to do something for your organization. That strategy statement could be as simple as locate 5 use cases where at scale machine learning techniques could be applied in a definable and repeatable way.

Maybe your strategy starts out with a budget line item investing in the development of machine learning capabilities. Investment in training happens every year and is a pretty straightforward thing to do. Now you have part of it tagged to machine learning. From that perspective you could be walking down a path where you are doing it purely for employee engagement, because the team just really wants to do something cool and wants to leverage new technology. You may find yourself in a situation where the team really wants to do it and you can make that happen. Sure, they might figure out a novel way to use that energy and engagement to produce something that aligns to the general guiding purpose of the organization. Some of that is where innovation might drive future strategy, but it is better to have your strategy drive the foundations of how innovation is occurring in the organization. A myriad of resources about strategy exist and some of them are highly targeted in the form of online courses.[3]

From a budget line item to actually being operationalized you have to apply your machine learning strategy in a uniform way based on potential return on investment. After you do that you will know you are selecting the right path for the right reasons. Then you can begin to think about replication of both the results and process across as many applications as possible. Transfer learning both in terms of models and deployments really plays into this and you will learn quickly that after you have figured out how to do it with quality and speed, applying that to a suite of things can happen much quicker. That is the power of your team coming together and being able to deliver results. That is why going after replication across as many use cases as possible is worth the effort.

2. Do you even need an ML strategy?

Seeing the strategy beyond trees in the random forest takes a bit of perspective. Sometimes it is easier to lock in and focus on a specific project and forget about how that project fits into a broader strategy. Having a targeted focused ML strategy that is applied from the top down can help ensure the right executive sponsorship and resources are focused on getting results. Instead of running a bunch of separate efforts that are self-incubating it might be better to have a definable and repeatable process to roll out and help ensure the same approach can be replicated in cost effective ways for the organization. That being said… of course you need an ML strategy. 

Maybe an example of a solid ML strategy might be related to a cost containment or cost saving program to help introduce assistive ML products to allow a workforce to do things quicker with fewer errors. Executing that strategy would require operationalizing it and collecting data on the processes in action to track, measure and ensure positive outcomes.
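Executing that kind of strategy means measuring the process before and after the rollout. A toy sketch of the arithmetic, with entirely made-up numbers, for an assistive ML product that speeds up a task and cuts its error rate:

```python
def monthly_cost(tasks, minutes_per_task, error_rate, rework_minutes, hourly_rate):
    """Labor cost of a workflow: base handling time plus rework on errors."""
    base_minutes = tasks * minutes_per_task
    rework_minutes_total = tasks * error_rate * rework_minutes
    return (base_minutes + rework_minutes_total) / 60 * hourly_rate

# Illustrative numbers only: before vs. after an assistive ML rollout
# that trims handling time and reduces the error rate.
before = monthly_cost(tasks=10_000, minutes_per_task=6, error_rate=0.05,
                      rework_minutes=20, hourly_rate=30)
after = monthly_cost(tasks=10_000, minutes_per_task=4, error_rate=0.02,
                     rework_minutes=20, hourly_rate=30)
print(round(before - after), "dollars saved per month")  # 13000 dollars saved per month
```

The point is not the specific figures; it is that tracking and measuring requires agreeing on a cost model like this before the rollout so the outcome is provable afterward.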

Footnotes:

[1] Check out this article from February 2020 about KPIs and budgets https://hbr.org/2020/02/create-kpis-that-reflect-your-strategic-priorities 

[2] Interesting blog post from AWS https://aws.amazon.com/blogs/machine-learning/developing-a-business-strategy-by-combining-machine-learning-with-sensitivity-analysis/ 

[3] Here is an example of a course lecture you can freely watch right now https://www.coursera.org/lecture/deep-learning-business/2-2-business-strategy-with-machine-learning-deep-learning-0Jop8 

What’s next for The Lindahl Letter?

  • Week 5: Let your ROI drive a fact-based decision-making process
  • Week 6: Understand the ongoing cost and success criteria as part of your ML strategy
  • Week 7: Plan to grow based on successful ROI
  • Week 8: Is the ML we need everywhere now? 
  • Week 9: What is ML scale? The where and the when of ML usage
  • Week 10: Valuing ML use cases based on scale
  • Week 11: Model extensibility for few shot GPT-2
  • Week 12: Confounding within multiple ML model deployments
  • Week 13: Building out your ML Ops 
  • Week 14: My Ai4 Healthcare NYC 2019 talk revisited
  • Week 15: What are people really doing with machine learning?

I’ll try to keep the what’s next list forward looking with at least five weeks of posts in planning or review. If you enjoyed reading this content, then please take a moment and share it with a friend. 

My second Substack post went live

Well over at https://nelslindahl.substack.com/ my next post just went live today. 

Substack Week 2: Machine Learning Frameworks & Pipelines
Enter title… Machine Learning Frameworks & Pipelines
Enter subtitle… This is the nuts and bolts of the how in the machine learning equation

Ecosystems are beginning to develop related to machine learning pipelines. Different platforms are building out different methods to manage the machine learning frameworks and pipelines they support. Now is the time to get that effort going. You can go build out an easy-to-manage, end-to-end method for feeding model updates to production. If you stopped reading for a moment and actually went and started doing research or spinning things up, then you probably ended up using a TensorFlow Serving instance you installed, an Amazon SageMaker pipeline, or an Azure machine learning pipeline.[1] Any of those methods will get you up and running. They have communities of practice that can provide support.[2] That is to say the road you are traveling has been used before and used at scale. The path toward using machine learning frameworks and pipelines is pretty clearly established. People are doing that right now. They are building things for fun. They have things in production. At the same time all that is occurring in the wild, a ton of orchestration and pipeline management companies are jumping out into the forefront of things right now in the business world.[3]

Get going. One way to get going very quickly and start to really think about how to make this happen is to go and download TensorFlow Extended (TFX) from Github as your pipeline platform on your own hardware or some type of cloud instance.[4] You can just as easily go cloud native and build out your technology without boxes in your datacenter or at your desk. You could spin up on GCP, Azure, or AWS without any real friction against realizing your dream. Some of your folks might just set up local versions of these things to mess around and do some development along the way. 
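The core idea behind platforms like TFX is that a pipeline is a fixed, ordered set of stages that run the same way every run. A toy stand-in in plain Python (not TFX itself) that captures that definable, repeatable shape:

```python
# A toy stand-in for a real pipeline platform like TFX: each stage is a
# plain function that takes and returns the pipeline state, and the
# pipeline is just the ordered list of stages.
def ingest(state):
    state["rows"] = [(1, 2), (2, 4), (3, 6)]  # (feature, label) pairs
    return state

def train(state):
    # "Training" a one-parameter model y = w * x by least squares.
    num = sum(x * y for x, y in state["rows"])
    den = sum(x * x for x, _ in state["rows"])
    state["w"] = num / den
    return state

def evaluate(state):
    w = state["w"]
    state["max_error"] = max(abs(y - w * x) for x, y in state["rows"])
    return state

def run_pipeline(stages):
    state = {}
    for stage in stages:  # same stages, same order, every run
        state = stage(state)
    return state

result = run_pipeline([ingest, train, evaluate])
print(result["w"], result["max_error"])  # 2.0 0.0
```

Real platforms add the hard parts on top of this shape: artifact tracking, caching, validation, and deployment gates.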

Build models. You could of course buy a model.[5] Steps exist to help you build a model. All of the machine learning pipeline setup steps are rather academic without models that utilize the entire apparatus. One way to introduce machine learning to the relevant workflow based on your use case is to just integrate with an API to make things happen without having to set up frameworks and pipelines. That is one way to go about it and for some things it makes a lot of sense. For other machine learning efforts complexity will preclude using an out of the box solution that has a callable API. You would be surprised at how many complex APIs are being offered these days, but they do not provide comprehensive coverage for all use cases.[6] 
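Integrating via an API usually amounts to packaging workflow records into whatever request shape the service expects. A minimal sketch with a placeholder endpoint and an assumed payload format; real services each define their own request schema and authentication:

```python
import json

# Hypothetical endpoint and payload shape for illustration: real hosted
# prediction services each define their own request format and auth.
ENDPOINT = "https://ml.example.com/v1/predict"  # placeholder URL

def build_request(instances):
    """Package workflow records into a JSON prediction request."""
    body = json.dumps({"instances": instances})
    headers = {"Content-Type": "application/json"}
    return ENDPOINT, headers, body

url, headers, body = build_request([{"text": "invoice overdue"}])
print(body)  # {"instances": [{"text": "invoice overdue"}]}
```

The appeal is that the integration surface stays this small; the complexity trade-off is that you are limited to whatever the hosted model was built to do.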

What are you going to do with all those models? You are going to need to save them for serving. Getting setup with a solid framework and machine learning pipeline is all about serving up those models within workflows that fulfill use cases with defined and predictable return on investment models. 
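Saving a model for serving can be as simple as serializing the learned parameters plus enough metadata to identify the version later. A toy sketch using pickle and an in-memory buffer; production pipelines use purpose-built artifact formats (TensorFlow's SavedModel, for example) rather than raw pickle:

```python
import io
import pickle

# A minimal stand-in for a trained model: learned coefficients plus
# enough metadata to identify the artifact at serving time.
model = {"weights": [0.4, 1.7], "bias": -0.2, "version": "2021-07-01"}

# Save the artifact (an in-memory buffer here; normally a file or blob store).
buf = io.BytesIO()
pickle.dump(model, buf)

# Later, the serving side loads the artifact and answers requests.
buf.seek(0)
served = pickle.load(buf)

def predict(features, m=served):
    return sum(w * x for w, x in zip(m["weights"], features)) + m["bias"]

print(round(predict([1.0, 2.0]), 2))  # 3.6
```

The version field matters more than it looks: once multiple models are being served, knowing exactly which artifact answered which request is what makes the process definable and repeatable.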

From the point you implement, it is going to be a race against time to figure out when those models from the marketplace suffer an efficiency drop and some type of adjustment is required. You have to understand the potential model degradation and calculate at what point you have to shut down the effort due to return on investment conditions being violated.[7] That might sound a little bit hard, but if your model efficiency degrades to the point that financial outcomes are being negatively impacted you will want to know how to flip the off switch, and you might be wondering why that switch was not automated.
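Finding that shutoff point is simple arithmetic once you assume a decay rate. An illustrative sketch, with made-up numbers, that finds the first month where the model's decayed return falls below its running cost:

```python
# Illustrative sketch: a model's efficiency (fraction of its launch
# value still delivered) decays each month; find the first month where
# the monthly return drops below the monthly running cost.
def shutdown_month(monthly_value, monthly_cost, decay_per_month):
    efficiency = 1.0
    month = 0
    while monthly_value * efficiency >= monthly_cost:
        month += 1
        efficiency *= (1 - decay_per_month)
    return month  # first month where ROI conditions are violated

# Numbers are assumptions for illustration only: $10k/month of value at
# launch, $6k/month of running cost, 5% efficiency decay per month.
print(shutdown_month(monthly_value=10_000, monthly_cost=6_000,
                     decay_per_month=0.05))  # 10
```

In practice the decay rate is not known up front, which is exactly why the monitoring has to measure it and why automating the off switch is worth considering.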

Along the way some type of adjustment to a model or parameters is going to be required. I have talked about this before at length, but just to recap, the way I look at return on investment is pretty straightforward: take the final value of the ML effort, subtract the initial value, divide by the cost of investment, and multiply by 100%. That calculation gives you a positive or negative look at whether the return on investment is going to be there for you. At that point you are just following your strategy and thinking about the return on investment model.
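Written out as code, the calculation is just this (the dollar figures in the example are made up for illustration):

```python
def ml_roi_percent(initial_value, final_value, cost_of_investment):
    """Return on investment: (final - initial) / cost, as a percentage."""
    return (final_value - initial_value) / cost_of_investment * 100

# A model effort that grew $50k of value to $80k on a $20k investment.
print(ml_roi_percent(50_000, 80_000, 20_000))  # 150.0
```

A negative result from the same formula is the signal to revisit the effort or flip the off switch.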

So again strict return on investment modeling may not be the method that you want to use. I would caution against working for long periods without understanding the financial consequences. At scale, you can very quickly create breakdowns and other problems within a machine learning use case. It could even go so far that you may not find it worthwhile for your business case. Inserting machine learning into a workflow might not be the right thing to do and that is why calculating results and making fact based decisions is so important. 

Really any way you do it in a planful way that’s definable and repeatable is going to work out great. That is fairly easy to say given that inserting fact-based decision making and being willing to hit the off switch if necessary help prevent runaway problems from becoming existential threats to the business. So having a machine learning strategy, doing things in a definable and repeatable way, and being ruthlessly fact based is kind of where I’m suggesting you go.

Obviously, you have to take everything that I say with a grain of salt; you should know upfront that I’m a big TensorFlow enthusiast. That’s one of the reasons why I use it as my primary example, but it doesn’t mean that it’s the absolute right answer for you. It’s just the answer that I look at most frequently and always look to first before branching out to other solutions. That is always based on the use case, and I avoid letting technology search for problems at all costs. You need to let the use case and the problem at hand drive the solution instead of applying solutions until one works or you give up.

At this point in the story, you are thinking about or beginning to build this out and you’re starting to get ramped up. The excitement is probably building to a crescendo of some sort. Now you need somewhere to manage your models. You may need to imagine for a moment that you do have models. Maybe you bought them from a marketplace and skipped training altogether. It’s an exciting time and you are ready to get going. So in this example, you’re going from just building (or having recently acquired) a machine learning model to doing something. At that moment, you are probably realizing that you need to serve that model out over and over again to create an actual machine learning driven workload. Not only does that mean that you’re going to need to manage those models, but also you are going to need to serve out different models over time.

As you make adjustments and corrections that introduce different modeling techniques you get more advanced with what you are trying to implement. One of the things you’ll find is that even the perfect model that was right where you wanted it to be when you launched is slowly waiting to betray you and your confidence in it by degrading. You have to be ready to monitor and evaluate performance based on your use case. That is what lets you make quality decisions about model quality and how outcomes are being impacted.
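Being ready to evaluate performance implies some kind of ongoing monitor. A minimal sketch that tracks rolling accuracy over recent predictions and flags degradation below a chosen floor (the window size and floor value here are arbitrary assumptions):

```python
from collections import deque

# Sketch of watching a served model for degradation: keep a rolling
# window of prediction outcomes and flag when accuracy sinks below a
# floor. Window and floor values are arbitrary for illustration.
class DriftMonitor:
    def __init__(self, window=100, floor=0.90):
        self.outcomes = deque(maxlen=window)  # True = correct prediction
        self.floor = floor

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def degraded(self):
        if not self.outcomes:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.floor

monitor = DriftMonitor(window=10, floor=0.9)
for hit in [1] * 8 + [0] * 2:   # 8 correct, 2 wrong in the window
    monitor.record(hit, 1)
print(monitor.degraded())  # True (rolling accuracy 0.8 < floor 0.9)
```

In a real deployment the hard part is getting the `actual` labels back at all; for many use cases the ground truth arrives days or weeks after the prediction.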

I have a few takeaways to conclude this installment of The Lindahl Letter. You have to remember that at this point machine learning models and pipelines are pretty much democratized. You can get them. They are out in the wild. People are using them in all kinds of different ways. You can just go ahead and introduce this technology to your organization with relatively little friction.

  • I’m still amazed that this technology is freely available.
  • Frameworks are well developed and have been pressure tested at scale.
  • Yeah, people have proven it works.
  • The process has been well documented and the path is clear.
  • Pipelines and automation save time. Fewer ML team members are needed to deliver this way.
  • A lot of the first-time gotchas are managed away by leveraging community knowledge and practice.
  • Serving multiple models and model management is hard.
  • None of this replaces the deep work required to wrangle the data.

Footnotes:

[1] Links to the referenced ML pipelines: https://www.tensorflow.org/tfx, https://aws.amazon.com/sagemaker/pipelines/, or https://docs.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines
[2] One of the best places to start to learn about machine learning communities would be https://www.kaggle.com/
[3] Read this if you have a few minutes… it is worth the read https://hbr.org/2020/09/how-to-win-with-machine-learning
[4] https://github.com/tensorflow/tfx
[5] This is one of the bigger ones https://aws.amazon.com/marketplace/solutions/machine-learning
[6] This is one example of services that are open for business right now https://cloud.google.com/products/ai 
[7] This is a wonderful site and this article is spot on https://towardsdatascience.com/model-drift-in-machine-learning-models-8f7e7413b563   

What’s next for The Lindahl Letter?

  • Week 3: Machine learning Teams
  • Week 4: Have an ML strategy… revisited
  • Week 5: Let your ROI drive a fact-based decision-making process
  • Week 6: Understand the ongoing cost and success criteria as part of your ML strategy
  • Week 7: Plan to grow based on successful ROI
  • Week 8: Is the ML we need everywhere now? 
  • Week 9: What is ML scale? The where and the when of ML usage
  • Week 10: Valuing ML use cases based on scale
  • Week 11: Model extensibility for few shot GPT-2
  • Week 12: Confounding within multiple ML model deployments

I’ll try to keep the what’s next list forward looking with at least five weeks of posts in planning or review. If you enjoyed reading this content, then please take a moment and share it with a friend.

Working on my first Substack post

Posting on Substack seems to require three elements to be completed before submitting the post: 1) enter title, 2) enter subtitle, and 3) some type of content. They have not offered any type of content length guidance. I guess you can write as much or as little as you want. Right now I’m working on my first Substack post and have drafted the three required elements for that post. I’m going to try to keep these posts at a conversational level with a bit of depth, but that is going to require some editing. My focus is on reworking the content for the new forum. I’m going to publish the content here on the weblog and on Substack. The inside look of how the content was created and any changes or modifications will live here. It feels like Substack is a very fleeting medium that has maximum engagement at the moment of publication. We will see what happens throughout the next 12 weeks of content creation. That will either be true or some type of momentum will build up along the way.

Here are the details of my first Substack publication build out…

Enter title… Machine Learning Return On Investment (MLROI)
Enter subtitle… A brief look at how understanding ROI helps unlock ML use case success

Content: 292 words at the start and ~750 words as a finished product

Be strategic with your ML (machine learning) efforts. Seriously, those 6 words should guide your next steps along the ML journey. Take a moment and let that direction (strong guidance) sink in and reflect on what it really means for your organization. You have to take a moment and work backward from building strategic value for your organization to the actual ML effort. Inside that effort you will quickly discover that operationalizing ML efforts to generate strategic value will end up relying on a solid return on investment plan. Taking actions within an organization of any kind at the scale ML is capable of engaging without understanding the potential return on investment or potential loss is highly questionable. That is why you have to be strategic with your ML efforts from start to finish.

That means you have to set up and run a machine learning strategy from the top down. Executive leaders have to understand and be invested in guiding things toward the right path (a strategic path) from the start. Make an effort to just start out with a solid strategy in the machine learning space. It might sound a lot harder than it is in practice. You don’t need a complicated center of excellence or massive investment to develop a strategy. Your strategy just needs to be linked to the budget and hopefully to a budget KPI. Every budget involves spending precious funds, and keeping a solid KPI around machine learning return on investment will help ensure your strategy ends up on a strong financial footing for years to come. All spending should translate to a key performance indicator of some type. That is how your results will let you confirm that the funding is being spent well and that solid decision making is occurring. You have to really focus and ensure that all spending is tied to that framework when you operationalize the organization’s strategic vision to be aligned financially to the budget.
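Tying spend to a KPI can be sketched as tracking ROI per budget line item. The line item names and dollar figures below are invented for illustration:

```python
# Illustrative only: each budget line item carries its spend and the
# value it returned, so the KPI is simply ROI tracked per line item.
budget_lines = {
    "ml_training": {"spend": 40_000, "value_returned": 52_000},
    "ml_tooling":  {"spend": 25_000, "value_returned": 20_000},
}

def kpi_report(lines):
    """ROI percentage per budget line item."""
    return {name: round((v["value_returned"] - v["spend"]) / v["spend"] * 100, 1)
            for name, v in lines.items()}

print(kpi_report(budget_lines))  # {'ml_training': 30.0, 'ml_tooling': -20.0}
```

A report like this makes the budget conversation concrete: one line item is earning its keep and the other needs a decision.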

That means that the machine learning strategy you are investing in has to be driven to achieve a certain return on investment tied directly to solid budget level key performance indicators. You might feel like that line has been repeated. If you noticed that repetition, then you are paying attention and well on your way to future success. That key performance indicator related tieback is only going to happen with a solid machine learning strategy in place. It has to be based on prioritizing and planning for return on investment. Your machine learning pipelines and frameworks have to be aligned toward that goal. That is ultimately the cornerstone of a solid strategic plan when it comes to implementing machine learning as part of a long term strategy.

Be ready to do things in a definable and repeatable way. Part of executing a strategy with quality is doing things in a definable and repeatable way. That is the essence of where quality comes from. You have to know what plan is being executed and focus in and support the plan in ways that make it successful at your desired run rate. In terms of deploying machine learning efforts within an enterprise you have to figure out how the technology is going to be set up and invested in and how that investment is going to translate to use cases with the right return on investment.

Know the use case instead of letting solutions chase problems. Building up technology for machine learning and then chasing use cases is a terrible way to accidentally stumble on a return on investment model that works. The better way forward is to know the use cases and have a solid strategy to apply your technology. That means finding the right ML frameworks and pipelines to support your use cases in powerful ways across the entire organization.

This is a time to be planful. Technology for machine learning is becoming more and more available and plentiful. Teams from all over the organization probably want to try proofs of concept, and vendors are bringing in a variety of options. Both internal and external options are really plentiful. It is an amazing time for applied machine learning. You can get into the game in a variety of ways rapidly and without a ton of effort. Getting your implementation right and having the data, pipeline, and frameworks aligned to your maximum possible results involves planning and solid execution.

Your ML strategy cannot be a back of the desk project. You have to be strategic. It has to be part of a broader strategy. You cannot let proofs of concept and vendor plays drive the adoption of machine learning technology in your organization. That would mean the overall strategic vision is not defined. When adoption happens that way, it is generally because something happened to show a solid return on investment, or the right use case was selected by chance from the bottom up in the organization. That is not a planful strategy.

Know the workflow you want to augment with ML and drive beyond the buzzwords to see technology in action. You really have to know where in the workflow the technology fits and which pipelines are going to enable your use cases to provide that solid return on investment.

At some point along the machine learning journey you are going to need to make some decisions…

Q: Where are you going to serve the machine learning model from?
Q: Is this your first model build and deployment?
Q: What actual deployments of model serving are being managed?
Q: Are you training on-premises, or calling an API for model serving in your workflow?
Q: Have you elected to use a pretrained model via an external API call?
Q: Did you buy a model from a marketplace or are you buying access to a commercial API?
Q: How long before the model efficiency drops off and adjustment is required?
Q: Have you calculated the point of no return, where declining model efficiency pushes ROI below break-even?
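Those last two questions are answerable with straightforward arithmetic once you put numbers on benefit, drift, and running cost. Here is a hedged sketch of the break-even question, where the decay rate and dollar figures are invented assumptions rather than measurements from any real model:

```python
# Hypothetical sketch of the "point of no return" question above:
# find the month where a decaying model's benefit drops below its
# fixed running cost. The decay rate and dollar figures below are
# invented assumptions for illustration only.

def months_until_break_even(
    initial_monthly_benefit: float,
    monthly_decay: float,  # fraction of benefit lost per month to drift
    monthly_cost: float,   # serving + monitoring spend per month
) -> int:
    """Months until the monthly benefit no longer covers the monthly cost."""
    month = 0
    benefit = initial_monthly_benefit
    while benefit > monthly_cost:
        month += 1
        benefit *= (1 - monthly_decay)
    return month

# A model assumed to be worth $40k/month, losing 5% effectiveness per
# month, with $25k/month in running costs:
print(months_until_break_even(40_000, 0.05, 25_000))  # prints 10
```

Running the numbers like this, even with rough assumptions, turns "the model will drift eventually" into a retraining deadline you can plan and budget against.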

What’s next for The Lindahl Letter?

Week 2: Machine Learning Frameworks & Pipelines
Week 3: Machine Learning Teams
Week 4: Have an ML strategy… revisited
Week 5: Let your ROI drive a fact-based decision-making process
Week 6: Understand the ongoing cost and success criteria as part of your ML strategy
Week 7: Plan to grow based on successful ROI
Week 8: Is the ML we need everywhere now?
Week 9: What is ML scale? The where and the when of ML usage
Week 10: Valuing ML use cases based on scale
Week 11: Model extensibility for few shot GPT-2
Week 12: Confounding within multiple ML model deployments

I’ll try to keep the “what’s next” list forward-looking, with at least five weeks of posts in planning or review.

Really digging into that content roadmap

Never forget that the gift of happiness begets more happiness. Tonight I’m spending my time really digging into that content roadmap that was mentioned yesterday. At the end of last year, I was working on version 12 of a talk titled, “Applied ML ROI – Understanding ML ROI from different approaches at scale.” Instead of working on the 13th version of that talk, my focus has turned to something else titled, “The scale problem: where and when to use ML.” Outside of writing the bulk of that main talk for 2021, I’m focused on a few featured topics that will receive the bulk of my attention in the next few months.

1. Is the ML we need everywhere now? 
2. What is ML scale? The where and the when of ML usage
3. Valuing ML use cases based on scale
4. Model extensibility for few shot GPT-2
5. Confounding within multiple ML model deployments

That batch of fresh topics enumerated above will receive some attention this year. Over the next seven weeks or so, I’m going to work on some Substack posts based on that mythic version 12 talk.

1. Machine Learning Return On Investment (ML/ROI)
2. Machine Learning Frameworks & Pipelines
3. Machine Learning Teams
4. Have an ML strategy…
5. Let your ROI drive a fact-based decision-making process
6. Understand the ongoing cost and success criteria as part of your ML strategy
7. Plan to grow based on successful ROI