Nels Lindahl — Functional Journal

A weblog created by Dr. Nels Lindahl featuring writings and thoughts…

Ai4 Healthcare NYC 2019

This is a recording of my blog from November 10, 2019

Tomorrow is the big day, and I thought it would be a good idea to write out my speech instead of delivering it via spoken word. Sitting at the airport, it seemed like a better idea than randomly talking to the fifty people sitting near me. Writing is comfortable for me, and I figured I could sit down and produce a good pass at the speech within thirty minutes. Every major speech I have given is broken into chunks: I have an outline and topics to address during the speech, but the exact text changes every time. That is a combination of my learning as I go and my inability to repeat myself exactly. Reading from a teleprompter would be one way to go about it, but that is a boring way to talk to people. For me it lacks authenticity and does not in any way reflect an interaction with the audience; the teleprompter reads the same with or without a crowd, and nothing on it ever changes. The following words are my first real cut at writing out the talk on machine learning that I’m delivering these days to anybody who will listen.

——-

Dr. Nels Lindahl here. I’m a director of clinical decision systems at a Fortune 10 company. Today we are going to talk about figuring out applied machine learning: building frameworks and teams to operationalize machine learning at scale. We are going on a journey for the next 30 minutes, from the start of the process to the end of the process.

That being said… Truly thinking about machine learning holistically is the hard part of threading the needle on this topic.

#1 Where does the talent come from?

Personally, I believe you can build the talent from within your organization. I made a mental note of your reactions to that claim. Throughout my career I have a proven track record of helping grow internal talent. One of my proudest accomplishments is seeing somebody get promoted. Go out and start building out the toolkits of the people that you have. Really take the time to invest in them growing and developing as teammates and as individual contributors.

We are in the golden age of learning about machine learning. More training than you can possibly consume now exists online, in a variety of different forms. One of my favorites is the hands-on labs that are now available; the platform I have used the most is Coursera. People have built out well-tooled examples of how to do machine learning. Not only can you read about it, but you can get into examples and kick the tires. That is the thing that has drawn me to TensorFlow since the product launched. So many people have been so generous with their knowledge, skills, and abilities. They are sharing the keys to the machine learning kingdom online in some pretty easy to access classes, lectures, and even a few certificates. I have taken well over fifty courses; you can see them on my LinkedIn profile, which will show you which ones I invested my own time in completing.

Sometimes building internal teams is just not fast enough. It takes time to help internal talent develop world class skills in machine learning or anything for that matter. I recognize that is a long-term goal. That is where you have a few options to start looking for ways to supplement talent. One of those ways is to hire contractors and have them help you kickstart your endeavor. Another way is to find the right product or company to help you get going fast. Several companies are doing that right now and some of them can be impactful for your organization.

Typically, the data sources in an organization are not well indexed with clearly mapped features and associations. Even finding off-the-shelf data sources is a real challenge. For the most part, the ones that people use were created to be used that way; those data sets did not occur naturally in the wild. Even making custom-tailored synthetic datasets can be a challenge for an organization that is trying to operationalize ML at scale. That is why using external products to manage the data, and even accessing APIs, requires planning and sustained dedication. The data going to the APIs must be consistent; constantly changing data streams are a nightmare to manage internally or externally.

That might have been a lot to consider all in one stretch of thought, but it will all come into context the first time building a team to solve an ML problem becomes a necessity. My answer to where the talent comes from involves blending great professionals together over time to create high-functioning teams. That may involve hiring in key skill sets to help supplement a team, or investing in training the team if enough ramp-up time exists. The shorter the ramp-up time, the greater the need to quickly bring in external talent.

#2 How do you get the talent to work together?

Now that we have talked about where the talent comes from and how to think about investing in your teams, let’s switch gears and talk about how to get the teams to work together. This is one of those things that is much easier to talk about than to manage in practice. You can think about the mantra: let your leaders lead, let your managers manage, and let your employees succeed. That works well enough when you have agile teams that self-organize and rapidly get work done. If that is where you are sitting right now, then congratulations, and appreciate what you have.

Teams are about how the different players work together. I try to think about machine learning engagements as having two key pillars. First, you need to figure out who on the team has the deep knowledge of the product, the data, and how the data relates to the customer journey. This is either going to be obvious or hard. Sometimes the folks with the greatest institutional knowledge of the data are key SMEs who play an impactful role; other times they are buried deeper in the organization in an analyst role, or they have moved on to another role.

Second, take that person with deep knowledge and help them work with the machine learning expert you found. Pairing these two together is going to be the most critical linchpin of what you are doing. Most organizations do not have data structures that were architected from the start to work for machine learning. Figuring out the right places to start, what data to label, and what relates to what is really the beginning of the journey. This is one of the reasons why people with full stack machine learning skills are so important. What does that even mean, full stack machine learning skills? I can walk into your organization, set up TensorFlow, and even get the team sharing some Jupyter notebooks today. Having the right feeds, having the right machine learning hardware, and having access to the right production-side infrastructure to swiftly move data without crushing or breaking things is where full stack skills are essential.

Maybe truly agile teams are supposed to be self-organizing, but that is probably not just going to happen the first time out the gate. Finding a common or shared purpose sounds a lot easier than it really is in practice. Getting people to self-organize around that common or shared purpose probably requires some type of ground rules or spark.

Sometimes high functioning teams just embrace the challenge and work to knock down any barriers or obstacles they might face. Most teams do not have that level of dedication, persistence, or fortitude. Typically, the project needs or just a general business problem brings a group together to take some type of action. Managing during those types of situations is always interesting and generally includes trying to bring people with diverse skill sets together.

That covers two types of teams you will encounter: high performing teams that are already assembled and teams that come together based on a specific business problem. Outside of those two common scenarios, the other type of talent situation you will face might very well be a solution chasing a problem. It happens now more than ever when the market is saturated with open source projects that let people jump in and start working with complex tools. The next step in that pattern is wanting to do something with that new and exciting tooling. To that end, you may find a solution just waiting for a problem to tackle. However, it might not be the right solution or even remotely close to the course of action that should be taken.

Getting talent to work together, for me, revolves around the business problem and what the team is trying to achieve. It is hard to rally around an end goal that is nebulous or that has been pragmatically co-opted into something other than a resolution to the business problem in question.

We should probably jump in and spend a little bit of time on understanding the tooling necessary to allow the machine learning expert to work with the team in a productive way. You can probably tell by now that my preference is for using something robust like TensorFlow to dig in and start doing machine learning at scale. You could also just start out with log files and dig in with an off-the-shelf product like the ML Toolkit from Splunk. That is an example of a way to open the door for the team to start using a common platform to get things done.

#3 What are these workflows and why do they matter?

Ok we talked about where the talent is going to come from and how to start thinking about getting the team to work together. The next questions should be related to the workflows that exist where machine learning could be used. I generally bucket the workflows into 4 distinct categories: streams of data, warehoused data, live transactions, and bulk/batch jobs. Each one of these workflows requires a different type of machine learning approach. You might think that is a stretch, but it is not. The effort required to apply machine learning to a streaming set of data is much greater than for working with static data. Just because you can spin up machine learning on a stream does not mean the model will be accurate or efficient.

Really solid trained production models that are fast and accurate are valuable. Seriously, that is where you want to be, but it does not happen by accident. In a streaming data scenario, you must have a method to train and work on models and a method to load operational models without disrupting the flow of data.
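A minimal sketch of that load-without-disruption pattern, in plain Python with stand-in models (the callables here are hypothetical placeholders, not anything from a real production system):

```python
import threading

class ModelServer:
    """Serve predictions from a current model while new models train offline.

    Promotion is a single reference swap guarded by a lock, so scoring
    traffic never pauses while a retrained model replaces the old one.
    """

    def __init__(self, model):
        self._lock = threading.Lock()
        self._model = model

    def predict(self, features):
        # Grab a local reference so a concurrent swap cannot affect this call.
        with self._lock:
            model = self._model
        return model(features)

    def promote(self, new_model):
        # Atomically replace the serving model once offline training finishes.
        with self._lock:
            self._model = new_model

# Hypothetical "models": any callable that scores a feature vector.
v1 = lambda x: sum(x)          # stand-in for the current production model
v2 = lambda x: sum(x) * 2      # stand-in for a freshly retrained model

server = ModelServer(v1)
print(server.predict([1, 2, 3]))   # scored by v1 -> 6
server.promote(v2)                 # hot swap, no downtime for the stream
print(server.predict([1, 2, 3]))   # scored by v2 -> 12
```

Real streaming stacks layer versioning, validation, and rollback on top of this, but the core idea is the same: the stream only ever sees one atomic reference to "the" model.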

Dealing with warehoused data is easier, but sometimes you forget about speed when you are not dealing with the stream. You can allow a machine learning model to take a lot more time. For example, the models used for visual processing in an automobile are a lot more speed critical than the models used on a history of major league baseball statistics. This is where the team can really grow and develop around the rather static data sets it is looking to evaluate.

Live transactions are one of the more fun parts of the process. The workflow involved in a transaction is normally very clear. A stream is more fluid and less of a pure pattern than a transaction-based view. A video stream is always going to be more variable than a financial transaction on a website, but both have real opportunities for machine learning.

Ok we are really starting to get into some end to end thinking about how you are going to operationalize machine learning models at scale within your organization. You have started to think about your team, how the team is going to work together, and some workflows that could be sources of data to do something with. The next step in our attack today is going to be looking at some very specific problems you could tackle with machine learning.

The patterns we need to make it work are everywhere. Now is the time to just embrace that complexity; the interconnected nature of the digital world creates a scenario where workflows exist all over the place. Workflows exist for all sorts of business processes. Sometimes they are set up with good intentions, and sometimes they just crystallize out of need. Bolting some type of ML onto a workflow sounds like fun. It is one way of trying to use an advanced process to do something. It could be a recommendation system, an anomaly detector, or even a pattern breaking adversarial setup. Things happen within workflows in definable and repeatable ways. That is pretty much the right recipe to jump in and work with some type of machine learning algorithm. Yeah, that might read like a solution chasing a problem, and that very well could be the case, or it could be an opportunity waiting to be discovered.

Breaking potential workflow types into buckets gives you content streams, mining static content, transaction-based work, and bulk/batch processes. Each of these buckets includes different workflow challenges in terms of ML implementations. Obviously, mining static content is the place where a lot of teams start. It is not a moving target, and time is probably on your side to dig in and figure out exactly how things are going to work. You have plenty of training time, and your model can be applied and tweaked. Perhaps the opposite of that bucket is trying to engage applied ML on content streams. Your model needs to be ready and the process must move swiftly. Anything that happens within a stream must be fluid enough to allow volume to continue to flow without creating backlogs or latency. The same type of argument holds true for transaction-based workflows, but you get a little more headroom on a per-transaction basis to allow the model to complete its work.
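As a rough illustration of how those buckets translate into deployment choices, here is a Python sketch. The latency budgets are invented, illustrative numbers (not benchmarks); the point is only that the tighter the budget, the closer to the data the model has to live:

```python
# Hypothetical latency budgets per workflow bucket, in milliseconds.
# These numbers are assumptions for illustration, not measured figures.
LATENCY_BUDGET_MS = {
    "stream": 10,        # score in-flight; no backlog or added latency allowed
    "transaction": 250,  # a little more headroom per transaction
    "static": 60_000,    # mining warehoused data; time is on your side
    "batch": 3_600_000,  # bulk/batch jobs; throughput matters more than latency
}

def pick_deployment(workflow: str) -> str:
    """Map a workflow bucket to a deployment style that fits its budget."""
    budget = LATENCY_BUDGET_MS[workflow]
    if budget <= 10:
        return "in-memory model on the stream"
    if budget <= 500:
        return "co-located model behind the transaction service"
    return "offline scoring job"

print(pick_deployment("stream"))       # in-memory model on the stream
print(pick_deployment("batch"))        # offline scoring job
```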

#4 What are problems you can solve with ML?

If your team is starting to dig into using machine learning models to do something, then you will quickly begin to think about recommendations, detection, sorting, and assistive use cases. You will find that machine learning in the wild is going to be all about the data and the use case. Do you have a source of data that is dependable and reliable, and does that data support the thing you want to do with it? Recently, it feels like a lot more assistive-type ML algorithms are being built to help speed up workflows and reinforce processes in the workplace.

Making a recommendation is one problem that ML algorithms tackle well. Within a workflow that involves purchasing things, making recommendations is a useful thing to do and can be very powerful. Really dialing in recommendations to have them be targeted, insightful, and useful can make a solid recommendation engine highly successful. Like anything else, it must be tuned and maintained or diminishing returns will occur. You can only recommend the same thing so many times to the same user before it gets ignored.
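To make the recommendation idea concrete, here is a toy co-occurrence recommender in plain Python. The baskets and item names are invented for the example, and a production engine would be far more sophisticated; note how dropping already-shown items guards against recommending the same thing until it gets ignored:

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(baskets):
    """Count how often each pair of items is purchased together."""
    pairs = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(item, pairs, already_seen=()):
    """Rank co-purchased items, dropping anything already shown to the user."""
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    for seen in already_seen:
        scores.pop(seen, None)
    return [name for name, _ in scores.most_common(3)]

# Hypothetical purchase baskets.
baskets = [["gloves", "hat"], ["gloves", "hat", "scarf"], ["gloves", "scarf"]]
pairs = build_cooccurrence(baskets)
print(recommend("gloves", pairs))                        # ['hat', 'scarf']
print(recommend("gloves", pairs, already_seen=["hat"]))  # ['scarf']
```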

A ton of detection systems exist. Some of them are related to computer vision within the automotive space, and a lot of them are built around working with images. These are some of the most interesting use cases out in the wild right now.

Building out awesome sorting machine learning algorithms always creates the possibility for fun. One of the best use cases for sorting machine learning must be the reduction of unwanted emails. That type of effort has almost worked too well recently.
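The email example can be sketched as a tiny Naive Bayes spam filter in plain Python; the training messages below are made up, real filters use far richer features and training sets, and equal class priors are assumed for simplicity:

```python
import math
from collections import Counter

def train(spam_docs, ham_docs):
    """Fit per-class word counts for a tiny Naive Bayes spam filter."""
    spam = Counter(w for d in spam_docs for w in d.split())
    ham = Counter(w for d in ham_docs for w in d.split())
    vocab = set(spam) | set(ham)
    return spam, ham, vocab

def is_spam(message, spam, ham, vocab):
    """Compare log-likelihoods under each class, with add-one smoothing
    so unseen words do not zero out a class (equal priors assumed)."""
    spam_total, ham_total = sum(spam.values()), sum(ham.values())
    log_spam = log_ham = 0.0
    for w in message.split():
        log_spam += math.log((spam[w] + 1) / (spam_total + len(vocab)))
        log_ham += math.log((ham[w] + 1) / (ham_total + len(vocab)))
    return log_spam > log_ham

# Hypothetical training messages.
spam, ham, vocab = train(
    ["win a free prize now", "free money now"],
    ["meeting notes for the team", "see the project notes"],
)
print(is_spam("free prize", spam, ham, vocab))       # True
print(is_spam("project meeting", spam, ham, vocab))  # False
```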

One of the exciting use cases developing out in the wild right now happens to be related to assistive process and other forms of automation. It is amazing what can be built and utilized right now to help make things happen quicker and to take definable and repeatable processes and make them occur without effort.

#5 What exactly is an ML strategy?

Ok we talked talent, teams, workflows, and problems. Now it is time to dig into your ML strategy. I held off on this one until we had some foundational groundwork set up. Part of your machine learning strategy must be about purpose, replication, and reuse. Machine learning is typically applied in production systems as part of a definable and repeatable process. That is how you get quality and speed. You have guardrails in place that keep things within the confines of what is possible for that model. Beyond that, you must be clear on the purpose of using machine learning to do something for your organization.

You could do it purely for employee engagement reasons, because the team just really wants to do something cool. You may find yourself in a situation where the team really wants to do it and you can make that happen. Sure, they might figure out a novel way to use that energy and engagement to produce something that aligns to the general guiding purpose of the organization. Some of that is where innovation might drive future strategy, but it is better to have your strategy drive the foundations of how innovation is occurring in the organization.

Apply your philosophy in a uniform way based on potential ROI. After you do that, you will know you are selecting the right path for the right reasons. Then you can begin to think about replicating both the results and the process across as many applications as possible. Transfer learning really plays into this, and you will quickly learn that after you have figured out how to do it with quality and speed, applying that to a suite of things can happen much faster. That is the power of your team coming together and being able to deliver results.

Seeing the strategy beyond the trees in the random forest takes a bit of perspective. Sometimes it is easier to lock in and focus on a specific project and forget about how that project fits into a broader strategy. Having a targeted, focused ML strategy that is applied from the top down can help ensure the right executive sponsorship and resources are focused on getting results. Instead of running a bunch of separate efforts that are self-incubating, it might be better to have a definable and repeatable process to roll out and help ensure the same approach can be replicated in cost effective ways for the organization.

An example of a solid ML strategy might be a cost containment or cost savings program that introduces assistive ML products to allow a workforce to do things quicker with fewer errors. Executing that strategy would require operationalizing it and collecting data on the processes in action to track, measure, and ensure positive outcomes.

#6 What do you mean by ML vectors?

Now that we have kicked the tires on machine learning strategy we need to really dig into the hard stuff. I know everything else that I have been talking about today has been leading up to this. We are really going to dig into machine learning vectors. This is the most technical part of machine learning we are going to talk about today and it can be a little hard to begin to attack. That is why we are going to start small and work our way up to the hard things. Using machine learning at scale means that you have figured out how to use the model within your technology stack and you know how and where to apply that model.

This is where vectors really come into play. If you have a stream of data and you need to call an API to an external product that verifies the images, that call is an important part of the vector. In this case your vector is an external approach point: the API jumps outside of your workflow to apply a machine learning model. At that point the model executes, the image is identified, and data is passed back. Hopefully that is happening in near real time, but if you are sending huge images to the external source, you have transport latency and model work time to account for. Your lightning fast stream might be slowing down rapidly.

If your vector had been internal and lightweight enough to run as the image is stored, or even to process in memory as the stream occurs, you get to usable information a lot faster.

Ok — let’s try to explain this in a different way. Data arrives in a lot of different ways. The method of transport and where the data is being transported form a vector. In this very specific use of the word vector, I am trying to describe how ML would be applied to some type of incoming data. This could be during an API exchange, within an API call inside a process, on an internal cloud using a trained model, reaching across to an external cloud, on premise within your datacenter, or even within an edge computing instance. At the edge, you will need to make sure your model is well trained and deployed in a lightweight way to help drive your use case without adding a bunch of computing time to what should be your fastest use case.
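One way to sketch the vector idea in Python: the same scoring request dispatched over an external-API vector versus an in-memory vector. Everything here is a hypothetical stand-in; the external round trip is simulated with a short sleep rather than a real network call:

```python
import time

def external_api_verify(image_bytes):
    """Stand-in for an external image-verification API: transport latency
    plus remote model work time dominate the cost (simulated via sleep)."""
    time.sleep(0.05)  # pretend network round trip
    return {"label": "verified", "source": "external-api"}

def in_memory_verify(image_bytes):
    """Stand-in for a lightweight model running inside the stream process:
    no transport cost, only model work time."""
    return {"label": "verified", "source": "in-memory"}

def score(image_bytes, vector):
    """Dispatch to a model based on the vector the data arrives on,
    and record how long the whole call took."""
    handlers = {"external-api": external_api_verify,
                "in-memory": in_memory_verify}
    start = time.perf_counter()
    result = handlers[vector](image_bytes)
    result["latency_s"] = time.perf_counter() - start
    return result

fast = score(b"...", "in-memory")
slow = score(b"...", "external-api")
print(fast["source"], slow["source"])  # in-memory external-api
```

The measured `latency_s` on the external vector includes the simulated transport time, which is exactly the cost that can slow a fast stream down.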

Figuring out the right way to apply an ML algorithm requires planning. Knowing how the application will work and where it will be is important. Knowing the vector makes a big difference on the flow of data and what can be done. In my mind that makes it foundational to thinking about how to design and deploy an ML model in production. Dealing with streaming data makes the directionality of the vector even more important to consider.

#7 What is a compendium of key performance indicators?

Based on time, we can only spend a few moments on the compendium of KPIs and how important it is to the overall execution of your machine learning strategy. You must be able to present the results of machine learning within the organization in terms of return on investment and other key drivers that show criticality to the business. Honestly, you must be able to think about your machine learning dashboard as something so simple to consume that in 30 seconds the overall status is intuitive to understand. That is good reporting. It takes time and a lot of hard work to pull off, but it is impactful when you do get to this point in the execution. I personally begin with the end in mind as part of my strategy, and while people are executing, I work to figure out how to compile the measures and how to tell the story. Sometimes the teams that are directly involved in the work do not have the time to dig in and build dashboards as they are building and tuning models. Getting the data sorted and ready to use is by itself a major milestone in your machine learning journey. Pairing that with a dashboard at the same time can feel overwhelming to a team.

Everything about figuring out how to use ML in your organization must come down to a strategic plan. You need to figure out how to visualize that strategic plan in action and that will probably involve having a compendium of key performance indicators that show the value within delivering your ML solution. Beginning with that outcome in mind is important. It helps to frame the entire solution being operationalized. At a strategic level, the things being operationalized by teams must contribute to specific budget-based outcomes. That inherently connects expenditure to benefit throughout the organization in a measurable way.
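As a small illustration of a compendium rolled up for that 30-second read, here is a Python sketch. The KPI names, targets, and actuals are invented for the example; the point is tying each operationalized effort to a measurable, budget-based outcome:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """One entry in the compendium: a named measure with a target."""
    name: str
    target: float
    actual: float

    @property
    def on_track(self) -> bool:
        return self.actual >= self.target

def compendium_status(kpis):
    """Roll the compendium up to the one line a 30-second dashboard needs."""
    on_track = sum(1 for k in kpis if k.on_track)
    return f"{on_track}/{len(kpis)} KPIs on track"

# Hypothetical KPIs connecting the ML program to budget-based outcomes.
kpis = [
    KPI("cost avoided per month ($k)", target=50, actual=62),
    KPI("manual reviews deflected (%)", target=30, actual=24),
    KPI("model precision (%)", target=95, actual=97),
]
print(compendium_status(kpis))  # 2/3 KPIs on track
```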

From ideation to being operationalized, ML benefits must be mapped to a KPI. All those things wrapped up together in a compendium help present a picture of what strategic planning is contributing to specific action, and all of that gets defined in the form of clear and understandable results. Bringing those examples together within a compendium is the right way to ensure the organization has a clear view of what is going on. Understanding how the elements of the compendium of KPIs work together is what helps people visualize the strategic plan the organization is executing. It paints a very clear picture of the strategy and whether that strategy is being operationalized correctly.

#8 What are some examples of ML turning the wheel?

Processes abound in the workplace. Some sets of processes come together to form a workflow. Within that workflow it might be possible for ML to help engage in a turn of the wheel. That specific application of ML could help push things forward, nudge things along, or even keep the train on the tracks if that is the desired outcome. Some of the ways I have seen ML turning the wheel include recommendations, detection, sorting, and assistive deployments. Every one of those possible turns can help add value and make ML part of ongoing strategic planning.

In the healthcare space, we are seeing more and more valuable diagnostic tools starting to show up. Models are being deployed that engage in complex detection from images. Some of those models have been highly accurate in identifying skin cancer or even checking for potential signs of atrial fibrillation. Each one of those advances helps turn the wheel just a little bit allowing automation of tasking to help push things forward. Even if these advances in the healthcare space turn out to only be assistive for physicians, they are still major contributions.

