Day 3 of the Ai4 2020 conference

The content from today was partially pulled together from notes taken throughout yesterday. Writing and adapting from notes is sometimes a little easier than tackling the blank page. That was the case today.  

I’m going to mess around with the H2O.ai product stack later today or maybe this weekend. After watching Sri Ambati talk about democratizing AI yesterday, I’m curious about the inference engine they have built out. One of the things I took away is that I want to spend more time learning about adversarial model testing.

During the course of the day yesterday I spent about 15 minutes just unsubscribing from emails that I never asked to get in the first place. They just show up in waves full of advertising nonsense.

One of the things I realized yesterday while listening to my own pre-recorded session was that it is sort of weird. You cannot change anything along the way. You are locked in, listening to your own words with an audience of other people who are pretty much just listening along as well. It is strange to sit in the chat session during that whole process, ready to answer questions as they come up. On the other hand, it was less stressful and gave me a better chance to answer questions in real time.

During the second day of the 2020 Ai4 digital conference I attended a few sessions. Here is the list of those sessions:

  • General Session Keynotes
  • At some point later, I want to watch the replay of the “The Fight for Fairness and Transparency in AI” session
  • A Comprehensive Guide to MLOps: Planning for the New Normal in a Time of COVID
  • I attended my own session that was billed as “Figuring out applied ML: Building frameworks and teams to operationalize ML at scale” which was really my “Applied ML ROI – Understanding ML ROI from different approaches at scale” talk
  • AI Learning & Development Panel
  • I’m going to try to watch, “Data at REST? Unlocking Data from REST APIs,” later. I missed it this time around.
  • Ethical AI Digital Assistants

During the third day of the 2020 Ai4 digital conference I’m probably going to get to listen to fewer sessions, but it has been fun so far.

Shifting some focus around

Throughout the day yesterday I attended the first day of the Ai4 2020 digital conference. With the conference’s focus being on machine learning and artificial intelligence, it seemed like a good place to spend some of my time. The events during the conference ran pretty much back to back, and I attended as many of the virtual talks as I could with everything else going on throughout the day. I had almost expected the videos to be available on demand after the talks completed. Some of the talks on machine learning monitoring and deployment overlapped, and I wanted to go back and listen to a few of the ones I missed. They will probably all be on YouTube later, so I’ll be able to circle back to a few of the videos. It should be easy enough to make a list from the conference program. Given that the program is all digital, I’ll need to remember to save a PDF of it.

Looking back on yesterday, it was really nice to take the day off and focus on a few different things. That is the nice part about attending conferences. You get to dig into topics at an expert level in short bursts. It lets me engage in shifting some focus around, which is the key to rallying my thoughts later on toward something else. Being able to spend some time thinking about different and diverging topics helps me focus back in on other things. Sometimes gaining a little bit of perspective goes a long way toward understanding the bigger picture surrounding something. Expertise can pull you so far into a topic that the details obscure the bigger picture. That is one of those things you want to avoid.

You might be wondering about what talks I attended yesterday during the first day of the Ai4 2020 digital conference. Really I make these lists as a record to be able to look back on later if I wanted to dig into something more. Here is the list:

  • General Session Keynotes – This 3-talk package had a few technical difficulties at the start, but they got them fixed. This is one I would go back and watch again later on YouTube. 
  • How Ambient Clinical Intelligence is Transforming the Provider-Patient Experience – This one had a really interesting use case and examples during the talk
  • Research to Prod: Getting ML into real products at Bespoke – I would like to read more about this one. I might dig around for whitepapers later
  • Monitoring Models at Scale – I have the whitepaper on this one and will be reading it later this week
  • Natural Language Processing – Challenges and Opportunities in Healthcare
  • AI & RPA for the Finance Sector
  • The State of AI in Healthcare

Normally I take better notes during conferences on the events I’m able to attend, but I was moving around the house and doing things throughout the day so this experience was mostly listening to the conference instead of sitting down in front of a keyboard where I could actively take notes. That was a really long sentence. I went back and read it again, but did not take the time to break it into three parts. That was probably an oversight on my part. I must not be sorry about it or it would have been corrected.  

Things don’t pick up again within the conference until later in the day. I have a few hours to work on other things this morning.

Closing out a few adventures

Things went well enough this week. I’m glad to have closed out my adventures with virtual presentations. Delivering a talk while sitting at my desk is a lot different from walking out onto a stage and delivering one. The emotional experience is different. The energy in the room is missing. It feels a lot more like someone pointed your way at a specific time and said go. You have to be ready to deliver at that exact moment, and you don’t get any of the natural adrenaline from stepping out onto the stage. In my case nothing was happening in the background. It was absolutely quiet, and I just had to make eye contact with the webcam and try to keep the pace constant.

This time around I dropped notes into the bottom of the PowerPoint slides, with about 10 things per slide that I wanted to make sure got covered. That way, if for some reason I needed to look down at my notes, it was easy to do. One of the things I do when giving a talk in person is set a piece of paper on the podium with the general outline of the talk on it. That gives me an easy-to-view order of the stories to tell in case I need that reminder along the way. The point of the last two paragraphs is that giving a talk in person is a lot different. You have a sense of the mood of the audience. You have context from the previous talks. Most of the time you know a bunch of people in the room. Delivering a talk online is totally different. Honestly, to me it felt more like recording a training session than delivering a talk. I could have written down the entire thing and delivered it like reading from a teleprompter. That is something I have practiced, and it works well enough if you are used to that tempo of delivery.

One of the more interesting pieces of technology I got to use during the talk was a virtual background. In the Zoom application settings you can open the “Virtual Background” tab and quickly choose a virtual background or upload one. In my case it was easy enough to add the image provided by the virtual conference team. I just remembered that it is probably a good idea to go back into the Zoom settings and revert the virtual background to none before the next time I attend a meeting on this desktop. I was surprised at how easy it was to turn on the virtual background and immediately disappointed at the rendering and how strangely it responded if you moved around quickly. Maybe the selected image made it hard for the algorithm to render the virtual background in real time, or maybe it is just one of those features that works well enough to be ok, but not well enough to be awesome. The sad part about the virtual background is that those folks did not get to see my office as a backdrop during the talk. This is disappointing solely because I had cleaned my office and even gotten out a level to make sure the picture frames were perfectly lined up. It was probably time for me to clean up my office anyway, and I’ll have to get over it being blocked out by the virtual background.

On a side note the ability to have a virtual background is probably a really awesome feature for people who do not have a home office like the one I’m occupying at the moment. Being able to just put in a professional background is probably a great feature to introduce to help make meetings more professional no matter where you are. You could be sitting at an airport and it would look like you had a professional background. At the airport you would really have to hope your noise cancelling microphone and the software that algorithmically focuses on your voice were working seamlessly.

Working on a few things

Today is going to start off with a few thoughts about a presentation I’m working on related to machine learning. Thanks to the advent of the massive digital-only online conference, it looks like I’m going to be able to record the presentation ahead of time as a video file and submit it instead of delivering the talk live. I’m still planning on delivering a live talk online next month, but this other opportunity seems to allow the submission of a recorded video file. That will be a good opportunity to use my new Sony ZV-1 camera to record some highly professional looking presentation video. That camera captures way better video than my Logitech Brio webcam. Hopefully the presentation is judged by the audience based on its content, but solid video will help keep the focus there instead of on critiques of my ability as a cinematographer or my internet bandwidth during the presentation. Watching 4 tech CEOs testify in front of Congress this week made it very clear that people do pay attention to video quality and what people have in their offices. I’m going to make sure that my video quality is better than what they managed to present. My bookshelves in the background are just full of books from my years in college. At this point, they are what they are: shelves full of books that represent little chapters in my educational experience. Thankfully the Sony ZV-1 will ensure my video quality is solid right out of the gate without a lot of effort on my part.

What I’m really curious about is if they needed one continuous video or if they wanted it edited with the slides embedded into the content. The ability to record the presentation in sections would make it a lot easier to functionally complete. Maintaining energy and focus for 5 minutes is a lot easier than trying to sustain the same presentation level energy for 30 minutes. That is something I’m going to work on this weekend and see what happens. The content is geared toward a 30 minute presentation and this video needs to be just over 20 minutes. If I’m doing my own editing and transitions between slides, then I could really do some interesting things with the video in PowerDirector 365 to ensure it is a really solid presentation. That is where my thoughts are right now and that is probably not a bad place to be this morning. My focus needs to be more geared toward producing presentations and academic papers. That is one of those things that will always be true. Every year I’m supposed to publish three academic papers in journals to keep up with my academic peers. That is a goal that I’m aware of and have fallen short of achieving. On the brighter side of things it is always possible to achieve that goal moving forward. It is never possible to achieve it retroactively so my focus needs to be on the potential of my future actions instead of dwelling on the academic consequences of my procrastination. 
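As a side note on that recording plan, the segment math is easy to sketch. The 5-minute take length and the just-over-20-minute target come from the notes above; the exact 21-minute figure below is my own guess at “just over 20 minutes,” not anything the conference specified.

```python
import math

# Planning figures from the notes above; TARGET_MINUTES is an assumed
# stand-in for "just over 20 minutes".
TARGET_MINUTES = 21
SEGMENT_MINUTES = 5   # energy is easier to sustain in short takes

takes = math.ceil(TARGET_MINUTES / SEGMENT_MINUTES)
per_take = TARGET_MINUTES / takes
print(f"{takes} takes of about {per_take:.1f} minutes each")
# 5 takes of about 4.2 minutes each
```

Five roughly equal takes also gives a natural place for slide transitions during editing.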

Apparently, I need to make a decision today about paying for a Pandora premium subscription or letting that expense lapse. I know that is the case thanks to a giant banner at the top of my Pandora instance. Over the years I have paid for the subscription on and off, and I’m not entirely sure why I go one way or the other. My overarching goal, of course, is to remove subscriptions from my path; they need to be really compelling to stick around. That is a strategy a lot of people adopt to help foster savings instead of monthly expenses. Maybe next month I’ll focus on listening to the well over 50 vinyl records sitting next to me in my office instead of streaming content. That is probably one way to go that will be more interesting, but it will also be a lot more work.

Practicing a virtual presentation

For the next few days, I’m going to be practicing delivering a virtual presentation. Instead of standing up on stage and speaking to a crowd, this presentation is going to be delivered virtually from my office chair. This format will neutralize one of my best speaking skills: audience engagement. Reading the crowd and adapting to the emotion of the room is a lot easier when you can see the people. At a conference you get the benefit of hearing a ton of other talks and seeing which parts of a talk get the best reactions. That is something I actually spend a lot of time thinking about. I’ll spend more time and go deeper into topics that the audience might enjoy more. During the course of listening to virtual conference talks, things always seem more rehearsed and the direct audience reaction is more limited. Generally I just click on links to talks and let them play on one side of my monitor while working on something else. The dynamic of a virtual presentation is totally different.

I’m working on practicing the delivery of my talk, “Demystifying Applied ML: Building Frameworks & Teams to Operationalize ML at Scale.” Within the body of that talk are three core topic areas related to ROI, ML frameworks, and teams. Right now I could hit record and deliver the talk, and each of those content areas would get 5-10 minutes of coverage. The way I build out the delivery of a talk is not really based on reading from slides. I try to have a series of topics or very short taglines that signpost the content being delivered. Those core elements get covered during the presentation, but the exact phrasing changes with every delivery. Within a virtual presentation I’m not going to be able to adapt the talk to the audience. That probably means that practicing the delivery of a virtual presentation is about delivering the best possible version of the talk.

My practice method is usually a daily delivery cadence for 15-20 days before the talk. This is a big investment of my time given that delivering a 30 minute talk and then listening to the recording is a commitment of about an hour a day to the presentation. At this point, I’m willing to make that investment and it should help ensure the virtual presentation is delivered in a well rehearsed and cohesive way for the audience. In practice, the recording method is usually just me talking to the audio recorder on my Pixel 4 XL smartphone and then listening back to the recording. The part of the process that helps refine the talk during each iteration is listening back to the content being delivered. My preference is generally for a more extemporaneous style of presentation, but in this case I’m going to try to refine the talk as much as possible before delivering the content to a virtual audience. 

Talking for 30 straight minutes is not something that I normally do on a daily basis. Even during the course of a presentation I prefer answering questions throughout the talk and engaging in some lively debate. That type of interactive exchange is what I expect in the classroom and prefer even during the course of a presentation. I’ll be curious to see if the virtual presentation format includes a method to receive audience questions throughout the talk or if they get queued up at the end.

Presentation Topic Area: Machine Learning

Title Version 1) Figuring out applied ML: Building ROI models, repeatable frameworks, and teams to operationalize ML at scale

Title Version 2) Demystifying Applied ML: Building Frameworks & Teams to Operationalize ML at Scale

Description: Solving hard business problems requires operationalizing ML at scale. Doing that in a definable and repeatable way takes planning and practice. Understanding how to match the deep understanding of subject matter experts to the technical application of ML programs remains a real barrier to applied ML in the workplace. Understanding applied machine learning models with strong potential return on investment strategies helps make delivery a definable and repeatable process. 

3-Audience Takeaways

1. Beginning to think about the process of building machine learning ROI models
2. Setting the foundation for defining repeatable machine learning frameworks
3. Building teams to operationalize machine learning at scale

Working on a new presentation

This week my time is going to be spent working out my new speaking presentation. It is about time to get going on that project. At the moment, the working title of that talk is, “Effective ML ROI use cases at scale.” I’m not totally sold on that title, and that might be why it is taking me so long to finish this presentation. Back in November I gave a talk in New York City titled, “Figuring out applied machine learning: Building frameworks and teams to operationalize machine learning at scale.” Thinking back on that now, it was a very long title for a talk, and it was a very different time before quarantine and the pandemic. Building that presentation ended up producing roughly 5,000 words that were recorded into MP3 format for easy listening. You can find that content here:

You can find about 5,000 words and an audio recording from a previous talk by following the link…

Writing this new paper is going to include a few different exercises along the journey. To help include you in the adventure, I’m going to try to describe the process before it starts. Generally, I have used two different writing strategies to build out new presentations. One of these might work for you, or you might need an entirely different writing strategy. First, sometimes I just sit down and write the presentation from start to finish. That has happened a few times: driven by the tailwinds of inspiration, a paper goes from start to finish in one solid writing session. With that strategy you have to wait for the spark to strike, and the paper just ends up happening. Second, I will take out one of my notebooks with blank pages, sketch out the structure of the paper, and then start filling out the necessary sections like setting random bricks in a wall. That analogy does not work in practice, because in the world of writing you can generally work on any part of the paper. That is the power of imagination within the process. Using a little bit of imagination, you don’t have to build the paper from the bottom up like setting bricks in a wall.

Seriously, I’m not even entirely sold on the current writing project. It is a work in progress to be sure. Three different titles have received attention: “Effective ML ROI use cases at scale,” “Building effective ROI ML use cases,” and “ML use cases at scale with effective ROI.” The title could still change along the way. Right now the structure of the presentation is probably going to center on 5 solid ML use cases and how the ROI is calculated for those examples. That is probably all it will take to round out the presentation. My best bet to get this done is to start a shell in Microsoft PowerPoint tonight and build out the slides one at a time. Completing the presentation in PowerPoint will let me have all my thoughts lined up and ready to present. The next step in the process would be to write out the complete talk. Working through that plan will generate another roughly 5,000-word block of prose that could be easily converted into some type of academic paper. It is possible that the paper will only cover the best use case, or perhaps the machine learning return on investment model itself.
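As a rough sanity check on that 5,000-word figure, a commonly cited conversational speaking range is somewhere around 130-160 words per minute, which puts a script that length right in 30-minute-talk territory. The rates here are rule-of-thumb assumptions, not measurements of my own pace:

```python
# Rough pacing check for a ~5,000-word talk script. The words-per-minute
# values are a commonly cited conversational range, not measured rates.
WORDS = 5000

for wpm in (130, 150, 160):
    minutes = WORDS / wpm
    print(f"{wpm} wpm -> about {minutes:.0f} minutes")
# 130 wpm -> about 38 minutes
# 150 wpm -> about 33 minutes
# 160 wpm -> about 31 minutes
```

So the same written-out script does double duty: it paces a half-hour talk and converts into paper prose.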

Interrupted. School.

A little bit of wondering

Oh, some wondering is happening. A little bit of thinking. A little bit of planning. My thoughts were all over the place. Last night I sat down to write some notes about attending a conference this week in California. At some point, I’m going to spend some time really digging into and writing about the talks and the experience of getting to meet so many folks interested in machine learning. Instead of doing that writing at the moment, I need to spend some time working on week 2 of the “Crash Course on Python” by Google on Coursera. It feels like forever since I did the first week of coursework. This weekend will be a good opportunity to jump in and give the course the attention it deserves.

Day 3 #GlobalAI Conference Santa Clara

Greetings from Day 3 of the #GlobalAI Conference. I’m speaking at two sessions today during the conference. Earlier today I went ahead and shared the slide deck and a pre-read in MP3 format on LinkedIn. 

Here are the talks I listened to today with a few notes:

Business Track: The Autonomous Pharmacy: Applying AI and ML to Medication Management Across the Care Continuum (Ken Perez) – This was an interesting way to start the day. A lot of this talk focused on targeting and adherence. Both of those areas are about helping people get and sustain care. 

General Keynote Session: The Pros and Cons of Automated Machine Learning in Healthcare (Sanjeev Kumar) – This talk really dug in and tried to address silos and data quality. Those are two things that make it very hard to use dispersed and highly inaccessible data from legacy systems. 

General Keynote Session: Google’s Journey to AI-First (Chanchal Chatterjee) – This was a really fun talk. Everybody really enjoyed it. The 3 hour version of this talk would have been epic.

Day 2 #GlobalAI Conference Santa Clara

http://www.globalbigdataconference.com/santa-clara/4th-annual-global-artificial-intelligence-conference/schedule-121.html

My notes from yesterday were a little bit unorthodox. A lot of them were just links or a couple words to learn more about. I’m going to head over to the conference here in a little bit. At the end of my Day 2 notes I’m going to just add a little bit of commentary about the talks I attended. 

Day 2: Recap of what I learned and what talks I attended… 

Business Track: Virtualizing ML/AI and data science workloads (Michael Zimmerman) – This talk was very interesting and Michael extolled the virtues of reading the paper listed below.

https://papers.nips.cc/paper/5656-hidden-technical-debt-in-machine-learning-systems.pdf

General Keynote Session: Implicit deep learning and robustness (Laurent El Ghaoui) – This presentation was really solid. When I convert my presentation into a full paper, I hope the formulas are as elegantly presented as what Laurent was able to produce. This talk really set the bar for walking through and presenting formulas.

General Keynote Session: Pitfalls and panacea: AI and Cybersecurity (Wayne Chung) – It turns out my USB Type-C to HDMI cable works. It got an actual field test during this presentation. This turned out to be a very interesting talk full of security content and timelines. It was highly engaging.

Technical Track: AI methods for Formal Reasoning (Christian Szegedy) – Talked about moving AI efforts into a direction of true understanding and reasoning. I took a screenshot of a bunch of papers that are going to be added to my reading list.

Technical Track: Challenges in machine learning from model building to deployment at scale (Anupama Joshi) – This talk really dug into the ML software development life cycle and how that is managed.

Workshop: Tensorflow.js : Machine Learning In and Out of the Browser (Brian Sletten) – Talked about using machine learning at the edge. This was a longer several hour session that went into more detail. I did not take a ton of notes. This session was more hands on with the product.

Day 1: Recap of what I listened to during the day… 

Workshop: Building Real World AI Solutions (Alexander Liss & Michael Liu) – This was 4 hours of workshop related to using TensorFlow for machine learning on AWS. 

Technical Track (Finance): Group Theory, Chaos and Financial Time Series (Revant Nayar) – This is one of those presentations that needed a much better projector setup. It was hard to read the notations on the screen. I’m going to see if I can get the deck later to dig into it a little bit more. 

Technical Track (Finance):  Machine Learning In Finance (Chakri Cherukuri) – The visualizations used during this presentation were really top notch. I have been impressed with the team from Bloomberg. 

Technical Track: Image Augmentations for Semantic Segmentation and Object Detection (Vladimir Iglovikov) – This talk was sort of a pitch for using Kaggle and competing in machine learning competitions. I actually wish Vladimir had just leaned into it and really talked about what it takes to compete and how that process worked. The talk really dived into the results and not the mechanics of how those competitions occur. I’m going to spend some time learning about Kaggle on the flight home tomorrow. 

Finance Track : Data in Finance/Banking (Ryan Lee) – This talk could have gone a little bit deeper into the management and use of big data. 

Other topic…

I have decided that getting a paper accepted at NIPS is a noble pursuit. 

https://medium.com/machine-learning-in-practice/nips-accepted-papers-stats-26f124843aa0

Working on some deliverables

Routines have given way to a nearly endless stream of things that needed to be delivered throughout the last few days. That happens from time to time. Tasks pile up and end up crushing routines. I spent the last few minutes of the day working on a few things that needed to be done. One of them included putting together a little bit of content for a conference this summer.

DATAx 2020 Conference Topic Area: Machine Learning

Session Description (100 words inclusive of title):

Title: Figuring out applied ML: Building ROI models, repeatable frameworks, and teams to operationalize ML at scale.

Description: Solving the hard problems requires operationalizing ML at scale. Doing that in a definable and repeatable way takes planning and practice. Understanding how to match the deep understanding of subject matter experts to the technical application of ML programs remains a real barrier to applied ML in the workplace. Understanding applied machine learning models with strong potential return on investment strategies helps make delivery a definable and repeatable process.

Well, that worked out to a total of 87 words. Maybe I should sit down and write another sentence to flesh out the full 100-word quota.
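Checking a cap like that is a one-liner. A naive whitespace split is how I would sanity check it; different counters treat punctuation, hyphens, and labels like “Title:” differently, so the total can drift a word or two from whatever the conference uses.

```python
# Word-count check for the session description above (capped at 100 words,
# inclusive of the title). Naive whitespace split; other counters may differ.
title = ("Figuring out applied ML: Building ROI models, repeatable "
         "frameworks, and teams to operationalize ML at scale.")
description = (
    "Solving the hard problems requires operationalizing ML at scale. "
    "Doing that in a definable and repeatable way takes planning and practice. "
    "Understanding how to match the deep understanding of subject matter "
    "experts to the technical application of ML programs remains a real "
    "barrier to applied ML in the workplace. Understanding applied machine "
    "learning models with strong potential return on investment strategies "
    "helps make delivery a definable and repeatable process."
)

total = len(title.split()) + len(description.split())
print(f"{total} words; {100 - total} under the cap")
```

Either way, there is room for one more sentence under the cap.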

3-Audience Takeaways

  1. Beginning to think about the process of building machine learning ROI models
  2. Setting the foundation for defining repeatable machine learning frameworks
  3. Building teams to operationalize machine learning at scale

Well, that is the content I needed to generate before the end of the day. Tomorrow, I need to spend some time working on some new slides. That is going to take a little bit of focus. Some of that content was sketched out the other day by hand. Maybe I should have started with the end product in mind instead of some back-of-the-napkin sketches on this one. That might have helped turn the slides into reality a little bit faster. This approach is really both delaying the final product and maybe improving it. Sometimes you have to produce a couple of drafts of something to get to the finish line. Other times you only need to sit down and write it one time to create the final product.