Thinking about ML industry disruption

Last night I spent some time working on that virtual presentation for next month. Part of that work focused on finding a better way to share a meaningful narrative about calculating return on investment and aligning it to machine learning endeavors. Explaining decision making with a return on investment framework is difficult enough even in front of the right forum of interested listeners. Once you introduce the things machine learning can do, you end up with an entirely different conversation, especially around calculating return on investment and, ultimately, the break-even point. In terms of pure disruption to industries, the advances being made in machine learning are getting closer to deserving that label, but for the most part the disruption has not materialized. Autonomous driving is getting close, but the wave of disruption to the transportation industry has not occurred yet.

Instead of wholesale disruption you have a series of very powerful augmentations to workflows, and specific tasks have been replaced by either an API call or an automated process. My argument is that you really have to know the inflection point where machine learning efforts will move from augmenting workflows to disrupting industries. That point in time will be where the technology possibilities frontier (TPF) curve sees a radical outward shift, creating disruption for the processes and technologies that now lag behind the new curve position. In this use case the notion of a production possibilities frontier curve would be inadequate to describe the disruption. Advanced machine learning implementations would be able to wholesale replace the previous production elements being described, or augment them to the point where at least two curves would be required to describe the possibility shift.

You can probably tell that I have been actively thinking about how machine learning will disrupt industries and create a large degree of change. I’m not evaluating any degree of social change caused by the disruption; this analysis, at the moment, is just about impacts to industry and business in general. Within that scope my focus has been specifically on which business use cases will be the most common and how those use cases will create disruption. I’m trying to figure out where the current bleeding edge of machine learning technology will intersect with production use cases at scale. We are starting to see significant implementations of machine learning elements in workflows, but those elements are augmenting task completion instead of introducing autonomous action. Maybe my expectation of seeing autonomous action is unreasonable. That could be the case right now, and it would mean my current inquiry is going to fall short of proving any type of hypothesis. Honestly, that is an acceptable proposition. It could just be that, within the current technology landscape, things have not progressed enough to prove a hypothesis that autonomous action from machine learning is the inflection point to industry disruption. My argument would be that defining a potential inflection point would be enough to set the groundwork for a meaningful research project that makes a contribution to the academy. A better way of saying that is that the research in question would be a significant contribution. Conducting research that does not make any contribution to the greater academy of knowledge curation seems like a false start. Maybe that distinction is a line in the sand that is just personally important to me when allocating my time to something.

The entire rest of the day is going to be devoted to working on a series of topics defined by my efforts from last night. Enough progress was made describing what needed to be researched that I’m about one good whiteboard session away from being able to build an outline of the best possible version of the presentation I could give, and potentially transition the transcript of that effort into an academic paper or manuscript. This is probably going to be a two-step effort where a thirty-minute presentation is delivered, and after that the content will need to be essentially replatformed into an academic format. I do think it is intellectually interesting that such a significant disconnect exists between the formats for presentations and research papers. Both are created to communicate something to an audience. You could distill both into the written word and share it that way, and a reader consuming that written word could derive meaning from it and learn something. Strangely enough, the academic-style research paper is structured very differently from a presentation. Generally speaking, if you got up and started reading an academic paper published in a prestigious journal to an audience at a conference aligned to the topic, the goal of successfully communicating the content might not be achieved: your literature review and presentation of research methods would probably eat up your entire 30-45 minute presentation time.


My adventure with editing video from my new Sony ZV-1 camera has gotten off to a rocky start. Thanks to a dialogue box, I just found out that, “Your version of PowerDirector doesn’t support this feature.” The very basic thing I was trying to do was drag and drop a 4K video clip into the editor interface. It appears I’m going to have to download some other version of PowerDirector and see what happens. Right now PowerDirector 365 is downloading. During the installation process I got an error that the other version of PowerDirector was running and had to be shut down before I could retry the installation. That was easy enough to fix. Using the Android application version was much easier to manage. This time around I had to buy the software and install some application manager, which installed the software for a third time on my computer today. Yeah… that was exciting. I’m going to have to make sure the copies replaced each other and I don’t have 3 different installation folders on my application hard drive. I use a disk separate from my storage space and the operating system to stage software installations. I’m not sure how smart a strategy that is, but at the moment it works well enough. After loading the software this time around, the application did optimize for GPU encoding. That took about 2 minutes of analyzing and I’m hoping it was beneficial.

I’m now going to paste files C0021 to C0124 in chronological order into the new PowerDirector 365 project and start the editing process. Within my installation of Windows 10 I was able to copy all the files from the hard drive and paste them into the project at one time instead of having to do it one video at a time. This effort moved 105 files into the timeline and made the video one hour and three minutes long. We will see what happens to the runtime as the editing continues. The files were pasted into the timeline a little out of order, so I ended up removing them and adding them back from the video selection panel using a cut and paste. That worked fine, and clips C0021 to C0124 were in order. I’m exporting them right now into a single MP4 file. It looks like the 4K video export process with GPU acceleration will take about 45 minutes to complete. The original files were stored in an MP4 format by the Sony ZV-1 camera and took up 63.1 gigabytes on the memory card. I must have done something wrong in the export process: the exported file came out at 23.1 gigabytes, which means the export crunched out about two thirds of the data from the original files.

That video size problem took a few minutes to figure out. My original export had used the H.265 HEVC format with a setting of MPEG-4 4K 3840 x 2160/30p (37 Mbps). HEVC was not the right codec to select given the original video format. The Sony ZV-1 recorded using MP4/XAVC S UHD 4K (3840 x 2160) encoding. It turns out that in PowerDirector I needed to pick XAVC S and select the XAVC S 3840 x 2160/30p profile name/quality setting to better match the original content. I’ll give that an export later today to see what happens to the file size. The upload to YouTube of the unedited export in H.265 HEVC will take over an hour. I was able to do the full export using the right quality setting and it still crunched the original files down from 63.1 gigabytes to 38.9 gigabytes. The size reduction was smaller, but the overall size was still cut heavily. I’m curious to see if any difference is actually visible on my monitor between the two files after they are uploaded to YouTube.
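As a rough sanity check on those file sizes, bitrate times duration predicts the size of an export before you run it. Here is a minimal sketch in Python; the 100 Mbps figure for the ZV-1's XAVC S 4K mode is my assumption, and real files run larger because of audio tracks and container overhead:

```python
def estimated_size_gb(bitrate_mbps: float, duration_s: float) -> float:
    """Estimate video file size: megabits/s * seconds, converted to gigabytes."""
    return bitrate_mbps * duration_s / 8 / 1000  # Mb -> MB -> GB (decimal units)

# The one hour, three minute timeline, in seconds.
duration = 63 * 60

# H.265 HEVC export profile at 37 Mbps: roughly 17.5 GB of video data.
hevc_gb = estimated_size_gb(37, duration)

# XAVC S UHD source, assuming a ~100 Mbps recording mode: roughly 47 GB.
xavc_gb = estimated_size_gb(100, duration)
```

The gap between the two estimates is why the 37 Mbps HEVC export came out so much smaller than the source footage: the codec setting alone caps the output at a fraction of the recorded bitrate, independent of any visual quality difference.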
