Nels Lindahl — Functional Journal

A weblog created by Dr. Nels Lindahl featuring writings and thoughts…

Month: May 2020

  • Those pesky no module errors

    Throughout the last five days I have been sprinting forward trying to really understand how to use the GPT-2 model. It took me a lot longer to dig into this one than my normal mean time to solve. Documentation on these things is challenging because of two distinct factors that increase complexity. First, the instructions are typically not purely step by step for the reader. You have to have some understanding of what you are doing to be able to work the instructions to conclusion. Second, the instructions were written at a specific point in time, and the dependencies, versions, and deprecations that have piled up since are daunting to overcome. At the heart of the joy that Jupyter Notebooks create is the ability to do something rapidly and share it. Environmental dependencies change over time, and once working notebooks slowly drift away from being useful toward being a time capsule of perpetually failing code. That in some ways is the ephemeral nature of the open source coding world that is currently expanding. Things work in the moment, but you have to ruthlessly maintain and upgrade to stay current on the open source wave of change.

    My argument above is not an indictment of open source or of code that depends on specific versions and libraries. Things just get real over time as a code base that was once bulletproof proves to be dependent on something that was deprecated. Keep in mind that my journey to use the GPT-2 model included working with a repository that was published on GitHub just 15 months ago with very limited documentation. The file with developer instructions did not include a comprehensive environment list. I was told that this is why people build Docker containers that can be a snapshot in time deployed again and again to essentially freeze time. That is not how I work, though; when I’m actively developing I just code in real time. My general use case is to sit down and work with the latest version of everything. That might not be a good idea, as code is generally not assumed to be future proof. An environment dependency file would serve as a signpost for future developers to know exactly where things stood when the code base was shared to GitHub.
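    Leaving that kind of signpost does not take much effort. As a minimal sketch, assuming the standard pip tooling and with the version pins below being purely illustrative, freezing the environment the moment the code runs cleanly would look like this:

    # Capture the exact package versions that worked (PowerShell or any shell)
    pip freeze > requirements.txt

    # requirements.txt then records pins that a future developer can restore in one step:
    #   tensorflow==1.15.2    (illustrative pin)
    #   numpy==1.18.4         (illustrative pin)
    pip install -r requirements.txt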

    Digging into the code base for the last five days has really been an adventure, and it has been fun and full of learning. Digging into something for me involves opening the code up in Microsoft Visual Studio Code and trying to understand each block that was shared. The way I learned to tinker with and edit Python code was one debugging session at a time. I’ll admit that learning was a lot easier in a Jupyter Notebook environment. That allows you to pretty much run each section one after another and see any errors that are spit out, so you can work to debug the code and get to that perfect possible future of working code. Oh, that resplendent moment of working code where you move on to the next problem. It is a wonderful feeling of accomplishment to see code work. It is a supremely frustrating feeling to watch errors flood the screen or, even worse, to get nothing in return beyond obvious failure. Troubleshooting general failure is a lot harder than working to resolve a specific error. Right now, between the two sessions of my Google Chrome browser, I have maybe 70 tabs open. On reboot it is so bad that I end up having to go to browser settings, history, and recently closed to bulk reopen this massive string of browser tabs that at one point were holding my attention.

    One of the best features I learned about in GitHub was searching for recently updated repositories. To accomplish that I searched for what I was looking for and then sorted the results by last update. Based on the problems described above, that type of searching was highly useful for learning the right environmental setup necessary to do the other things I wanted in a Google Colab notebook. On a side note, when somebody publishes a notebook from Google Colab to GitHub, enough bread crumbs exist to find interesting use cases by searching for “colab” plus whatever you are looking for from the main page of GitHub. Out of pure frustration while learning how to set up the environment, I used searches filtered to most recently updated for “colab machine learning” and “colab gpt” to get going. Out of that frustration I learned something useful about just looking around to see what people are actively working on and taking a look at what they are actively sharing on GitHub. My searching involved looking at a lot of code repositories that did not have any stars, reviews, or interactions. As my GPT skills improve I’ll make suggestions for some of those repositories on how to get their code bases working again, now that a lot of them are producing massive numbers of errors that essentially conclude in, “ModuleNotFoundError: No module named ‘tensorflow.contrib’.” That error is truly deflating when it appears. Given how important it is to a lot of models and code, I probably would have built handling for it into base TensorFlow when it was intentionally deprecated.
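    For what it is worth, most of those failures trace back to running old notebooks on TensorFlow 2.x, where tensorflow.contrib was removed. A minimal sketch of pinning a Colab notebook back to a 1.x runtime, with the exact version below being an illustrative assumption, looks something like this:

    # Colab (as of 2020) lets a notebook request the older TensorFlow runtime with a magic command
    %tensorflow_version 1.x
    # Outside of Colab, pinning the package works instead (illustrative pin):
    # pip install "tensorflow==1.15.*"

    import tensorflow as tf
    print(tf.__version__)                # should report a 1.x release
    from tensorflow.contrib import rnn   # this import only succeeds on 1.x builds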

    My next big adventure will be to take the environmental setup necessary to get the GPT-2 model working, work out the best method to ingest my corpus of 20 years’ worth of my writing, and see what it spits out as the next post. That has been my main focus in learning how to use this model and potentially even learning how to use the GPT-3 model that OpenAI announced earlier this week. Part of the fun of doing this is not messing with it locally on my computer and creating a research project that cannot be reproduced. Within what I’m trying to do, the fun will be releasing the Jupyter notebook and the corpus file to allow other researchers to build more complex models based on my large writing database, or to verify the results by reproducing the steps in the notebook. That is really the key part of the whole thing. Giving somebody the tools to freely reproduce the research on Google Colab without any real limitations is a positive step forward in research quality. Observing a phenomenon and being able to describe it is great. Being able to reproduce the phenomenon being described is how the scientific method can be applied to the effort.
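    A rough sketch of that ingestion step, with the folder name below being a placeholder and the <|endoftext|> marker being the delimiter GPT-2 tooling conventionally uses between documents, might look like this:

    # Stitch the exported posts into one training file separated by GPT-2's document delimiter
    from pathlib import Path

    posts = sorted(Path("posts").glob("*.txt"))   # placeholder folder of exported weblog posts
    with open("nlindahl.txt", "w", encoding="utf-8") as out:
        for post in posts:
            out.write(post.read_text(encoding="utf-8").strip())
            out.write("\n<|endoftext|>\n")        # delimiter between documents

    # The combined file then feeds the encoder from the fine-tuning repository:
    # python encode.py nlindahl.txt nlindahl.npz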

  • Day 5 with GPT-2

    Getting back into the groove of writing and working on things really just took a real and fun challenge to kickstart. Having a set of real work to complete always makes things a little bit easier and clearer. Instead of thinking about the possible, you end up thinking about the path to get things done. Being focused on in-flight work has been a nice change of direction. Maybe I underestimated how much a good challenge would improve my quarantine experience. Things have been a little weird since the quarantine came into being back in March, and it is about to be June on Monday. That is something to consider in a moment of reflection.

    I have been actively working in the Google Colab environment and on my Windows 10 Corsair Cube to really understand the GPT-2 model. My interest in that has been pretty high the last couple of days. I started out working locally in Windows, and after that became frustrating I switched over to using GCP hardware via the Google Colab environment. One of the benefits of switching over is that instead of trying to share a series of commands and some notes on what happened, I can work out of a series of Jupyter notebooks. They are easy to share, download, and most importantly to create from scratch. The other major benefit of working in the Google Colab environment is that I can dump everything and reset the environment. Being able to share the notebook with other people is important. That allows me to actively look at and understand other methods being used.

    One of the things that happened after working in Google Colab for a while was that the inactivity timeouts made me sad. I’m not the fastest Python coder in the world. I frequently end up trying things and moving along very quickly for short bursts that are followed by longer periods of inactivity while I research an error, think about what to do next, or wonder what went wrong. Alternatively, I might be happy that something went right, and that might create enough of a window that a timeout occurs. At that point, the Colab environment’s connection to the underlying hardware in the cloud drops off and things have to be restarted from the beginning. That is not a big deal unless you are in the middle of training something and did not have proper checkpoints saved off to preserve your efforts. I ended up subscribing to Google’s Colab Pro, which apparently has faster GPUs, longer runtimes (fewer idle timeouts), and more memory. At the moment, the subscription costs $9.99 a month and that seems reasonable to me based on my experiences so far this week.
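    The way to keep a dropped runtime from erasing a training run is to push checkpoints somewhere persistent. A minimal sketch using Google Drive, with the folder paths below being purely illustrative, would be:

    # Mount Drive so checkpoints survive a disconnected Colab runtime
    from google.colab import drive
    drive.mount('/content/drive')

    # After each training burst, copy the checkpoint folder over to Drive (paths illustrative)
    import shutil
    shutil.copytree('checkpoint/run1', '/content/drive/My Drive/gpt2-checkpoints/run1')
    # Note: copytree raises an error if the destination already exists, so clear or version it between runs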

    Anyway, I was actively digging into the GPT-2 model and making good progress in Google Colab, and then on May 28 the OpenAI team dropped another model called GPT-3 with a corresponding paper, “Language Models are Few-Shot Learners.” That one is different and has proven a little harder to work with at the moment. I’m slowly working on a Jupyter notebook version.

    Git: https://github.com/openai/gpt-3
    PDF: https://arxiv.org/pdf/2005.14165.pdf

  • Day 4 with GPT-2

    Throughout the last few days I have been devoting all my spare time to learning about and working with the GPT-2 model from OpenAI. They published a paper about the model and it makes for an interesting read. The more interesting part of the equation is actually working with the model and trying to understand how it was constructed and how all the moving parts fit together. My first effort was to install it locally on my Windows 10 box. Every time I do that I always think it would have been easier to manage in Ubuntu, but that would be less of a challenge. I figured giving Windows 10 a chance would be a fun part of the adventure. Giving up on Windows has been getting easier and easier. I actually ran Ubuntu Studio as my main operating system for a while with no real problems.

    https://openai.com/blog/better-language-models/

  • Day 3 with GPT-2

    My training data set for my big GPT-2 adventure is everything published on my weblog. That includes content spanning about 20 years. The local copy of the original Microsoft Word document with all the formatting was 217,918 kilobytes, whereas the text document version dropped all the way down to 3,958 kilobytes. I did go and manually open the text document version to make sure the content was still readable and structured.
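    A quick sanity check in Python, assuming the same file name that shows up in the commands below, confirms the conversion without having to scroll through the whole thing by hand:

    # Spot-check the exported corpus: size on disk, line count, and a readable sample
    from pathlib import Path

    corpus = Path("nlindahl.txt")
    text = corpus.read_text(encoding="utf-8", errors="replace")
    print(f"{corpus.stat().st_size / 1024:,.0f} KB across {len(text.splitlines()):,} lines")
    print(text[:500])   # confirm the structure survived the Word-to-text conversion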

    The first problem was probably easily solved, and it related to a missing module named “numpy”:

    PS F:\GPT-2\gpt-2-finetuning> python encode.py nlindahl.txt nlindahl.npz
    Traceback (most recent call last):
      File "encode.py", line 7, in <module>
        import numpy as np
    ModuleNotFoundError: No module named 'numpy'
    PS F:\GPT-2\gpt-2-finetuning>

    Resolving that required a simple “pip install numpy” in PowerShell. That got me all the way to line 10 in the encode.py file, where this new error occurred:

    PS F:\GPT-2\gpt-2-finetuning> python encode.py nlindahl.txt nlindahl.npz
    Traceback (most recent call last):
      File "encode.py", line 10, in <module>
        from load_dataset import load_dataset
      File "F:\GPT-2\gpt-2-finetuning\load_dataset.py", line 4, in <module>
        import tensorflow as tf
    ModuleNotFoundError: No module named 'tensorflow'

    Solving this one required a similar method in PowerShell, “pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.8.0-py3-none-any.whl”, which also included a specific path to tell pip where to get TensorFlow.

    I gave up on that path and went a different route…

    https://github.com/openai/gpt-2/blob/master/DEVELOPERS.md

    and

    https://colab.research.google.com/github/ilopezfr/gpt-2/blob/master/gpt-2-playground_.ipynb

  • A second day of working with GPT-2

    Getting the GPT-2 model set up on this Windows 10 machine was not as straightforward as I had hoped it would be yesterday. Python got upgraded, CUDA got upgraded, cuDNN got installed, and some flavor of the C++ build tools got installed on this machine. Normally when I elect to work with TensorFlow I boot into an Ubuntu instance instead of trying to work with Windows. That is where I am more proficient at managing and working with installations and things. I’m also a lot more willing to destroy my Ubuntu installation and spin up another one to start whatever installation steps I was working on again from the start in a clean environment. My Windows installation here has all sorts of things installed on it, and some of them were in conflict with my efforts to get GPT-2 running. In fairness to my efforts yesterday, I only had a very limited amount of time after work to figure it all out. Time ran out with the installation completed via the steps on GitHub, but no magic was happening. That was a truly disappointing scenario.
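    Before the next attempt, a quick sweep of what is actually installed helps narrow down the conflicts. As a minimal sketch, assuming the CUDA toolkit and a GPU build of TensorFlow are the pieces in question, the versions can be checked right from PowerShell:

    # Confirm which versions of the moving parts are actually on the machine
    python --version
    nvcc --version            # reports the installed CUDA toolkit release
    pip show tensorflow-gpu   # reports the TensorFlow build, if any, that pip installed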

    Interrupted. School.

  • Predicting my next post

    Yesterday, I started looking around at all the content I have online. The only base I do not have covered is probably the need to share a new speaking engagement photo online. I need to set up a page for speaking engagements at some point with that photo and a few instructions on how best to request an engagement. Every time I have done a speaking engagement my weblog and Twitter traffic picked up for a little bit. Using the “Print My Blog” plugin I was able to export 1,328 pages of content for a backup yesterday. My initial reaction to that was wondering how many of those pages were useful content and how much of it was muddled prose. Beyond that question of usefulness, I also wondered what would come out as the predicted next batch of writing if I loaded that file into OpenAI’s GPT-2. That is probably enough content to spit out something that reasonably resembles my writing. I started to wonder if the output would be more akin to my better work or my lesser work. Given that most of my writing is somewhat iterative and I build on topics and themes, the GPT-2 model might very well be able to generate a weblog in my style of writing.

    Just for fun I’m going to try to install and run that little project. When that model got released I spent a lot of time thinking about it, but did not put it into practice. Nothing would be more personal than having it generate essentially the same thing that I tend to generate on a daily basis. A controlled experiment would be to set it up, let it produce content each day, and compare what I produce during my morning writing session to what it spits out as the predicted next batch of prose. It would have the advantage or disadvantage of being able to review 1,328 pages and predict what is coming next. My honest guess on that one is that the last 90 days are probably more informative for prediction than the last 10 years of content. However, that might not be accurate based on how the generative model works. All that content might very well help fuel the right parameters to generate that next best word selection. I had written “choice” to end that last sentence, but it felt weird to write that the GPT-2 model was making a choice, so I modified the sentence to end with selection.
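    As a sketch of what that controlled experiment could look like, here is one possible approach using the gpt-2-simple wrapper; the library choice is an assumption for illustration rather than a settled plan, and the file name, step count, and prompt are placeholders:

    # Fine-tune the small GPT-2 model on the exported weblog pages, then sample a "next post"
    import gpt_2_simple as gpt2

    gpt2.download_gpt2(model_name="124M")           # fetch the 124M parameter base model

    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess, dataset="blog_corpus.txt",  # placeholder name for the exported pages
                  model_name="124M",
                  steps=1000,
                  run_name="weblog")

    # Generate a candidate post each morning and compare it to the morning writing session
    gpt2.generate(sess, run_name="weblog", length=500, temperature=0.8, prefix="Today")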

    Interrupted. School.

  • These are strange and different times

    Returning to form, or so it goes, takes a bit of effort. Any return to form without effort would inherently discount the journey. Shifting back without a bit of effort might just be acceptable right now. These are strange and different times. This will be the 9th day in a row of posting something to the weblog. That streak is starting to feel a little bit more normal. Every day my thoughts have started to get back into a more orderly form that can be turned quickly into prose. That is the key element in turning the corner and engaging in a bit of writing. Not only is clearing your mind enough to do nothing a skill, but allowing your stream of consciousness to spill out onto the screen as prose is a skill as well. Transforming thoughts almost directly into keystrokes in an effortless way is the hallmark of being in the writing pocket, and that feels like something that happens from practice.

    This weekend I have spent a lot of time thinking about what exactly election data can tell us about the state of civil society and the general degree of civility at large. Within the world of an election, the universe being examined could be all voters or it could be all people who could vote. Some of the best insights available could be about the people who take no action and choose to sit out of the election process. My first response to investigating that phenomenon was a simple series of thoughts about how maybe they did not know it was election day. It is entirely possible that for a lot of people government stands separate from the routines of daily life, and by extension voting stands apart from everyday life as well. Certainly some places have moved to mail-in ballots and have made it much easier to vote. Other places have gone the other way and made it much harder to participate in the voting process. Now we are starting to get somewhere in the analysis. Three potential reasons have jumped out: 1) people were unaware, 2) voting was easily ignored, or 3) voting was very hard. That set of thoughts certainly forms a continuum of sorts that could be expressed as some kind of Likert scale.

    My initial analysis has started at the congressional district level. My assumption is that I can reasonably roll up my congressional district based model to the state level and use a bit of a convoluted transform to get to a national outcome. Within a national election model, just using the general sentiment would express a popular vote based outcome, and that would not work all the time. Sometimes it would yield the correct result, but other times it might yield a false positive in a condition where having the most votes at a national level is not aligned to the outcome. That is a scenario that political scientists will be writing about for years to come. Social scientists in general will be studying that and how it influences both civility and civil society for decades. Seriously, that is not an overstatement. Our beliefs about how democracy functions are a very important part of how we engage in a social contract to participate in the normative routines that allow daily life to function as well as it does. Maybe this watershed event that is occurring now will create some type of shared experience that will help people better relate to each other, strengthening the very social fabric that protects democracy.

    I’m really starting to think that these are truly strange and different times. The lens through which we see the world and how we interact with things is changing every day as we experience a new normal way to interact with people, and a new normal way to visit stores and go about the routines that allow daily life to occur. We all have to figure out how to make meals on a daily basis. Eating is a shared and common experience across all of humanity. It is one of those things that should be a commonly shared experience, like voting for those that have reached a certain age. Outside of politics, people generally do a lot of similar things every day. All of those things could be modeled, and sentiment analysis could be used to figure out preferences based on them. Somewhere inside of that universe of possible analysis a small slice of things exists that my research is focusing on right now. That is where my research is dialing into understanding voter sentiment and preference within elections. At this point, I’m very focused on the key factor of participation and the sentiment around why a large portion of voters are opting out of the process that literally guarantees the stability of our daily routines.

    Given that I’m on my 9th day in a row of posting, it might be a good time to mention that most of my writing is created and posted without any real editing or revision. My routine is generally to sit down and write until the writing is done and then post it online before starting a new session of writing. My time is not normally spent on the same passage of prose engaging in rework and editing to produce a perfect product. What you are reading right now is really just how the thoughts translated from my head to the keyboard. For better or worse, that is generally how this weblog works and how prose is created to be published here. Some of it is grammatically correct and free of atrocious typos, and some of it is very clearly neither clear nor free of errors. One of the things that I do a lot is leave out a word that otherwise brings the flow of a sentence together. Some of that is just a weird thinking and typing problem where a word gets left out of a sentence. If you went back and read it, then you would immediately notice it and fill in the missing word. Most of the time that does not impact the meaning of what is being presented; it just creates a less than ideal situation for the reader who is wondering why proofreading was set aside or ignored. Please don’t wonder about it. I just elected not to spend my time editing the prose being created. Yeah, that is questionable.

    Sometimes I wonder if maybe every Sunday I should swing back, edit the last 7 days of work, and just leave a note at the end of each post that it was edited. Most of the time that thought occurs and is discarded. You can tell that analysis about discarding editing is accurate by scrolling back a day, a week, or even a month to see that it was not implemented. For the most part that type of effort is probably not going to be an active part of my routines. If it has not taken root in the last 20 years, then it is unlikely to start happening without some real effort to change my routine. My inaction on the editing front is probably a decent analog for figuring out why citizen participation in elections has been gradually declining. It is something I could do with a little time and effort, but I just elect not to do it over and over again. You can kind of get a feel for where my head is at the moment and what is at the forefront of my considerations as I dive into this area of analysis.

  • A desire to know more

    Day after day, my intraday note taking has been questionable. I used to be able to remember the list of things that happened every day and quickly turn those into readable prose. Adventures were easy enough to detail, but the day to day muddling through became harder and harder to describe. Year after year that has become more and more completely and totally true. It went from a shadow on my memory to a near wall around happenings. Taking notes is easy enough, and I have both Google Recorder and Google Keep ready to keep track of anything that stands out enough to be noted down. Maybe that is the crux of the problem: some things are not standing out and becoming memorable. Time is passing and things are being done, but those things do not meet a certain threshold to be memorable. Even being present in the moment is not enough to turn the corner on that one and set the foundation for something solid enough to report on or even something that deserves deeper consideration.

    Interrupted. Coffee. 

    Primarily, the word processing program I’m using for writing is Google Docs. I have a subscription to the latest Microsoft Word, but I keep going back to Google Docs and that has been the case for some time. Given that I create a new word processing document every day to begin my writing journey, it is easy to look at the file extensions and figure out what has been used. At this exact moment in time, I do not remember exactly why the switch happened. At the moment, it is more of a sustained habit than a data driven, deliberate choice of one over the other. Maybe that argument is equal to the one above about my intraday note taking. I don’t really know why my note taking or creation of things slowed, and maybe it is just more of a habit and no other real value should be ascribed to it.

    This cup of coffee is growing on me in terms of my appreciation of it as each sip gets consumed. Right now my Warren Zevon radio station on Pandora is pretty much doing everything but playing any Zevon. It might be better to grab a record from the shelf and let something spin this morning in a more analog fashion. My record collection has now grown to more than 50 vinyl discs stored on the shelf. That is an easy metric to know since the dust bags covering most of them came in a package of 50 sleeves. That means the last few on the shelf are going without that first batch of dust protectors. It was probably time to plan ahead and buy some more a while ago, and maybe that is something that will happen today. It will take a little bit of hunting around to get the same brand of dust protection sleeves.

    These are indeed strange times. People are trying to figure out how to get back to the routines of the past. I’m trying to figure out how to have better and more productive routines. Most of my time over the last few years has been spent in this office where I am writing right now, working on managing my desire to know more versus my need to create more prose. Part of that is just focusing my efforts on making contributions to society in general. My method of giving back has been contributions to the academy and conducting research. Over the years my efforts to give back and share my research methods have been sporadic and should have been better. That is easier to write down than to really accept or understand. Saying that my academic contributions should have been greater in frequency is probably the quintessential scholarly lament. In these strange times my efforts to take one step forward at a time and work toward a trajectory of making contributions is at least easier to understand and manage.

    This cup of coffee is now half gone and my thoughts are beginning to get a little more focused. Strangely enough, it took more than a page of meandrous writing to get to this point in the daily writing journey. Sometimes getting to this point in the journey never happens at all. The question really is what to do with this moment of self aware pontification. Perhaps the endless string of Bruce Springsteen songs playing will provide some counsel on what happens next. Some musicians have a proven history of recording album after album. That is similar to writing every day, but somewhat different. My writing efforts are typically created and exist until the next iteration. For better or worse, most songs are written in a different way. They are not created from start to finish and then performed again in a slightly different way to see what happens. The iterative format is mostly replaced with editing, collaboration, and rework. Very rarely do I ever spend the time to work and rework a paragraph or section of something to that type of quality or desired outcome. Something that is going to get published might receive that type of attention.

    Now that I’m sitting and thinking about that last paragraph, maybe the goal for this year is to get to that point of editing on a few things. Getting to the point of publication where that type of effort will be required is probably the desired outcome. It will be a crowded field of ideas this year. My interest in elections is shared by a wide array of thinkers, researchers, and actively publishing academics. That means the models and corresponding academic publications will need to be excellent to crowd out all the other things people are trying to publish. I’m not really worried about that. My efforts are always developed from scratch and coded up to do what I want them to do, making it easier to share something unique with the world. That last sentence is not a piece of prose I’m very happy with at the moment. I’m going to leave it, but this paragraph is a clear example of something that could be rewritten to be better. Perhaps a little bit of editing and rework would have made this missive way better.

  • A bit of meandrous writing

    Over the last two days the weblog picked up a lot more traffic than usual. I’m going to attribute that to posting on a more regular basis. It did not look like the posts were being read in order. That was a very curious thing that caught my attention. All the content here shared a common writer, but the topics being covered vary widely. I updated the “About Nels” page to suggest that new readers start with my 40th birthday weblog post. That was the post where I realized that my ability to carry a narrative thread along the way with me from post to post was lacking. Telling a really good sustained story day after day is a skill. Perhaps it is a skill that I need to learn to better master along the journey of daily writing. Getting back into the swing of writing has taken some time and things have been going well enough. I’m not talking about a solid 5 single spaced pages a day of productivity, but things are getting back to normal. 

    Throughout this Memorial Day weekend I’m going to spend some time reflecting in commemoration, and I’m going to work on understanding how to build infographics. My base election prediction model is pretty simplistic and should be really easy to turn into a series of graphics, but those graphics would not be interactive. My ability to build graphics is generally geared toward putting them in academic publications that are very static and not designed to be a living thing that people could tinker with and enjoy. Perhaps that is the beautiful and lasting contribution of Jupyter notebooks to the social sciences space. You can share the chart creation with others for the purposes of both replication and extension. Somebody could take and tinker with what was done to produce something interesting. That is where my time will be spent: at the intersection of building out my base election prediction models in some Jupyter notebooks and working toward sharing those on GitHub for others to be able to work with going forward.

    One of my intellectual hobbies over the years has been trying to extend sentiment analysis to electoral prediction using bots. Most of that effort was not micro-targeted; it was very macro level analysis based on tracking news media sentiment and assuming that sentiment was passed along to readers. One of the things that this last election cycle identified in the modeling is that the transitive property of sentiment has become weaker as a factor in any model and that political sentiment has become highly sticky. The assumption of sticky political sentiment creates a much different election modeling algorithm. We will see if it is effective in November. The trajectory of my academic work will be focused in this area for the rest of the year. I feel that is a solid place to put my efforts right now. Other researchers are focusing their attention on other things in this time of quarantine. This is where my attention will be focused, and hopefully it will help me both learn better interactive infographic creation skills and share a project based on election modeling. I’m going to build all my forecasting and projection models from the ground up so they are easy to review, modify, and replicate.

    Interrupted. Coffee. 

    Today instead of listening to my Warren Zevon station on Pandora I decided to let a few of my YouTube subscription videos play. This does pull my attention in and out of writing in a different way than what happens during the course of only listening to music. For the most part, listening to music while I write helps me focus; the music sits in the background and the writing stays in the forefront of my attention. Watching YouTube videos tends to pull my attention from one side of the screen where the video is playing and back to the other side of the screen where I am writing. This is an entirely different setup than on my Google Pixelbook Go, where the screen size does not really support split screen efforts. This Dell UltraSharp 38 inch curved monitor has worked really well. The specific model I received on March 11, 2019 was the U3818DW. I’m using the built-in KVM and could simply plug in my Pixelbook via USB Type-C, but my Corsair Cube works well enough for writing that it is not required.

    Whoa, my thoughts just wandered way off topic. Given that this is a stream of consciousness based writing session, that is particularly surprising. I really should be listening to music instead of watching videos about guitars and traveling. If you were wondering about my YouTube journey, then let me explain it for you. It pretty much falls into three categories: 1) technology related things, 2) guitar gear, and 3) travel content.

  • Muddling iterations

    This new writing strategy of spending the start of my day working on a weblog post seems to be working. Initially the data seems somewhat mixed based on the variation in size between the posts, but every day had one. Writing every day and sustaining that practice is the key to this endeavor being successful. Coming out of this quarantine with some solid perspective and maybe having learned something would be good. It was a rough stretch for most folks, and I’m sure this happened to other people too: no words would appear. My will to write was gone. Every spark of creativity had burned out and all that remained was enough strength to do the things that needed to be done on a daily basis. What ended up getting left behind was my daily writing routine. Some type of after action review is going to be needed on that one to prevent that sort of thing from happening again. It was not a very good experience. My method of advancing thoughts is to write them down and iterate. Perhaps the best way to say it is that all of this is a series of muddling iterations. None of it had much science to it or even a definable and repeatable routine.

    Today happens to be Friday. This weekend happens to be Memorial Day weekend. Now is a weird time to be able to go outside and properly commemorate Memorial Day. Well, appreciation of that hit the forefront of my thoughts and held on for just a minute. It took me just a second to get back into the writing groove. I started to really think about the strangeness of the times right now. Retrospective considerations of how we got here are important parts of piecing together an understanding of the now and the path forward. My Pandora internet radio station is streaming my Warren Zevon Radio station. Typically on Memorial Day weekend I have shifted from Zevon to Bruce Springsteen. Maybe that is a logical move or maybe it is just something that I have done. We will do our best to actively commemorate Memorial Day.

    Rewriting that last paragraph would probably be a good idea, but I’m going to let it stand. Now is not the time to second guess the creation of any prose. Maybe later, after things are back on track to a high output productivity based daily writing routine. Right now it is better to press forward and engage in some writing until I have enough content that a few cycles of iteration are possible. Sometimes the simple act of typing on the keyboard creates a writing rhythm. That happens as the act of thinking and typing cross together into something like thinking out loud. Over the years of my academic training that has been an outcome of all that effort. I tend to do my deepest thinking by writing and sketching out ideas. Even the act of muddling past the first expression to create and rethink what is being produced is a method of iteration. I’m trying really hard not to write the word tinkering. That seems like the exact wrong word to put on the page. Iterating on ideas to improve them is more noble and a better use of time. Simply tinkering with words on a page seems like a lesser act that might be happening right now during the creation of this paragraph.

    My intellectual aim at the moment is to start down a trajectory that builds toward something this weekend with the time that I have available. At present, my time is being invested back into creating and working on election models. All of that content will get posted on my GitHub and shared back out for the purpose of replication by other social scientists. Perhaps that is my attempt to allow them to iterate and expand my research in unexpected ways. That is the greatest part about contributing to academics or research in general. The thing you put in may change or be used in ways that are beyond the initial creation set down to paper and shared with others. That is how things get advanced beyond the contributions of a single person. In some ways that is why academics work toward advancing things. Not only does it open the door to different possible futures, but it is also a rewarding intellectual exercise.

    Interrupted. Work.