Last night I spent some time working and thinking deeply about election engagement modeling to predict voting patterns. I’m hoping that later today the ideas in the pressure cooker of my thoughts will have evolved enough to be shareable on GitHub in the form of a Jupyter notebook. Most of the time I have thought out how I’m going to code or create something before I ever touch the keyboard. After it is all typed up and working, taking the next step of making the model shareable online in a repository will help put a date stamp on it and make it rather official. People have built some reliable and some very poor models for election prediction. It seems like now is as good a time as any to publicly throw my hat into the ring on this one. This notebook will include my first attempt to run a map-driven model in a Jupyter notebook. That alone should be fun: figuring out step by step how to load and model based on geographic data tables. Part of the fun of this exercise is learning a little more about how to use Jupyter notebooks and doing something that I would not normally spend my time doing. Right now, having a few coding adventures is probably the right thing to do with my time.
Colorado has four major wildfires burning right now, and the smoke from those fires has made the air quality in Denver questionable recently. You can see the statements about air quality on the official Colorado Department of Public Health & Environment website. The statements basically say that visibility and air quality have been impacted. Originally I had those previous sentences just hanging off the first paragraph. It took me a second to realize the topic had entirely changed and that a new paragraph was justified. I could probably continue to provide some supporting sentences or thoughts about the air quality right now, but you can imagine what a campfire smells like and extrapolate that to an entire region.
Let’s jump back to the key topic at hand for the day. I’m going to start learning how to use GeoPandas and Geoplot (or maybe Matplotlib) to create some sweet visualizations. I’m going to start out small with a few different examples before working up to building out a 50-state electoral college prediction visualization. It seems like it would be a good skill (or at least a fun one) to have going forward. My goal for this effort is to drop some of these examples on GitHub along the way. I always try to walk step by step through the example to ensure that it is repeatable and that somebody can click from step one to the last step and understand what happened. This is great for both helping other people and creating repeatability within the research effort. Having really solid Jupyter notebook documentation reduces the barrier for replication within research, and that is fundamentally a healthy direction to take within academic research. Somebody could easily adapt my methods and change the data, or run it again with the same data to verify things happened and worked as expected. The one problem with this method is that everything in a Jupyter notebook is like a snapshot in time. Things will change within the dependencies, and at some point the notebook will have errors and start failing on some deprecated functionality. That is one of the most frustrating parts of coding: you have to constantly rework things just to keep them current.
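To give a flavor of the kind of starting-small step described above, here is a minimal sketch of a choropleth-style map in GeoPandas. It assumes `geopandas`, `shapely`, and `matplotlib` are installed, and the two square "states" and their `margin` values are entirely made-up toy data; a real notebook would load actual state boundaries from a shapefile or GeoJSON instead.

```python
import geopandas as gpd
from shapely.geometry import Polygon

# Toy data: two square "states" with hypothetical vote margins.
# A positive margin means candidate 1 leads in that state.
states = gpd.GeoDataFrame(
    {
        "state": ["A", "B"],
        "margin": [0.04, -0.02],
        "geometry": [
            Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]),
            Polygon([(1, 0), (2, 0), (2, 1), (1, 1)]),
        ],
    },
    crs="EPSG:4326",
)

# Color each polygon by the margin column using a diverging colormap.
ax = states.plot(column="margin", cmap="RdBu", legend=True)
ax.set_axis_off()
```

The same pattern scales up: swap the toy polygons for real state geometries, join in a table of predictions on the `state` column, and the `plot(column=...)` call stays essentially unchanged.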
A 98-day publishing streak continues