Econometric election models

Thank you for tuning in to this audio-only podcast presentation. This is week 136 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “Econometric election models.”

It has been a few weeks since we started by digging into a good Google Scholar search, and this topic is just the thing to help open that door [1]. My searches for academic articles are always about finding accessible literature that sits outside paywalls and is intended to be read and shared beyond strictly academic circles. That is easier for some topics than others; it helps when a topic lends itself to active use cases instead of purely theoretical research. Most of the time these searches into what is happening at the edge of the possible turn up applied research. Yes, that type of reasoning places me squarely in the pracademic camp of intellectual inquiry.

That brief chautauqua aside, my curiosity here is how we might build econometric election models, or other model inputs, to feed into large language model chat systems as prompt engineering, either to help predict elections or to have the system interpret and execute the models. This could be a method for introducing extensibility, or at least for applying a targeted model effect to seed a future methodology within the prompt engineering space. As reasoning engines go, an econometric frame could be an interesting proxy model within generative AI prompting. It is a space worth understanding a little better as we approach the 2024 presidential election cycle.

I’m working on that type of effort here as we dig into econometric election models. My hypothesis is that you can write out what you want to explain in longer form as an input prompt for a large language model. A more direct way of saying that is that we are building a constitution for the model based on econometric models, and potentially proxy models, then working toward extensibility and agency by introducing those models together. For me that is a very interesting space to open up and kick the tires on over the next 6 months.

Here are 6 papers from that Google Scholar search that I thought were interesting:

Mullainathan, S., & Spiess, J. (2017). Machine learning: an applied econometric approach. Journal of Economic Perspectives, 31(2), 87-106. 

Fair, R. C. (1996). Econometrics and presidential elections. Journal of Economic Perspectives, 10(3), 89-102.

Armstrong, J. S., & Graefe, A. (2011). Predicting elections from biographical information about candidates: A test of the index method. Journal of Business Research, 64(7), 699-706. 

Graefe, A., Green, K. C., & Armstrong, J. S. (2019). Accuracy gains from conservative forecasting: Tests using variations of 19 econometric models to predict 154 elections in 10 countries. PLOS ONE, 14(1), e0209850.

Leigh, A., & Wolfers, J. (2006). Competing approaches to forecasting elections: Economic models, opinion polling and prediction markets. Economic Record, 82(258), 325-340. 

Benjamin, D. J., & Shapiro, J. M. (2009). Thin-slice forecasts of gubernatorial elections. The Review of Economics and Statistics, 91(3), 523-536.

Beyond those papers, I read some slides from Hal Varian on “Machine Learning and Econometrics” from January of 2014 [2]. The slides focused on modeling human choices, and some time was spent on the premise that the field of machine learning could benefit from econometrics. To be fair, since that 2014 set of slides you don’t hear people in the machine learning space mention econometrics that often; most people reach for Bayesian arguments instead.

On a totally separate note for this week, I was really into running some of the Meta AI Llama models locally on my desktop [3]. You could go out and read about the new Code Llama, an interesting model trained and focused on coding [4]. A ton of researchers got together and wrote a paper about this new model called “Code Llama: Open Foundation Models for Code” [5]. That 47-page missive was shared back on August 24, 2023, and people have already started to build alternative models. It’s an interesting world in the wild wild west of generative AI these days. I really did install LM Studio on my Windows workstation and run the 7-billion-parameter version of Code Llama to kick the tires [6]. It’s amazing that a model like that can run locally and that you can interact with it using your own high-end graphics card.
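LM Studio can expose the loaded model through a local OpenAI-compatible HTTP server, so a minimal sketch of talking to that local Code Llama instance looks like the following. The URL, port, and model name below are placeholders based on LM Studio’s typical defaults; your local install may report different values, so treat them as assumptions.

```python
import json
import urllib.request

# Sketch: query a Code Llama model served locally by LM Studio.
# LM Studio's local server speaks an OpenAI-style chat completions API;
# the address and model name here are placeholders for a local setup.
LOCAL_URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt: str, model: str = "codellama-7b") -> dict:
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local_model(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server running in LM Studio, `ask_local_model("Reverse a string in Python")` would return the model’s reply without any data leaving your machine, which is the whole appeal of running it on your own graphics card.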

What’s next for The Lindahl Letter? 

  • Week 137: Tracking political registrations
  • Week 138: Prediction markets & time-series analysis
  • Week 139: Machine learning election models
  • Week 140: Proxy models
  • Week 141: Expert opinions

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
