Day 2 working with the OpenAI Codex Beta

Earlier this morning I used the OBS application to record my efforts to mess around with the OpenAI Codex Beta. For the most part I have been working in the Codex JavaScript Sandbox, asking the API to return things related to fractals and searching out some encryption-related elements. The lossless recording produced about 30 gigabytes of AVI video for five minutes of footage. That is an epic amount of data for such a short video. I’m still not entirely sure why such a massive difference exists between the "indistinguishable quality" and "lossless" settings; it really is about a 10x difference in file size between the two recording methods. Uploading that 5-minute video to YouTube took about 2 hours, and the crunching that is about to happen in the background over on the YouTube side of the house will be epic. I’m going to record a few more little videos this weekend, and that is going to generate a huge amount of video data.
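To put that 30 gigabytes in perspective, the back-of-the-envelope bitrate works out like this (a quick sketch using the rough figures above, not exact measurements from the recording):

```javascript
// Back-of-the-envelope bitrate for ~30 GB of lossless AVI over 5 minutes.
const fileBytes = 30 * 1e9;  // ~30 GB recorded (decimal gigabytes)
const seconds = 5 * 60;      // five minutes of footage
const megabitsPerSecond = (fileBytes * 8) / seconds / 1e6;
console.log(`${megabitsPerSecond} Mbps`); // roughly 800 Mbps sustained to disk
```

Sustaining that kind of write rate is exactly why lossless captures balloon so fast compared to a compressed "indistinguishable quality" preset.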

Day 1 working with the OpenAI Codex Beta

Welcome to day 1 of my efforts working with the OpenAI Codex Beta.

I’m starting out logged into the beta dashboard.

The first thing I noticed is that my interface is a little different from what I watched on Machine Learning Street Talk.

Tim and Yannic are working with the Codex JavaScript Sandbox. My beta dashboard only takes me to the Playground area, where you can experiment with the API.

Well, a couple of quick Google searches later, it turned out that user error on my part was what kept me away from the sandbox. I simply did not know enough to go directly to the sandbox.

I downloaded a copy of “The Declaration of Independence” and saved it as a PDF on my desktop. My big plan for tonight is to make an encryption application and have it encrypt that file from my desktop. It’s not a super ambitious plan, but I think it is a good place to start.

Thinking about that Codex demo

Yesterday, I watched the OpenAI Codex demo on my living room television; it felt like that important a demonstration of a new technology. It is almost an implementation of conversational coding. Watching a generative, interactive, code-interpolating language model work during a demo was super interesting. That technology could very well be the future of coding interactions with a computing interface. I’m really worried about the ethical considerations of such a technology; that was the first thing that came to mind after the realization of what they had built. The OpenAI team should be working diligently on an ethical boundaries module, and that course of action should probably be a top priority for the entire group. It is entirely possible that right now the Boston Dynamics dog has a working machine learning Spot API and Codex could functionally teach and command it. Consider for a moment: “What are the ethical boundaries of how the OpenAI Codex should be used to issue programming and commands through the Spot API?” To me that is a very real concern and something that should be deeply considered before more complex robotic implementations are built out and operationalized.

Today is one of those days where the words are not flowing for some reason. I’m sitting in front of the screen wondering about what is next, but nothing is really happening. At this point in the writing cycle I have taken to typing a few words about the nature of being blocked, in the hope that the simple act of writing will free up some stream-of-consciousness prose. That is one possible outcome of this typing exercise; things could just as easily fall apart. A whole page of prose about the experience of writing seems lacking in terms of communicating something interesting, and writing for the sake of writing can end up being very banal and off-putting to a reader. I know that my general plan was to keep taking notes in Google Keep so I could go back and work out some previous notes when the writing process slows down.