AI Found a Bug in My Code

I was playing around with the Salesforce CodeGen language model. It is designed to generate new code from a prompt, but I wanted to see how it would analyze existing code.

I had the model look at some existing code and score the probability of each token appearing given the previous tokens. I also had it suggest its own token at each position and compared the probability of my token to the probability of the model's token.
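In essence this is a single forward pass over the file: the logits at each position give a probability for the token that actually appears next, and for whatever token the model would have preferred instead. Here is a minimal sketch of that idea, assuming the Hugging Face transformers library and the Salesforce/codegen-350M-multi checkpoint; the file name is just a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-multi"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()

source = open("events.js").read()  # placeholder for the file being analyzed
ids = tokenizer(source, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

# Log-probabilities over the vocabulary at every position.
log_probs = torch.log_softmax(logits, dim=-1)

for pos in range(1, ids.shape[1]):
    actual_id = ids[0, pos].item()
    # How likely the token that is actually in the file was, given everything before it.
    actual_logp = log_probs[0, pos - 1, actual_id].item()
    # The token the model itself would have put here instead.
    suggested_id = log_probs[0, pos - 1].argmax().item()
    suggested_logp = log_probs[0, pos - 1, suggested_id].item()
    print(
        repr(tokenizer.decode(actual_id)),
        f"logp={actual_logp:.2f}",
        "| model prefers",
        repr(tokenizer.decode(suggested_id)),
        f"logp={suggested_logp:.2f}",
    )
```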

I found the smallest file in my codebase and fed it in. Brightness represents how unlikely each token is; redness represents how much more likely the model's own token would be. Hover over the tokens to see what the model would have suggested.
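The two numbers behind the colors are essentially the ones computed above. One plausible mapping (the scaling constant is arbitrary, just to squash values into a usable range):

```python
def token_colors(actual_logp: float, suggested_logp: float) -> tuple[float, float]:
    # Brightness: surprisal of the token that is actually in the file.
    brightness = min(1.0, -actual_logp / 10.0)
    # Redness: how much more likely the model's own pick was (0 when they agree).
    redness = min(1.0, (suggested_logp - actual_logp) / 10.0)
    return brightness, redness
```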

The code snippets here and below require JavaScript to be enabled. Sorry, I was too lazy to render them into plain HTML.

I tried to make the code more readable for the AI by adding some comments.

While the comments themselves are a surprise to the AI, the redness around them has diminished, except for the return statement, which gets even redder.

I did not plan this, but it turns out there is a bug in my code: when an event listener is removed during dispatch, I return from the function instead of moving on to the next listener. Hover over the suspicious code and the model correctly suggests continue.
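My actual code is JavaScript, but the shape of the mistake is easy to show. A hypothetical dispatch loop with the same flaw might look like this (in Python, to match the sketches above):

```python
def dispatch(event, listeners):
    # Iterate over a snapshot so listeners can unsubscribe during dispatch.
    for listener in list(listeners):
        if listener not in listeners:
            # This listener was removed by an earlier callback.
            # Bug: returning abandons every remaining listener.
            # The model's suggestion, continue, would skip only this one.
            return
        listener(event)
```

With return, one unsubscribe silently suppresses the event for every listener registered after it; continue skips only the listener that is already gone.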

I am going to use this for code review from now on.