Becoming an AI-Assisted Coding Convert
I took a long time to accept the idea of AI assistance for technical tasks (coding, debugging, rollouts, bug triage, etc.). I argued against AI for coding until the middle of 2023, and I didn’t truly get it until earlier this year.
Some of my resistance came from the fact that the things people showed me AI could do, I had already automated. When someone demonstrated that the AI could create the boilerplate for a class, I wanted to respond, “Rails calls that a scaffold, and it generates the tests, too.” I wasn’t excited because the demos weren’t using AI for things I actually needed.
However, my primary objection to AI-assisted development was accuracy, or rather, the lack of it. In one of my first exchanges with a generative AI, I quickly got it to produce an (incorrect) proof that P == NP. I continued to experiment with AI with my tester hat on, and let’s just say my early introduction to AI-assisted development did not inspire confidence.
Despite my strong initial skepticism, I’ve found myself using AI coding assistance much of the time over the last nine months.
So what changed?
Partly, the models got better. But also, I changed. I discovered how these tools can fit into my personal development workflow and save me time. I also found that the current level of accuracy wasn’t as big an issue as I initially thought.
So, how do I use these tools?
First off, I pretty much only interact with the AI through in-editor chat. I have years of pair programming experience, so treating the AI as a less skillful pairing partner feels natural to me. If I want to write a method, I don’t start in the editor; I begin by prompting the AI in the chat, much like asking a pairing partner, “How should we write this method?” I give feedback on the code the AI returns; sometimes the feedback is specific, and sometimes I just ask questions. I iterate back and forth with the AI like this until I’m happy that the code works and is well designed. Only then do I copy it over to my editor and test it. My workflow may not be a typical assistance workflow, but it works for me.
Also, I mostly use AI for research. It turns out that a lot of the time I spend “coding” is actually spent looking things up in docs and on the internet. I think we’ve all reflexively opened a browser tab to look up a method signature or a cryptic error message while debugging. Instead of reaching for the browser, I now use the AI chat in the editor. It seems like a small thing, but avoiding the context switch keeps me focused. The AI gives me answers faster than I could find them myself, and when relevant, it can frame those answers in the context of my codebase.
I also don’t use AI all the time. It’s obvious in retrospect, but one of my “a-ha” moments with AI was realizing it isn’t all or nothing. I can use the AI where it helps and ignore it where it doesn’t. Personally, I don’t use it for languages and frameworks I know well. I’m faster without it, and I enjoy coding with those tools, so I let myself have fun and skip the AI. By contrast, I use AI extensively when working in languages where I’m less skilled and on tasks I find less enjoyable. For example, despite years of trying, I’m barely a mediocre front-end JavaScript developer, so I use AI for JavaScript. I also use AI when I’m learning a new framework or tool. The AI doesn’t get frustrated when I ask it to explain the same bits of code over and over, and it doesn’t find the bizarre analogies I use to understand the world annoying.
But what if it’s wrong?
So, that’s how I use AI assistance, but how have I dealt with my objection about accuracy?
I’ll be honest: I have some cognitive dissonance around the fact that current accuracy levels are good enough to be helpful to me. I’ve spent a lot of time thinking about how I can find AI useful and, at the same time, be frustrated that it so often feels confidently wrong.
I won’t claim to know precisely why both of these things can feel true, but two ideas have helped me accept the cognitive dissonance. First, humans tend to remember negative things more than positive ones and extreme experiences more than mundane ones. So there’s a good chance I forget all the times the AI gave correct but unremarkable answers and remember every completely off-the-wall hallucination, even when those are significantly less common.
The second thing that’s helped me accept the cognitive dissonance is thinking about time spent versus time saved. This is easiest to understand with an example, so let’s do a thought experiment with some round numbers. Let’s assume each prompt costs me 30 seconds, including writing it and assessing whether the response was useful. Let’s also assume that the AI I’m using is especially bad and is only correct 10% of the time. Coding assistants are correct far more often than this, but 10% makes the math easy. Finally, let’s assume that the AI saves me 10 minutes when it is correct.
Submitting 20 prompts to the AI takes me 10 minutes (20 × 30 seconds = 600 seconds). Since the AI is wrong 90% of the time, 18 responses will be unhelpful and 2 will be useful. Those two helpful responses save me 20 minutes. Subtracting the time it took to use the AI (10 minutes) from the time saved (20 minutes), the AI comes out 10 minutes ahead even though it was wrong most of the time. Put another way, each prompt costs 30 seconds but has an expected payoff of one minute (10% × 10 minutes), so on average the AI pays for itself.
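If it helps to see that arithmetic laid out, here’s a tiny sketch of the same back-of-the-envelope model. The three numbers are just the round assumptions from the thought experiment above, not measurements of any real assistant:

```python
# Back-of-the-envelope model of the thought experiment above.
# All three numbers are the round assumptions from the post,
# not measurements of any real coding assistant.
PROMPT_COST_MIN = 30 / 60    # minutes to write a prompt and judge the answer
HIT_RATE = 0.10              # fraction of responses that are actually useful
MINUTES_SAVED_PER_HIT = 10   # minutes a useful answer saves

def net_minutes_saved(prompts: int) -> float:
    """Expected net time saved (in minutes) across `prompts` prompts."""
    time_spent = prompts * PROMPT_COST_MIN
    time_saved = prompts * HIT_RATE * MINUTES_SAVED_PER_HIT
    return time_saved - time_spent

print(net_minutes_saved(20))  # => 10.0, matching the example above
```

The model also makes the break-even point easy to see: prompting is worth it whenever the hit rate times the minutes saved per hit exceeds the per-prompt cost.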
As I said above, this is how I’ve been thinking about it to help me make sense of my own observations. It may not accurately reflect my reality or anyone else’s. But thinking about things this way has helped me understand why I find AI useful and also regularly get frustrated with the answers it gives. Both can be true.
How about you?
I’d love to hear how others use AI assistance for dev tasks, since I suspect my preferred workflow isn’t typical. Have other folks run into the same cognitive dissonance I have, where the AI is both helpful and feels like it’s constantly incorrect? How have you reconciled those two things?