Execution is Free

I had a couple of conversations this week that inspired a quick coda to my previous post on using Claude Code for prototyping ideas. The day after I posted it, I spoke with Tom, one of my most thoughtful and creative friends, and someone about as far from AI boosterism as it is possible to get. A decade ago, he had spent several thousand pounds with a web developer building a location-aware mobile website. He asked how much information Claude Code would need to build something similar. I copied in his sixty-word WhatsApp message, answered the half-dozen clarifying questions Claude asked in response, and in under two minutes it had replicated the core functionality.

Over the next couple of hours, working asynchronously, we prompted and pushed a series of iterative improvements. Tom’s final WhatsApp message on Tuesday night read: “today actually blew my mind.”

His reaction stuck with me. The next day, on a train to Cambridge, I found myself discussing it with Jill Jones, a fellow director at the scientific publisher on whose board I sit. Jill has been involved in technology publishing for decades, and we had a really interesting conversation about what this shift actually means.

On the face of it, Claude Code is epically, existentially bad news for Tom’s developer: who pays thousands for a website when a non-specialist can prompt one in minutes? But the conversation with Jill helped me think about this differently.

What are you paying for in this scenario? Code? If so, the AI is going to beat the human every time. But I don’t think many people are paying for code alone. You’re paying for code, experience, discernment and liability.

I can prompt code that works, and for low-risk use cases that may be enough. But for anything more complex, I might see that something works without spotting the bug, the security issue, the architectural fragility, the technical debt: the kind of design decision that works today but becomes impossible to maintain or scale. Jill’s example was a chemistry textbook in which a single error in a formula created a safety risk. That’s the kind of thing an experienced developer or reviewer would (hopefully) recognise instantly, often because they’ve been burned by it before. Code is becoming cheap. Judgment remains valuable.

Just as importantly, most AI models will try to execute any lawful instruction. They’re not engineered to push back on an idea. That’s a fine line for a developer or consultant to walk: you don’t generally prosper by questioning every instruction. But, used sparingly, pushing back against a client (especially when it cuts against your short-term interest but serves their long-term interest) can be a radical act of honesty and trust-building. Sometimes the most valuable contribution isn’t building the thing, but saying why it shouldn’t be built, or why it should be built differently. And if the website fails, Tom has recourse against his developer. If the Claude Code version falls over, Tom has to go back to the agent and fix it himself. You’re not just buying expertise; you’re buying someone who will take responsibility when things go wrong.

All of which starts to suggest a broader pattern. If the cost of execution trends toward zero, professional value doesn’t disappear; it moves up the stack. From coding to framing. From research to insight. From building to governing. From delivering to helping decide what is worth delivering in the first place. That feels true for developers. It also feels uncomfortably true for strategists, directors and other knowledge workers. I wonder how many of us in the latter category are getting out in front of this shift.

Written on February 5, 2026