Twenty-Minute Prototyping
Last week, Ethan Mollick posted about creating a functioning game with Claude Code and a one-shot prompt. As a first project with Claude Code, it seemed ambitious but appealing. I started with a single-paragraph prompt to develop a short, simple adventure game in the style of the Sierra On-Line games I remember from the late eighties, with an AD&D 2nd Edition vibe. Getting a first, testable version was almost indecently fast. It then took several rounds of iteration to improve it, including a separate workstream to create retro-style screens (Claude doesn’t have native image generation). The total active investment of time was about twenty minutes, with Claude Code running in the background while I did other things.
This wasn’t an optimal process: I was using a simple prompt in line with Mollick’s original post, which left a lot of planning and decision-making to Claude Code, exactly the opposite of the approach I would advise clients to take in one of my training courses. I twice exhausted my session usage allowance on a Claude Pro plan chasing wrong directions, and burned through another £10 of extra usage because I was too impatient to wait for it to reset. Quite a lot of that time went on diagnosing issues with my GitHub setup that a competent developer would have spotted much faster.
The result is far from perfect, but it’s playable and completable. Several friends with a similar frame of cultural reference said nice things about it. (The code is all in my GitHub repo, including a walkthrough file if you get stuck.)
Let’s assume you’re not a middle-aged nerd: what is the wider relevance of this? Well, my immediate impression was: if something like this can be delivered with a casual, twenty-minute commitment of time, what could be done with a more deliberate process over twenty hours or twenty days?
The next day, I tried using Claude Code for a simpler and more applied set of tasks: building a two-column layout for blog pages on this website, creating category and date-based archives, and creating a new include file for sidebar content. As my site is built in Jekyll and hosted through GitHub Pages, this was an ideal use case. Here, I was much more successful. Unlike the game, this would have been within my competence to figure out manually, but it was too big a job to prioritise, and too small a task to outsource to someone more competent. Each of the elements was better defined and more clearly prompted than the game. I stayed well within my usage limits. And with the exception of one lingering error around Liquid variables, all of the tasks that I delegated to Claude Code were delivered at the first time of asking.
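For the curious, the sidebar include is the simplest of the three pieces. Here is a minimal sketch of what that kind of file might look like; the filename `_includes/sidebar.html`, the markup and the five-post limit are illustrative rather than the exact contents of my repo:

```liquid
{%- comment -%}
  _includes/sidebar.html — illustrative sketch, not the actual file in my repo.
  Pulled into the two-column post layout with {% include sidebar.html %}.
{%- endcomment -%}
<aside class="sidebar">
  <h3>Recent posts</h3>
  <ul>
    {%- for post in site.posts limit: 5 -%}
      <li><a href="{{ post.url | relative_url }}">{{ post.title }}</a></li>
    {%- endfor -%}
  </ul>

  <h3>Categories</h3>
  <ul>
    {%- for category in site.categories -%}
      <li><a href="/categories/#{{ category[0] | slugify }}">{{ category[0] }}</a></li>
    {%- endfor -%}
  </ul>
</aside>
```

Nothing exotic: standard Liquid loops over `site.posts` and `site.categories`, which is exactly the sort of well-bounded, testable task that suits this way of working.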
There was a moment doing this where AI genuinely felt like a productivity superpower: at the same time, I had a ChatGPT Deep Research query running, Claude Code was writing code to a GitHub branch, and I was fully engaged in a conversation with a client, just checking back on the (very good) results when I was done. I didn’t feel I was context-switching in the traditional sense. The cognitive load stayed consistent because the work stayed at the same level: strategic direction, not tactical execution. And the impact was being able to do three very different tasks in parallel rather than in series.
This isn’t going to turn me into a developer, and it shouldn’t. What it has done is widen the category of things I can reasonably do on my own: from little experiments, to small, well-scoped, testable pieces of real work that would otherwise sit in the too-small-to-outsource, too-big-to-bother pile. Used carelessly, that comes with risks, especially around invisible technical debt, where you don’t yet know enough to recognise when something is brittle or merely ‘working for now’. But used deliberately, with clear intent and bounded scope, it shifts the economics of solo and small-team work. For people who sit across strategy and execution, that feels like a meaningful change: not replacing expertise, but compressing the distance between an idea and a working artefact enough to make many more ideas worth testing.