Prompter’s Intent
I’ve been leading training sessions on generative AI for nearly three years now, and over that time the advice I’ve given on prompting has been relatively consistent: good results generally depend on a degree of detail and specificity about context, objective, and outputs. So I was interested to see Ethan Mollick note a change of emphasis in the developer documentation for GPT-5.5, OpenAI’s latest thinking model.
Practically, that shift will add complexity to prompt libraries and other efforts to share best practices within organisations running a mixed economy of models: prompters will need to match the right prompt structure to each model. But it’s also interesting because this way of thinking about giving instructions isn’t new at all, and it relates to one of my favourite periods of history: Prussia’s recovery after its defeat by Napoleon and its emergence as the dominant military force in nineteenth-century Europe.
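In concrete terms, a shared prompt library in that mixed economy might look something like the sketch below. It is a hypothetical illustration only: the model names, template fields, and `build_prompt` helper are invented for this example, not drawn from any vendor’s documentation.

```python
from dataclasses import dataclass

@dataclass
class PromptTemplate:
    style: str     # e.g. "detailed-instruction" or "intent-and-goal"
    template: str  # a format string filled in per task

# Hypothetical pairings of model and prompt structure; a real library
# would be populated from an organisation's own testing.
PROMPT_LIBRARY = {
    "legacy-instruction-model": PromptTemplate(
        style="detailed-instruction",
        template=(
            "Context: {context}\n"
            "Objective: {objective}\n"
            "Required output format: {output_format}\n"
            "Follow these steps exactly: {steps}"
        ),
    ),
    "new-thinking-model": PromptTemplate(
        style="intent-and-goal",
        template=(
            "Goal: {objective}\n"
            "Relevant context: {context}\n"
            "Use your judgement on method, and flag anything that blocks the goal."
        ),
    ),
}

def build_prompt(model_name: str, **fields: str) -> str:
    """Select the template that matches the target model and fill it in."""
    return PROMPT_LIBRARY[model_name].template.format(**fields)

# The same task, automatically phrased to suit the chosen model.
task = {"context": "quarterly sales figures in sales.csv",
        "objective": "summarise the three biggest revenue changes"}
print(build_prompt("new-thinking-model", **task))
```

The point of the indirection is that the choice of structure travels with the model: colleagues drawing on the library don’t each have to rediscover which style suits which model.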
Even if you’re not interested in military history, you may have heard Helmuth von Moltke the Elder’s aphorism that “no plan of operations can be at all relied upon beyond the first encounter with the enemy’s main force”, often simplified to “no plan survives contact with the enemy”—or in Mike Tyson’s earthier version, “everyone has a plan until they get punched in the mouth.”
All versions of the thought recognise that even the best-constructed plans go wrong, as Prussia’s did when it met Napoleon Bonaparte’s armies at Jena-Auerstedt in 1806. As part of a far-reaching programme of military reforms following that defeat, Prussia put in place what became the model for a professional general staff selected by competitive examination rather than social status, institutionalised wargaming as a planning and training tool, and later pioneered the use of railways for rapid mobilisation. One part of that professionalisation was the command philosophy that came to be called Auftragstaktik, or mission-type tactics (as distinct from Befehlstaktik, or order-type tactics). The idea was that rather than giving specific, rigid instructions about every action to be carried out, commanders would specify their intention and goal, and their subordinates would have the discretion to adopt whatever tactics made sense to realise that goal. The principle remains part of military doctrine today: the British and American armies call it mission command. One of my favourite books on strategy, Stephen Bungay’s The Art of Action, looks at how to implement it in business.
The point is not the military history itself, but what the philosophy might suggest about using GPT-5.5 and similar models well. Mission command works only when certain prerequisites are in place: it assumes competence, trust, shared situational understanding, and shared risk tolerance. I’d suggest the same will be true for delegating tasks to AI models and agents. Situational understanding and risk tolerance (the context provided by the user, whether prompt by prompt or as a system preference) will be critical. But the other prerequisites are harder. In the military case, competence and trust are developed and tested systematically; I wonder how many users will invest comparable time in getting to know the new models and evaluating their accuracy in particular domains, as opposed to using them unthinkingly or setting them aside after one bad result. Successful users will take the time to work out the right mix of models based on capabilities and costs, and the right prompting approach for each: shorter prompts, certainly, but above all a move away from precise instructions towards creating the conditions in which delegated work can succeed.
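To make the competence-and-trust point concrete, the sketch below shows the kind of lightweight domain evaluation that systematic testing implies: a curated set of questions with known answers, scored per model before any real work is delegated. Everything here is illustrative, and `call_model` is a hypothetical stand-in for whichever API client you actually use.

```python
def call_model(model_name: str, prompt: str) -> str:
    """Hypothetical stand-in for a real API call; wire this up to your
    provider's SDK before running the evaluation."""
    raise NotImplementedError

# A curated set of domain questions with known answers. In practice this
# would be dozens of cases drawn from real work in your own domain.
DOMAIN_CASES = [
    {"prompt": "In what year was Prussia defeated at Jena-Auerstedt?",
     "expected": "1806"},
    {"prompt": "What is the German term for mission-type tactics?",
     "expected": "Auftragstaktik"},
]

def domain_accuracy(model_name: str, cases: list[dict]) -> float:
    """Crude substring scoring, but enough to compare models systematically
    rather than trusting or discarding one on a single interaction."""
    correct = sum(
        case["expected"].lower() in call_model(model_name, case["prompt"]).lower()
        for case in cases
    )
    return correct / len(cases)
```

Even a crude harness like this beats the alternative the paragraph warns about: forming a judgement about a model, for good or ill, from a single result.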