Systems and Meaning

Andrew Savikas recently wrote that one of the best ways to see the future of knowledge work is to watch how software developers work. It made me smile because I got to know Andrew nearly twenty years ago through his work at O’Reilly Media and their Tools of Change for Publishing conference—“watching the alpha geeks” was a core philosophical and strategic tenet for O’Reilly.

As Andrew points out, historically, that approach has been a reliable guide. Developers built tools for themselves, such as version control, distributed collaboration and chat protocols, and the rest of us later inherited them in polished, productised forms. Rsync became Dropbox. IRC became Slack. What begins as something only a developer could use eventually becomes something any of us can use without thinking too hard about it.

Andrew argues that we’re seeing the same pattern with AI. Developers using tools like Claude Code aren’t primarily writing code. They’re writing specifications, prompts and test plans. They’re reviewing outputs, refining instructions and iterating on drafts. They are, in effect, managing knowledge and managing teams, even if the “team” now includes language models. It’s what I wrote about last week, moving up the stack of knowledge work. So it won’t be too surprising that I think he’s right about the direction of travel.

But the more interesting question isn’t who will follow developers this time. It’s who is already structurally and cognitively aligned with working with AI agents. In a comment, Andrew identified management, organisational behaviour, and learning and development as relevant disciplines. I would add publishing to that list.

That might seem a counterintuitive suggestion. Not all software developers are enthusiastic about generative AI. Open-source communities have been vocal about licensing and ethics. And not all publishers are instinctively opposed to it; many are experimenting and getting good results. Still, in my experience of working with both groups, there are recognisable, dominant logics in each profession.

Developers often approach AI instrumentally: does it work? Does it improve throughput? Can it abstract away complexity? There is often a bias towards experimentation and iteration.

On the other hand, for publishers and writers, AI is often seen predominantly through a normative lens. Is it legitimate? How was it trained? What does it mean for copyright and authorship? For creative livelihoods and personal identity? That caution is understandable. Publishing’s core assets are words, rights and relationships. When your business model—or in the case of writers, your professional identity—depends on the ownership and integrity of text, a technology trained on it without consent, credit or compensation feels like an existential threat.

And yet. There is an extraordinary opportunity in a new, general-purpose technology whose substrate is language itself. Printing industrialised text. Word processors digitised it. The web distributed it. Search indexed it. But none of those technologies modelled and generated language as language. Whatever is happening behind the scenes of a large language model—tokens, weights, embeddings, probability distributions—the medium through which humans interact with LLMs is linguistic. The input and output are language. The refinement process is linguistic. Language is not just the interface. It is the operating environment.

Think about what that means in practice. When you use a search engine, the initial query is in language, as are the results that are returned, but the system runs on links and rankings. The words are a trigger; the mechanism is structural. Similarly, when you use a spreadsheet, the labels are linguistic, but the power lies in formulas and cell relationships.

With a large language model, the words are not just a trigger or a layer on top of other elements. They are the mechanism. The quality of the language you provide—its precision, relevance, structure and awareness of audience—directly shapes the quality of what comes back. Fluent writing is no longer just personal pride or cultural capital: it is operational leverage. And that is the medium publishers spend their professional lives refining.

The central paradox of publishing and generative AI is that an industry that is instinctively cautious about, or downright hostile to, generative AI because it cares about the integrity of authorship and language may also be unusually well equipped to work with it because it understands how language behaves.

I say this not as an abstract theory, but from extensive experience over the last three years. I’ve delivered dozens of workshops to hundreds of organisations and thousands of delegates. My observation is that there is a marked difference between publishing and non-publishing clients in the quality of their prompting. Non-publishers often default to shorter prompts, leaving more for the LLM to interpret. Writers and publishers need little encouragement to prompt well: their prompts provide context, avoid ambiguity, and consider audience and tone. They care about structure and throughline. Formalising instructions in configuration files and skills makes sense to people who have grown up with house style guides. Those instincts matter.
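To make that concrete, here is a minimal sketch, in Python, of what formalising a house style as a standing instruction might look like. The file layout, wording and function name are illustrative assumptions, not a description of any particular tool or publisher’s workflow.

```python
# A minimal sketch: a house style guide kept as a reusable asset and prepended
# to every request, so editorial standards travel with each prompt instead of
# being retyped ad hoc. All names and wording here are illustrative.

HOUSE_STYLE = """\
Audience: general trade readers, UK English.
Tone: warm, precise, no jargon.
Spelling: -ise endings; single quotation marks; no serial comma.
Structure: short paragraphs; one idea per sentence where possible.
"""

def build_prompt(task: str, draft: str) -> str:
    """Combine the standing style guide with a specific editorial task."""
    return (
        f"Follow this house style guide:\n{HOUSE_STYLE}\n"
        f"Task: {task}\n\n"
        f"Draft:\n{draft}"
    )

if __name__ == "__main__":
    print(build_prompt("Tighten this blurb for the back cover.", "Our new title explores..."))
```

In practice that style block would live in a versioned file or a tool’s configuration (a system prompt, or something like a Claude Code skill) rather than inline; the shift is from ad hoc prompting to reusable, reviewable instructions.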

Developers talk about “debugging” AI systems by adjusting prompts, isolating failure modes or refining constraints. Editors do something related but not identical. They refine language against the author’s purpose and intended meaning, the reader’s needs, or house style. Prompting, by contrast, often involves discovering what you want through structured interaction with the system. The cognitive loops are not the same. But both demand comfort with iteration, version comparison, and disciplined refinement of language. In generative AI, meaning becomes something you shape through carefully structured linguistic input. Editors are instinctively familiar with that discipline.

So if writers and publishers are well suited to working with AI, why aren’t they leading? Among many, there is a strongly held view that generative AI is built on massive theft of intellectual property, and that any accommodation or experimentation amounts to acquiescence. Those disputes will ultimately be settled by the courts and by society. But legal resolution and capability development can and should run in parallel. If publishers and writers remove themselves from practical experimentation while waiting for legal clarity, they risk more than delay or an adverse ruling. Norms, workflows and expectations around how language models are used will be shaped by those who are actively working with them. If the industry is not at the table while those norms are being developed, it will have less influence over how they evolve.

Part of the answer may also be cultural. Editorial training optimises for polish and precision. AI skill, at least today, rewards messy iteration, tolerance for imperfect drafts, and rapid experimentation. Normative caution may not just slow adoption; it may also inhibit the kind of failure-tolerant practice that builds technical fluency.

There is also a difference between shaping sentences and thinking in systems. Developers tend to treat prompts as modular components, version-controlled assets, parts of repeatable workflows. Most publishers that I work with are not yet operating at that level of process abstraction.
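As a rough illustration of that level of process abstraction, here is a small Python sketch in which prompts are treated as named, versioned components composed into a repeatable pipeline. The asset names, version numbers and templates are assumptions made for the sake of the example, not a real publishing workflow.

```python
# A minimal sketch of treating prompts as modular, versioned assets rather than
# one-off messages. Structure and names are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class PromptAsset:
    name: str
    version: str   # bumped and reviewed like any other versioned file
    template: str  # placeholders filled in per task

    def render(self, **kwargs: str) -> str:
        return self.template.format(**kwargs)

# Reusable components, each small enough to review and improve on its own.
STRUCTURAL_EDIT = PromptAsset(
    name="structural-edit",
    version="1.2.0",
    template="Assess the structure of this {genre} draft for {audience}:\n{draft}",
)
COPY_EDIT = PromptAsset(
    name="copy-edit",
    version="2.0.1",
    template="Copy-edit to house style (UK spelling, no serial comma):\n{draft}",
)

# A repeatable workflow is just an ordered composition of those assets.
pipeline = [STRUCTURAL_EDIT, COPY_EDIT]
for step in pipeline:
    print(f"--- {step.name} v{step.version} ---")
    print(step.render(genre="memoir", audience="general readers", draft="Chapter one..."))
```

The point is not the specific code but the posture: each prompt is small enough to test and refine on its own, and the workflow is the composition of those pieces rather than a single heroic prompt.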

This is where Andrew’s advice holds. Watch the developers. There is a lot that publishers can learn from how they structure their interaction with AI agents. But as he suggests, it is not a one-way lesson.

If AI’s core substrate is language, then those trained to think rigorously about language should not be peripheral to how AI changes knowledge work. The opportunity is not for publishers to become engineers, but to combine editorial instincts with developer-style process thinking: to treat prompts as briefs, but also as reusable assets; to apply structural sensitivity to meaning, but within repeatable workflows. That posture is different from either instinctive rejection or uncritical adoption. It is pragmatic, disciplined, and strategically alert. In my notes on the IPG Spring Conference last week, I reflected on the structural challenges for publishers and asked who is radically rethinking their business model, costs, product and unit economics. As contentious as the thought will be to many, it’s interesting to think about what a small publisher built from the outset around editorial vision, systems thinking and AI capabilities could look like.

The frontier technology runs on language. Developers can teach publishers how to think in systems. Publishers can teach developers how to think in meaning. That exchange will only happen if publishers are in the room. And right now, too many of them are standing outside it.

Written on February 16, 2026