Editors, Authors and Trust
I’ve been trying to write something here roughly once a week, largely but not exclusively about my experience working with AI in publishing. I hadn’t planned to post anything else this week, but John Willshire linked to a post from Rachel Andrew on managing publishing operations in the context of generative AI, and I wanted to respond with some thoughts about how the issues it raises affect book publishing.
Rachel and I are looking at a common issue from distinct but adjacent perspectives: content operations in online/technical publishing for Rachel, books for me. The problem she articulates is immediately recognisable: LLM outputs strain the implicit contract between an expert contributor and an editor. Once, an editor could rely—not perfectly, but reasonably—on a contributor to create accurate material grounded in their own knowledge. The first-order effect of generative AI is that editors are having to spend more time scrutinising that material because of the propensity of LLMs to generate plausible nonsense.
Reading Rachel’s piece made me think about what happens next. If the first-order effect is increased scrutiny, what might the second-order effect be if editors feel less able to assume cognitive ownership behind the text?
Verifying the content of publications is already challenging and time-consuming. At least in technical publishing, many facts are essentially testable: does this software display and behave as the writer suggests? You can reproduce the steps. For trade or academic non-fiction, it can be considerably harder. What happens when you’re dealing with interpretation rather than demonstration, social science rather than computer science?
At that point, the boundary between subject expert and editor begins to blur. Editors may find themselves operating much closer to their contributors’ level of subject knowledge simply to feel confident in what they are publishing. That is not a new dynamic. Good editors have always needed intellectual sympathy with their authors, but generative AI increases the load.
Incidentally, I would treat automated detection of AI writing as directional at best. Your mileage may vary, but I recently put the same human-written text into the two most popular AI detectors surfaced by a Google search: one said it was 99% written by an LLM, the other 3.75%. They are not just inconsistent; according to some studies, they are also potentially discriminatory.
When I was teaching operations management in a UK business school in the first couple of years after ChatGPT’s release, the only reliable way I found to assess students’ work was the face-to-face tutorial. The gap between the superficial fluency of written assignments and students’ ability to explain their thinking in a classroom setting was sometimes startling. But that approach is time-intensive and doesn’t scale. In fast-moving publishing environments, it’s simply not practical: online workflows are designed around speed and throughput, and editors may have hours, not days, to review material. There isn’t much slack in that system for oral discussion, however valuable it might be.
One thing that does vary across different types of publishing, though, is cadence. In fast-cycle publishing—news media, developer documentation, online technical content—there may be less scope for back and forth. In that environment, even a small increase in epistemic uncertainty compounds quickly.
Book publishing, by contrast, still operates on a slower clock. Trade non-fiction usually has weeks—sometimes months—of review built in; academic publishing layers peer review on top. That doesn’t eliminate the problem Rachel describes, but it does change its texture. Slow publishing creates more surface area for problems to be identified and corrected, and perhaps more scope for conversational or iterative forms of verification that simply wouldn’t be viable in a daily publishing rhythm.
If, as Rachel suggests, trust at the level of the text becomes harder to assume, then one natural response is that trust shifts upward to the level of the person. Editors may lean more heavily on contributors whose thinking and working practices they have seen over time. From a risk management standpoint, that is entirely rational. But it introduces further questions.
Sooner or later, the trusted writers no longer represent the state of the art, or are simply no longer working. Where does the pipeline of future talent come from? How is trust built with people who do not yet have a track record? And at a time when many publishers are trying, rightly, to diversify the voices they platform, is there a risk that increased epistemic caution inadvertently privileges those who already have established relationships? None of this requires bad intent: it emerges from good intentions and natural incentive structures.
If that analysis holds, then one possible response is not to retreat into narrower circles of trust, but to think more deliberately about how trust is built and maintained.
I can’t speak knowledgeably to Rachel’s context, but here are a few ideas for book publishers, offered in that spirit.
First, set expectations. Research last year showed that fewer than 30% of book publishers have a clear AI policy; by contrast, John Wiley has published particularly clear contributor guidelines. At its best, a norm of disclosure changes the dynamic. If contributors are open about how tools were used, the editorial conversation can focus on what matters: is the thinking sound? Are the claims defensible? Is the argument genuinely theirs?
Second, plan for what happens when disclosure fails. If someone is determined to use AI in their writing without being candid about it, they may well get away with it. But if it came to light in your business, would you have a clear procedure for dealing with it? Would you retract, which is perhaps easier when publishing online than with printed books? My working assumption is that sooner or later this is going to happen to almost every multi-contributor publisher: every time I see a story on this, there is an element of thinking “there but for the grace of God go I.”
Third, move some of the evaluation back into conversation. Commissioning discussions, structured outlines defended live, conversations that probe reasoning—these are not new techniques, but they may become more valuable in an AI-saturated environment. What is being tested is not necessarily fluency, but cognitive ownership.
Fourth, invest intentionally in the pipeline. If the gravitational pull is towards “writers we already know and trust,” then publishers who care about renewal and diversity may need to counterbalance that pull with mentoring, development programmes or other structures that allow trust to be built incrementally rather than assumed wholesale.
Of course, none of this is cheap. It is time-consuming editorial labour at a moment when editors and publishers are already under revenue and cost pressures. But it is labour that protects both quality and plurality. The open question is whether AI narrows trust networks—or forces us to build them more deliberately.
I would be genuinely curious to know how others are responding to this.