The Author's Bargain
This piece was commissioned by James McConnachie and first published in the Spring 2026 issue of The Author, the journal of the Society of Authors. It is reprinted with their permission.
Few developments in recent memory have generated as much concern among authors as generative artificial intelligence (GenAI). Those concerns range from the use of copyrighted work without consent or compensation to train GenAI models, to systemic issues such as environmental impact, labour displacement and the risk that cultural production becomes flooded with cheap, synthetic material. These concerns are serious, legitimate and widely shared.
Recently published research from Cambridge University’s Minderoo Centre for Technology and Democracy has helped to give those concerns shape and voice: over half of respondents believed GenAI would displace their work entirely in future. As a snapshot it is valuable. But it was based on a survey of 258 novelists, and trade non-fiction and academic writers should also be heard. In my own work, including consulting and writing, I have encountered deep scepticism and outright hostility towards GenAI. But I’ve also seen curiosity, experimentation and pragmatism. Author opinion is not monolithic, and building a worldview on the assumption that it is risks missing both threats and opportunities.
It is an axiom of strategy that getting to the future you want requires starting with a clear-headed assessment of the present. On that basis, it is extremely unlikely that GenAI as a general technology will disappear. Even if prominent AI firms collapsed tomorrow, the underlying technology would remain. Wishing for a bubble to burst is not a strategy.
Similarly, litigation by authors and other rightsholders against AI companies is important, and there are multiple cases pending around the world. However, the outcomes may differ by jurisdiction, by how the training data was collected, and by how models are used.
It is quite plausible that there will be further legal settlements and financial penalties of the sort proposed in Bartz v. Anthropic. But none of the decisions to date rolls back the clock on the technology itself. And since the long-running Authors Guild v. Google litigation presaged many of the same arguments, we should take seriously the likelihood and impact of “fair use” rulings in the US courts.
Suggesting that GenAI is not going away is not a fatalistic acceptance of technology companies’ past conduct or a dismissal of the righteous anger felt by many creative people. It is a basis for thinking about what a better future looks like, concentrating on steps that authors and publishers can actually take.
The first principle is one of agency: while authors cannot always sway decisions by technology companies, courts or governments, they should be able to make choices about the publishing process for their work. That demands clarity from publishers about where GenAI will and will not be used in the publishing process – not just in editorial but in ancillary areas that still shape an author’s reputation and a reader’s experience, such as illustrations, translation and narration of audiobooks.
There have been wholly unacceptable examples this year of authors finding out after the fact that GenAI had been used for marketing materials or cover design – in the latter case, two leading New Zealand authors were disqualified from a major literary award through no fault of their own. Recent research by the Book Industry Study Group in the United States and Canada showed that fewer than a third of publishers had a clear AI policy. If yours does not, demand one. Policies do not need to be perfect on day one, and they will evolve given the pace of change. But they should be developed in consultation with staff and authors.
If authors should be able to make choices, publishers should also be clear about any trade-offs involved. Last summer, over 1,500 authors signed an open letter in the online magazine Literary Hub calling on publishers to resile from the use of GenAI in specific areas, notably design, editing and audiobook production. More broadly, it demanded that publishers should neither change working practices nor replace human work, fully or partially, with GenAI. At the time, I commented that the consequence of following that demand to its logical conclusion would be delisting books from Amazon, because that retailer uses GenAI across so many aspects of its business. For some authors, that would be a choice worth making; for others, it would be an act of commercial self-harm.
Authors must also make choices about the licensing of their books for GenAI training, which is carrying on apace notwithstanding the legal arguments. For many, the past training of GenAI models is so beyond the pale that they would not consider supporting it in future, and there is also understandable concern about the impact on the market for creative work. Where publishing contracts did not anticipate or address GenAI licensing, it’s right that publishers should seek permission from authors and agree appropriate terms with them. But there are benefits to participating in licensing beyond any immediate commercial gain. The AI industry argues to government that it needs changes to copyright law because it is so hard to obtain training data legitimately. The existence of a vibrant, commercial licensing market is the strongest rebuttal to that argument, whether through individual publisher deals or a collective licence of the sort being developed by the Copyright Licensing Agency.
Most controversially, authors can also make choices about whether to engage with GenAI in their own creative processes. There is no doubt that some writers are playing an unsophisticated volume game, using GenAI to generate their books: 2025 even saw several cases of prompts being left in published books. It is hard to imagine a more effective way to erode reader trust than leaving fundamental creative decisions to a probabilistic model.
A more constructive framing sees GenAI as a hand, not a brain. For some writers, it can serve as a practical accessibility aid, helping them structure thoughts, reduce overwhelm and get started when the blank page is the biggest barrier – a point many neurodivergent writers make. Similarly, as a productivity aid, it is a diligent, generally accurate transcriber of meetings and interviews.
GenAI is also being used for research and analysis, particularly in non-fiction and academic writing. With appropriate oversight, Deep Research models and specialist tools such as Elicit and Scite can speed up the research process considerably. In his acceptance speech for the 2025 FT/Schroders Business Book of the Year, author Stephen Witt went further, musing about the potential for GenAI to sit within a book, answering questions from readers. This positive vision is closer than you might think: Google’s NotebookLM tool was developed by Steven Johnson, himself the author of more than a dozen books, as a means of marshalling research sources and allowing a user to pose their own questions.
Taken together, these examples point towards a future that is neither one of technological inevitability nor one of principled disengagement. GenAI is already embedded in the publishing ecosystem, but how it is used, and on whose terms, remains contested. That contest is not yet settled, and authors still have leverage within it.
What matters most is not whether GenAI exists, but whether it is deployed in ways that respect creative labour, maintain reader trust, and preserve meaningful human agency. That requires clearer policies from publishers, honest communication about trade-offs, and a willingness to draw lines where use becomes deceptive or corrosive rather than supportive. It also requires authors to make informed choices: about licensing, about platforms, and about how – or whether – they incorporate GenAI into their own work.
There is a real risk that poorly governed use of GenAI cheapens culture, floods markets, and undermines confidence in books. But there is also a risk in treating it solely as an external threat, rather than as a technology whose impact will be shaped by the norms, standards and contracts we establish now. That may require hard, sometimes uncomfortable negotiation.
There is an irony here. The very flood of synthetic content that many authors fear may ultimately make human-created work more valuable, not less – at least to readers who care about originality, voice and trust. When text becomes cheap and abundant, what readers look for is not volume but signal: a known author, a trusted publisher, a credible recommendation. In such an environment, human authorship becomes a mark of distinction rather than an assumption. This is not a comforting argument for those whose livelihoods are under immediate pressure. But it does suggest that the long-term value of creative work will depend less on out-producing machines and more on clearly differentiating human creativity from automated imitation.