How AI Affects Publishing Depends on Existing Incentives
Earlier today I attended an online seminar hosted by the Creative Industries Policy and Evidence Centre, where Dr Paul Crosby from Macquarie University presented recent research on authors and AI carried out in Australia. Many of the sentiments expressed in the research were familiar: authors’ concerns about the morality of AI training and the impact of AI outputs on their livelihoods echoed the findings of the Cambridge research on AI and the novel published last November. What I found most interesting about the Australian research was that it was interpreted through economics and econometric analysis. One of the key questions was whether the long-run incentives for cultural production are weakened when AI companies train on human creative work without compensation. The concern was that the more economically marginal forms of publishing, such as literary, niche and experimental writing, could be hardest hit.

Framing the impact on authors and the creative industries as an externality is a helpful way of emphasising to policy makers that the economic benefits of training on copyrighted work accrue overwhelmingly to tech companies in the US, while the costs are potentially distributed across the wider cultural ecosystem globally. This is a textbook setup for the undersupply of a public good (creative works).
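To spell that textbook logic out in the standard way (this is my own illustrative sketch, not part of the Macquarie analysis, and the symbols are assumptions for exposition): let a creator choose output $q$ with private marginal benefit $MPB(q)$ and marginal cost $MC(q)$, and let $E'(q) > 0$ be the marginal value that AI training extracts from that output without compensation. Under the usual assumption of diminishing marginal benefit:

\[ MPB(q^{*}) = MC(q^{*}) \quad \text{(the creator's private choice)} \]
\[ MPB(q^{**}) + E'(q^{**}) = MC(q^{**}) \quad \text{(the social optimum)} \]
\[ E'(q) > 0 \;\Longrightarrow\; q^{*} < q^{**} \]

In other words, creative work is produced below the socially optimal level, which is the undersupply result the seminar was pointing to.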
One of the topics that came up in the Q&A was whether the research addressed differences between publishing segments, for example how the experience of academic authors might differ from that of trade authors. I think this is the right question to ask. In the Australian research, nearly 80% of authors were against their content being used for AI training even if compensated; my professional experience, by contrast, is that when academic publishers have consulted their authors, a majority have opted in to licensing (see, for example, the reporting on Cambridge University Press’s consultation process).
An economic framing of this issue needs to account for how incentives differ across different parts of publishing. In my newsletter last week, I wrote about new research in the journal Organization Science showing that the use of AI in submissions and peer review is increasing. The researchers associated the highest levels of AI use with institutions that are heavily focused on publication output and journal rankings. “Publish-or-perish” is a well-known phenomenon in academia, and here AI is leading to overproduction rather than underproduction. I remarked that something similar is discernible in parts of self-publishing: subscription business models and algorithmic recommendation reward consistency and volume of output. In these sectors, AI may increase long-run production precisely because the existing incentive structures encourage volume, frequency and optimisation. For the purposes of this piece I am not getting into the quality of that production, but there is a further question about whether AI specifically encourages an overproduction of incremental or derivative work and an underproduction of genuinely novel work (what Paul Crosby referred to in his slides as AI gravitating towards the mean).
None of this is a criticism of the Macquarie research, which provided a really useful supply-side view of the industry, complementing the more demand-side reflection that I posted last week. But it does suggest that the impact of AI on publishing may differ significantly between sectors depending on how incentives already function. In some areas AI may weaken cultural production; in others it may intensify it. Understanding those differences feels increasingly important for authors, publishers and policy makers trying to develop coherent responses that address the full range of publishing activity.