Context Window 53
This edition covers a new Cambridge Minderoo Centre report on UK novelists’ fears of AI replacement, Marc Zao-Sanders on how to read AI research with vested interests in mind, two authors disqualified from a New Zealand book prize after AI was used in their cover designs, Google Gemini’s new SynthID watermark detection, a model rebuttal from the Watershed in Bristol after being wrongly accused of using AI, the new AI-powered Google Scholar Labs, and a broadcasting case study on the leverage AI gives experienced practitioners.
This week has been full of stories that show how fast the conversation around AI, authorship, and creative integrity is moving. From new research on writers’ concerns to real-world disputes over AI-generated artwork, it’s clear that evidence and transparency matter more than ever.

A new research report from the Minderoo Centre for Technology & Democracy at the University of Cambridge received significant media attention this week for its headline claim that half of UK novelists believe AI will replace their work (there’s a lot more beyond the headline, and I really encourage you to read it, especially if you’re a trade fiction publisher). It’s a strong piece of exploratory, qualitative research, though its focus on novelists alone excludes the equally important perspective of non-fiction writers (particularly since many authors work across both fiction and non-fiction). I have expressed some doubts about the methodology, and about whether one can reasonably draw population-level inferences from a convenience sample of 258 authors. But this is really to say that I hope it demonstrates the topic is important enough to merit a larger-scale study across types of authorship.

Coincidentally, there’s a very useful piece by Marc Zao-Sanders in HBR on how to make sense of research on how people use AI, which makes important points about vested interests and inherent bias. For clarity, I’m not suggesting these are problems with the Cambridge research, only that the principles are worth keeping in mind with every data source on the subject (including what I write).
I think I’m going to take this sentence from Marc’s piece as the mission statement for this newsletter: “The path to a sensible, defensible, and useful view of what’s going on lies in the synthesis of many different sources.”

Second only to the Cambridge research in press coverage this week was the news that two eminent authors were disqualified from a leading literary prize in New Zealand for the use of AI in their cover artwork, after organisers instituted new rules. This highlights a number of issues: the authors were unaware of their publisher’s use of AI, which underlines the need for transparency between authors and publishers, and raises the practical question of how to apply such rules fairly and consistently.

On this subject, I came across an interesting practical AI feature this week: Google Gemini can now examine an uploaded photo and check for the SynthID watermarks added by Google’s own image generator, to determine whether the image is likely to be real or generated. It takes one to know one? This could become a helpful tool for publishers trying to verify the provenance of submitted images.

However good automated detection gets, it’s never going to be completely foolproof, and there’s a real problem with false positives. How do you defend yourself if you’re accused in error of using AI? (This is far from a theoretical problem: I know book publishers this has happened to in the last year.) There’s a really interesting case study from the brilliant Watershed in Bristol, which was accused of using AI in its marketing. Their response is really clear, and includes an explanation from their designer. It’s a model of clarity, and it raises an uncomfortable question: how many of us could offer an equally confident and well-documented account of our creative processes?

I suspect Google Scholar is one of the company’s lesser-known offerings, but it’s an essential tool for many researchers and academic publishers.
This week Google released an updated, AI-powered search called Scholar Labs in limited preview. It is particularly helpful in providing a short summary of each returned item’s relevance to the search topic, and could be especially valuable in the exploratory stages of research or a literature review.

I’m always interested in AI case studies from other industries, and this post from broadcasting, about developing a complex content workflow in under thirty minutes, offers a compelling look at how AI can accelerate production. Of course, the prerequisite was having an MCP server that already interacted with key systems: the book or journal publishing equivalent would first require existing integrations with bibliographic databases, content management systems, and other infrastructure. But the underlying principles hold true. I particularly liked this assessment of the developer’s role in the results: “To be honest, I don’t see a world where AI replaces engineers. It’s more about all engineers operating at a fundamentally different velocity, albeit constrained by purpose. The knowledge I’ve accumulated over three decades didn’t become irrelevant—it became leverage. I knew what to ask for. I could evaluate whether what [AI] produced was sensible… The AI handled the tedious translation of intent into implementation, yet the customer still owns the ‘purpose’ that drives the intent.”
This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.