Context Window 15

This edition covers The Guardian’s licensing deal with OpenAI alongside the New York Times rolling out internal AI tools, reflections from a BookNet Canada Q&A on how publisher questions are shifting from theory to practice, the limits of OpenAI and Perplexity’s Deep Research products, the ElevenLabs/Spotify audiobook partnership against Audible’s stance on AI narration, Descript’s new Custom GPT, and concerns about the UK Government using an AI tool to analyse its own AI and Copyright consultation responses.

Two big updates from the news media this week. The Guardian is the latest news business to reach a licensing deal with OpenAI: following the pattern of earlier deals, this includes access to Guardian content within ChatGPT and the deployment of OpenAI software to Guardian teams. Meanwhile, the New York Times is rolling out AI training and tools for staff, including some OpenAI products (notwithstanding the NYT’s ongoing litigation against OpenAI). This speaks to an interesting wider point. Both newspapers have been sharply critical of AI in their coverage, highlighting issues including copyright and sustainability. But from a business perspective, both have concluded that it is to their advantage to find ways of using AI in their operations. You don’t learn much about an emerging technology from the sidelines. It is a pragmatic approach to a challenging issue, and one that publishers of all sizes could learn from.

This has been on my mind this week as I did a question-and-answer webinar for BookNet Canada, following up a presentation last September. It was interesting to note a shift over the last six months: while the questions touched on big issues for publishers like copyright, provenance and sustainability, there was a very practical focus: less about theoretical issues, and more about how to implement AI effectively and responsibly (especially in smaller or resource-constrained organisations).

Lots of people have been trying OpenAI’s new Deep Research (or competitors like Perplexity’s Deep Research model, which doesn’t have a $200 entry price). Benedict Evans highlights some of the issues: “OpenAI is trying to get the model to work out what you probably mean (computers are really bad at this, but LLMs are good at it), and then get the model to do highly specific information retrieval (computers are good at this, but LLMs are bad at it). And it doesn’t quite work.”

ElevenLabs announced a new partnership with Spotify, which will accept audiobooks created using ElevenLabs synthetic voice models. This is an interesting development: it could help to address the large number of print titles which haven’t been available (or economically viable) as human voice narrations, and it’s good to see Spotify pushing for clear labelling of AI content to consumers. On the other hand, Audible remains the dominant player in the audio market and does not currently accept audiobooks narrated by third-party AI tools. It will be fascinating to see how many authors are prepared to bet on Spotify’s growth, and whether there’s enough consumer demand to shift Audible.

The audio/video editing platform Descript is an elder statesman in AI terms—I recall trialling it in 2018, back when I was at Hachette. It released a new Custom GPT this week, which allows a user to plan and develop a video within ChatGPT, and then jump across to Descript to generate imagery and voice, and make any post-production tweaks. It’s quick and easy to use, and while the results weren’t spectacular, I should acknowledge that I didn’t invest much time in learning how to use and optimise it. Besides the tool itself, the novel thing is the Custom GPT integration, which places the primary user experience for ideation within ChatGPT—it will be interesting to see whether other software tools follow this approach.

Back in issue 12 of this newsletter, I speculated about whether the UK government consultation on AI and copyright would be analysed using AI. In a trade press interview this morning, Baroness Kidron says she believes that the IPO has asked to do so, though she holds back from giving an opinion on this. In a personal capacity, I have to say that I feel very nervous about the idea.
It’s not clear which LLMs are used by the government consultation tool, but it is highly likely that the analysis would be performed by a tool created by an organisation that is itself a party to the consultation. Consider a simple thought experiment: if a third party, however well-meaning, offered to provide the secretariat for an official process in which it was a party and from which it might benefit, are there any circumstances in which that would be considered acceptable?

This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.

Written on February 21, 2025