Context Window 33

It’s been a really significant week for legal developments: while the newsletter has more of a copyright focus than usual, the courtroom updates are balanced with some really interesting technical developments from Creative Commons, Anthropic and others (skip down if you’re less interested in the legalities). It points to the fact that, however long the road to a settled legal and licensing position may be, there are immediate practical uses for AI in publishing.

There were separate rulings in Bartz v. Anthropic and Kadrey v. Meta that the use of copyrighted material for AI training is, in certain circumstances, Fair Use. Regarding training, Judge William Alsup described the authors’ position as “no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act”. However, despite that apparent similarity, the rulings differed in important respects, and there is enough nuance for both sides of the issue to claim positives from them. There’s an excellent side-by-side comparison of the judgements here.

While finding that the ultimate use of copyrighted material may be Fair Use, Bartz v. Anthropic also suggested that the initial collection of books for training represented copyright infringement, and there will be a trial to assess damages for this, which could be considerable. If what this establishes is a precedent that training is Fair Use but that the material should be legally obtained, commercial and practical arguments for both publishers and developers point strongly towards a collective licensing model of the sort being pursued by PLS, CLA and ALCS.

Intrinsic to these debates is the question of whether AI training is transformative, and whether the replication of verbatim sections of books is an aberrant behaviour or a common issue. In that context, new research showed a Meta model reproducing over 40% of the text of the first Harry Potter book, but with highly inconsistent results across different models and books. There’s a great explainer here.

Meanwhile, a completely new author lawsuit was filed against Microsoft.

In a parallel development, photo library Getty dropped some aspects of its UK litigation against Stability AI, though litigation continues on other aspects and in the US. The move emphasises the importance for plaintiffs of pursuing precise, winnable arguments rather than broad claims.

Meanwhile, taking a different approach to content and rights, Creative Commons announced a new project, CC Signals, to allow publishers to communicate their preferences on how their data is reused by AI developers. This feels more aspirational than enforceable, but for Open Access publishers in particular, it’s an interesting development.

Changing gear, on a super practical level, the in-built AI functions within Google Workspace are getting more and more useful. There’s a great and very practical set of examples of AI functions in spreadsheets here.

Anthropic released new features allowing users to create AI-powered apps (“interactive artefacts”) using Claude. For tasks that require structured interaction, like workflows, data analysis or working with content, this offers publishers simple creation and greater control of the user experience. I could see this being really useful for creating simple internal apps for publishers, as well as reader-facing experiences.
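The artefacts feature builds these apps inside Claude itself, but to make the idea of a structured content task concrete, here is a minimal sketch against Anthropic’s public Python SDK instead. The task (drafting back-cover copy from a manuscript extract), the prompt wording, the model name and the function name are all my own illustrative assumptions, not anything taken from the artefacts product itself.

```python
# Minimal sketch of a publisher-style content task via Anthropic's Python SDK.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
# The prompt, model choice and function name are illustrative only.
import anthropic

client = anthropic.Anthropic()

def draft_back_cover_copy(extract: str) -> str:
    """Ask Claude for a short piece of marketing copy based on a manuscript extract."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # any current Claude model would work
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"Write around 80 words of back-cover copy for this extract:\n\n{extract}",
        }],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(draft_back_cover_copy("It was a bright cold day in April..."))
```

The appeal of the artefacts approach is that it wraps exactly this kind of call in a shareable interface without anyone writing or hosting the code above.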
This kind of app is particularly interesting to me having read this piece by ProPublica’s Ben Werdmuller arguing that AI features should be developed further down the stack, in models, browsers or operating systems: “Publisher websites and apps are not destinations in themselves and no amount of AI will make them so… My proposal is this: you should consider what’s actually the most useful experience for the user, rather than what furthers your own interests, and make a bet on that, instead.”

Finally, not specifically related to publishing, but of general interest: games have often been one of the ways that we understand progress in AI. Think of Garry Kasparov’s chess matches against IBM’s Deep Blue in the late nineties, or Go champion Lee Sedol losing to AlphaGo in 2016. Chess and Go are interesting use cases because, while they are highly complex, the moves available each turn are clearly bounded by the rules. So I was really interested to read a series of articles on using LLMs to play the classic board game Diplomacy, which involves a significantly more complex set of interactions based on negotiation and deceit (it was famously the favourite game of public figures including JFK and Henry Kissinger). AI media company Every pitted a series of LLMs against one another, finding that they displayed some of the worst of human behaviour. And they also wrote a detailed guide to how they achieved it.

This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.

Written on June 27, 2025