Context Window 34
Happy Friday—though I’m writing this the day before, on the way back from a great day at the Publishers Licensing Services Conference in London. The agenda and other speakers were superb, offering plenty of food for thought. It was also great to meet Helen King, whose PubTech Radar newsletter I’ve really enjoyed recently (do sign up for it!). Connecting with Helen via Bluesky commentary on the conference felt nostalgically like the Twitter backchannel at publishing events in the early 2010s. Thanks to PLS and the IPG for the invitation to speak.

My presentation at the conference was on AI threats and opportunities for publishers, which I structured as fifteen observations from the last two and a half years (slides here if you’re interested). Given the scale of the issues, many presentations took a big-picture perspective, so as a change of focus I particularly enjoyed a more tactical session on principles for successful AI licensing with subscriber Clare Hodder of Rightszone and Adele Parker of Taylor & Francis. Their best practices included being clear on the purpose for which content is being licensed: foundational training, fine-tuning and model evaluation, or reference/retrieval-augmented generation. Good advice for all of us.

One of the points I referenced in my presentation was an open letter to the publishers of America, which appeared on LitHub this week. It’s worth reading in full as a barometer of author sentiment, though, as ever, sentiment isn’t monolithic. The letter itself was a mix of unarguable points (such as transparency), a primary focus on Big New York publishing (culturally it felt a little like the View of the World from 9th Avenue), and a set of demands that would be hard, perhaps impossible, for any publisher to acquiesce to. I came across it through Richard Charkin, who did a good job highlighting its limitations here. I waded in here.
In my response to Richard, I mentioned this recent piece by Steven Johnson on how he uses NotebookLM in his research as an example of how varied author opinion is. For transparency: Johnson collaborated with Google on developing NotebookLM, so he’s partly talking his own book. But it’s an interesting and hopeful vision that he sets out.

Significant news this week: Cloudflare, which hosts about 20% of the web, announced that it would block AI crawlers by default unless publishers are compensated (it had previously offered blocking as an opt-in). Cloudflare will monetise through a pay-per-crawl system, creating a new revenue stream for publishers. Lots of news publishers have signed up; fewer from books, but O’Reilly Media, often a bellwether for new tech and platform developments, is one of them. If you’re not sure about the cost/benefit analysis of this, consider this comparison: the data show that Google crawls websites about fourteen times for each referral it sends; the equivalent ratio for OpenAI is 1,700:1, and for Anthropic 73,000:1. That’s a lot of bandwidth cost for very little traffic.

I’ve written about vibecoding before: creating code and applications using natural-language prompts to LLMs. I’ve been pleasantly surprised by what I’ve been able to do, and it’s been a hit with many publishing clients who’ve developed niche applications. As an example of the genre, this video is pretty awesome: developing a complete custom app, including integration with third-party APIs, using nothing more than Slack messages as instructions. It’s not for complete beginners, but anyone branching out from basic scripts and macros will find plenty of inspiration.

There are further questions about editorial processes at a major scientific publisher after multiple hallucinated citations were found in a computer science textbook. If not quite as egregious as prompt artefacts in published text, it’s still a really bad look.
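If you want to see what those crawl-to-referral ratios mean at scale, here’s a trivial sketch. The ratios are the ones quoted above; everything else (the function name, the 1,000-referral baseline) is purely illustrative.

```python
# Crawl-to-referral ratios as quoted in the newsletter (crawls per referral).
RATIOS = {"Google": 14, "OpenAI": 1_700, "Anthropic": 73_000}

def crawls_for_referrals(referrals: int, ratio: int) -> int:
    """Crawl requests a site absorbs to earn a given number of referrals."""
    return referrals * ratio

for bot, ratio in RATIOS.items():
    crawls = crawls_for_referrals(1_000, ratio)
    print(f"{bot}: {crawls:,} crawls to earn 1,000 referrals")
```

In other words, at the quoted ratios a site would serve 14,000 Google crawls to earn 1,000 referrals, but 73 million Anthropic crawls for the same traffic—which is the asymmetry that makes a pay-per-crawl model attractive to publishers.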
I’ve featured research on the use of AI in education recently; to give a qualitative perspective alongside the statistics, I recommend this first-person piece in the Guardian. My eldest daughter is in the same disrupted cohort as the author, and I recognise many of the issues raised from recent teaching experience. It’s deeply relevant to educational publishers, lecturers, parents and citizens.

Tim Harford’s column in the FT this week looks at employment impacts from AI, based on new MIT research. Most jobs are collections of tasks, some of which can be performed by AI. The big question is which tasks it takes. If it takes away the core task, the worker is left with lower status and compensation; if, on the other hand, it takes away routine tasks, the worker is left with more time for higher-value activities.
This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.