Context Window 36
This edition covers ChatGPT’s new Agent mode and what it could automate for publishers, Matt Webb on governance lessons from Anthropic’s vending-machine experiment, a WeTransfer terms-of-service flap, NotebookLM’s new pre-curated notebooks of licensed and public-domain content, Dave Morris on AI in his writing practice, Anthropic’s Claude for Education partnerships, Condé Nast and Hearst signing Amazon Rufus licensing deals, the launch of Latam-GPT, and research on how AI is shaping the words we use.

A really significant tech development this week: the release of ChatGPT Agent capabilities, which are rolling out to Pro, Plus and Teams users (though not yet in the EEA and Switzerland, presumably for regulatory reasons). Look for the Agent Mode option in the Tools dropdown in your chat window. I just used it to retrieve data from three websites and databases, one of which required login credentials, and to produce a consolidated chart with commentary. As a small business owner, I expect this to be a significant timesaver. As a publisher, I can think of a list of workflow tasks I would want to experiment with automating, starting with sales and marketing data…

On the subject of AI agents: in my last newsletter, I wrote about Anthropic’s experiment with an AI-powered vending machine at their office. Matt Webb has a really smart take on how businesses experiment with, and think through governance of, AI agents. Recommended if you’re considering agentic tools for your publishing workflow.

Not a good week for WeTransfer, which faced a backlash from users after new terms of service suggested user content might be used “for the purposes of operating, developing, commercializing, and improving the Service… including to improve performance of machine learning models that enhance our content moderation process” (my emphasis). A lot of people lost their heads at this: I saw one audiobook narrator, for example, suggesting that WeTransfer would be cloning their voice.
But my reading of the admittedly clunky legalese is that this refers to content moderation, which is a legal obligation on the platform, rather than to content generation. This feels like a combination of bad wording and (understandable) user sensitivity rather than a genuine threat. But it’s a good reminder to check the terms and conditions, especially for free services (though research suggests 91% of users don’t bother).

I’ve written before about Google’s NotebookLM tool, which allows users to curate and interact with data sources for research. This week, Google introduced pre-curated notebooks, including the complete works of Shakespeare, earnings reports for the world’s top 50 companies, and licensed content from The Economist and The Atlantic. The commercial model isn’t public, but from a feature perspective this could be attractive to many other publishers. Florent Daudens reviews the move here and positions it as a shift in content from static to searchable to conversational.

That theme is also explored by games writer Dave Morris in this piece on how he uses AI in his writing. It’s good to see a positive perspective from a writer, balancing more critical viewpoints, and the use cases he identifies are all highly relevant to publishers too: research, managing knowledge and continuity, translation (albeit on a limited scale), and managing OCR errors.

Anthropic previewed several new MCP integrations for Claude for Education, including its previously announced partnership with John Wiley and a new deal with educational content streaming platform Panopto. The content partnership is smart, and the complementarity with enterprise tools like Panopto helps to weave Claude (and the publishers it works with) into the fabric of teaching and learning.

Condé Nast and Hearst are the latest large publishers to reach licensing deals with Amazon for its AI shopping assistant, Rufus.
For smaller publishers, these deals pose an interesting dilemma: should they seek similar partnerships to monetise niche content, or will these opportunities primarily benefit scale players whose brands carry broad consumer trust?

This piece on Latam-GPT, an LLM being developed specifically for Latin America and the Caribbean, raises important questions about training-data disparities and cultural representation. While such region-specific models promise greater linguistic and cultural sensitivity, including support for indigenous languages, they inevitably face trade-offs: smaller training datasets, fewer advanced features, and significant infrastructure hurdles.

There’s a quote often attributed to Marshall McLuhan: “We shape our tools, and thereafter our tools shape us.” Evidence of this comes in new research showing that words used more frequently in LLM outputs are also becoming more common in spoken communication.
This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.