Context Window 21

This edition covers Amazon’s new Nova Act browser-agent model, Google’s Gemini 2.5 reasoning model, Rachel Coldicutt’s responsible-AI dos and don’ts, fresh CLA research showing 82% of UK professionals upload third-party content into AI prompts, FT analysis on the limited employment impact of AI so far, Tyler Cowen on AI in his writing workflow, a Nature piece on AI and academic peer review, and new data points on AI scraping bots overwhelming publisher infrastructure.

Amazon released Nova Act, a new agentic AI model designed for browser-based actions. This is the agent model at work within Alexa+, the company’s upgraded voice assistant, available to 200 million Prime subscribers. I’ve been in workshops most of this week so I haven’t had a chance to experiment yet, but it looks really interesting. For publishers, I can think of a number of experimental use cases related to Vendor Central, Seller Central and Author Central that could be automated if Amazon walks the walk and supports this in its own systems.

Google released Gemini 2.5, its most advanced reasoning model. Notwithstanding its integration with Google Workspace and tools, Gemini has always been a bit of an also-ran for me compared to ChatGPT or Claude, but this latest release scores impressively on performance benchmarks and is particularly worth investigating for publishers using Google apps for mail and productivity.

This opinion piece from Rachel Coldicutt has a good set of recommendations around ethical use of AI. A valuable point I hadn’t sufficiently considered is the value of LLMs as a reasonable adjustment in the workplace for those who struggle with writing. It’s a good reminder that AI can be a leveller, not just a disruptor, especially for authors with neurodiverse profiles or language processing challenges.

One of Rachel’s other prescriptions is not to upload other people’s personal data into LLMs, which is sound advice. I’d suggest one addition to her generally comprehensive advice: as well as personal data, think about what content you’re uploading with prompts. New research from the Copyright Licensing Agency shows 82% of professional AI users uploading third-party content alongside their prompts. Of course, what’s unclear from the data is whether they have the right (explicit, or in the case of public domain content, implicit) to do so, or whether the LLM is private or will train on the data.
But if your organisation doesn’t yet have internal guidance on what can safely be included in a prompt, this research suggests now’s the time.

The FT has interesting data and analysis on the effect of AI on employment, finding little evidence of large-scale disruption. For messier, more varied work (not a bad description of many publishers), the variety of tasks makes it hard for AI to replace humans. Interestingly, the two categories identified as being at risk are coders and writers, where the core of the work aligns well with AI capabilities.

For a counterpoint on the impact of AI on writers, this extended interview with economist Tyler Cowen on how he uses AI in support of his writing is an investment of time, but highly practical. His workflow hints at the kind of productivity gains editorial professionals may see, not from replacing expertise but from augmenting it. And it’s a useful reminder that authors’ views and experiences of AI are not monolithic.

Nature has a thought-provoking piece on how AI is changing the peer review process in academic publishing. The key question it poses is the balance between AI as an assistive technology for writing a review and reliance on it to generate one: given known biases and inaccuracies in LLMs, and structural incentives to publish more, there are worrying quality implications. (I’m not even getting into the deeper issue of whether AI can rightly be considered a “peer”, but it’s worth pondering.)

Two more data points on the issue of traffic from AI scraping bots, which I wrote about last week: Ian Mulvany of the BMJ shared that it had experienced over 100 million bot requests from data centres in the Far East in just three weeks, and the Wikimedia Foundation reported a 50% increase in bandwidth for multimedia hosting since January 2024.
I mentioned this at the IPG Lunch and Learn session for independent publishers on Wednesday, and since then I’ve spoken to several publishers seeing a similar trend, particularly those with more content-rich websites. If you haven’t yet raised bot traffic with whoever is responsible for your website, the best time would have been yesterday, though today would suffice.

This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.

Written on April 4, 2025