Context Window 41

AI is changing how people produce, discover and evaluate content. This week’s stories show how fast those shifts are happening. From Amazon’s AI shopping assistant to Grammarly’s reimagining of student writing, publishers face questions about visibility, value and voice.

This week, Amazon announced the rollout of its Rufus AI shopping assistant in the UK. US shoppers have had it for a while, and international availability may vary. This strikes me as a more consequential short-term development for most publishers than the incremental arms race between the big LLMs.

Historically, the key challenges for many publishers and sellers were standing out in search results and winning the buy box. As more customers use this tool, it changes the game: instead of standing out from seven or eight organic results and a handful of ads, you need to ensure your book is the one that Rufus recommends for a given query. The buy box is a moot point if your item isn’t getting recommended in the first place. What drives that recommendation? For starters, consistency and getting the basics of metadata and product pages right. There’s a good overview of Amazon’s current AI tools in this update.

If you need to be convinced that working with Amazon AI is really going to matter for publishers and anyone else selling on the platform, this interview with David Luan, the head of Amazon’s AGI Lab, is a must-read. Besides a belief in the potential of AI agents that matches that of his boss Andy Jassy, Luan makes a really interesting comparison between AI model development and digital photography: for years, the number of megapixels in cameras grew, but it wasn’t the most important aspect of them. What matters is how cameras—or models—are used. The future is specialist models with training data from trusted sources and private experimentation, rather than generalist models trained on the web. (Incidentally, that points to a valuable future for quality, licensed content for training.)

I hate to sound cynical about our friends in Seattle, but, this kind of corporate PR aside, Amazon is not always the most transparent organisation to work with. So this HBR piece is a rare behind-the-scenes look that is genuinely useful: a case study on how Amazon approached quality control when using AI to create product pages. There’s plenty of good insight for AI projects more generally: start from an audited position, define guardrails, run experiments, and use different models for different roles—one to generate hypotheses, another to verify results. (On a more micro scale, that last one is something that I recommend for vibecoding: I’ve often used ChatGPT or Claude to define a problem and write an initial script, and then Gemini to debug it.)

It’s not just about Amazon this week. The other big news that caught my eye was a new Google report on the environmental impact of AI. The headline claims are that AI’s impact is lower than previously estimated and falling: a single Gemini prompt apparently has a similar impact to watching under ten seconds of television, and the carbon footprint of using Gemini has decreased by 44x over the last year. “I don’t think we’re anywhere close to where we need to be. But we’ve gotten it to a place where [AI is] comparable to other sources of information,” said Amin Vahdat of Google. When I talk to publishers, this is one of the biggest objections to using AI, second only to copyright. So at face value this is meaningful to publishers’ understanding and use of AI. But it’s important to stress that many of the economies Google refers to are on a per-prompt basis, and could be offset by higher overall usage (the Jevons paradox, for the economists in the room). And, unlike Mistral’s recent environmental report, these claims haven’t been peer reviewed.
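The Jevons point is easy to see with toy numbers. The 44x efficiency figure is from Google’s report, but the usage figures below are invented purely to show how per-prompt savings can be swamped by volume.

```python
# Illustrative arithmetic only: the 44x per-prompt improvement is Google's
# claim, but the prompt counts are made up to demonstrate the Jevons effect.

per_prompt_old = 44.0                  # arbitrary carbon units per prompt
per_prompt_new = per_prompt_old / 44   # 44x more efficient -> 1.0

prompts_old = 1_000_000
prompts_new = 60_000_000               # hypothetical: usage grows 60x

total_old = per_prompt_old * prompts_old   # 44,000,000 units
total_new = per_prompt_new * prompts_new   # 60,000,000 units

# Total footprint rises despite the per-prompt improvement.
print(total_new > total_old)  # True
```

In short: if usage grows faster than per-unit efficiency improves, aggregate impact still climbs, which is why per-prompt comparisons alone can’t settle the environmental question.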

Subscriber Thad McIlroy has published an update to his AI strategy checklist for publishers, which is well worth looking at (not just because he kindly namechecks my training courses).

Grammarly is launching a new generation of AI-powered writing agents that go far beyond spelling and grammar correction, helping users plan, structure, write, and revise their work through a multi-step, conversational process. For example, a student can ask the agent to brainstorm essay ideas, generate an outline, turn that into a rough draft, and then refine tone and clarity based on feedback from personas—all within the same tool. This matters because it signals a fundamental shift in how writing is being done: generative AI is no longer something to be banned or bolted on, but a central part of creative workflows. For educational publishers, the challenge—and opportunity—is to meet learners where they are by developing materials that reflect this new reality. That means rethinking assessment support, creating resources that focus on writing as a process, and helping both educators and students use AI thoughtfully rather than fearfully. For every kind of publisher, the exam question is how their authors might use this kind of AI…

On this subject, let me ask you a personal question. How well do you write? Is your prose distinguishable from LLM output? AI platform Storywise has launched a competition in which the writer whose work is judged most different from an AI model’s can win a publishing deal.

This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.

Written on August 22, 2025