Context Window 54

This edition marks three years since the launch of ChatGPT and covers OpenAI’s new shopping research feature, the 2025 Edelman Trust Barometer on AI in the workplace, Google Labs’ Learn Your Way pedagogical experiment, an EdTechnical forecasting competition on AI in education, a checklist of risky AI habits, a Descript guide to slop-free content repurposing, fresh research on AI-generated advertising performance, the AI podcast factory Inception Point, The Atlantic’s data-led AI crawler strategy, and a podcast appearance with Alison Jones.

Happy Black Friday. This Sunday marks three years since ChatGPT first appeared: a toddler in human years, already learning to run, leaving a mess in its wake, and showing signs of what it might grow into. It has even added a new word to our vocabularies—slop—and several of this week's links explore whether AI-generated content has any value or is just that.

ChatGPT introduced a new shopping research feature this week, which matches products to a user's query. I tried using it to find some gifts, and it did a pretty good job of matching recently published books to recipients, though it pulled pricing and availability from a range of bookshops and publishers. It's also unclear how frequently this data is refreshed or how well it handles backlist titles. These are questions anyone who cares about book discoverability should be thinking about too.

The new 2025 Edelman Trust Barometer on Trust and AI has some really interesting conclusions for anyone implementing AI in the workplace: about 60% of employees would accept AI aimed at productivity rather than cost-saving, and the provision of high-quality training increased employees' willingness to use AI (unsurprisingly, I endorse that message).

Google Labs launched a new pedagogical experiment called Learn Your Way, which uses AI to transform linear textbooks into interactive learning materials. They claim an 11% improvement in retention scores for the AI texts over traditional ebooks. There's a waitlist signup for users to upload their own PDFs: for publishers, that might be an experiment to consider, but it will also be interesting to see what copyright guardrails are built in to control the use of third-party content. It also raises rights questions: at what point does an AI-modified textbook become a derivative work, and who owns those adaptive outputs?
On the subject of education, EdTechnical is running a forecasting competition on the impact of AI on education to the end of 2028—the same span of time forward as from ChatGPT's release to now. I'm sure plenty of subscribers have a view on this, and there are prizes for the best contributions.

Rahim Hirji's newsletter has a fantastic list of nearly sixty AI habits that would get you sued, fired or embarrassed. Based on your score, you can determine whether you're a cautious sceptic, a normal human or a walking liability on Rahim's scale. Part of me thinks that with such a fast-moving technology, if you're not experimenting—and occasionally making mistakes—you're not learning. But it's certainly safer to learn from other people's errors.

The AI platform Descript has published a useful guide to slop-free content creation, in particular repurposing an existing asset into different media formats. This is something content and marketing teams in publishers do all the time, and the guide offers some clear, practical advice.

New research suggests that AI-produced adverts can't be dismissed as slop: they performed considerably better than traditional commercial messages. I have questions about the methodology, particularly the sample size, and the study measures perceived effectiveness rather than real-world conversion. Still, it's a sign that AI-generated creative may not be as disposable as many assume.

On the subject of AI slop, The Wrap returns to a subject I've discussed before: the AI podcast studio Inception Point, now generating 3,000 podcast episodes a week with a team of eight people. It's easy to dismiss this as slop, and as I've previously argued, it hurts the signal-to-noise ratio for traditional podcast publishers. But it's working: Inception Point now has over 400,000 subscribers. The piece goes into new detail about how the company operates.
What's striking is how tightly their production model is tied to algorithmic opportunity—filling keyword gaps with astonishing speed. It's not hard to imagine the same volume × velocity playbook applied to ebooks.

A more traditional publisher, The Atlantic, has struggled with AI platforms crawling its site for data: one company tried to crawl it over half a million times in a single week. This piece looks at its strategy for managing access to its content. What's particularly useful is the data-led approach The Atlantic took, using its server logs to determine which bots brought referral traffic and which should be blocked (fewer than a third brought any value). On that point, I've spoken to two publishers in the last six months who were proposing to make decisions about their websites without even reviewing their logs. Be more Atlantic.

Finally, I don't think I'll ever be comfortable listening to my own voice, but I did a very traditional podcast interview with the brilliant Alison Jones this week, talking about our respective careers in books, digital change, and the impact of AI on publishing.
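For readers who want a concrete feel for the log-based triage The Atlantic used, here is a minimal sketch of the idea: tally each crawler's requests against the referral visits its platform sends back, and flag bots whose crawl volume far outweighs any referral value. To be clear, the bot names, numbers and threshold below are all invented for illustration—this is not The Atlantic's actual data or method.

```python
# Hypothetical weekly figures pulled from server logs: requests made by
# each crawler, and referral visits arriving from the matching platform.
crawl_counts = {
    "HeavyBot": 520_000,    # echoes the half-million-crawls-in-a-week scale
    "FriendlyBot": 12_000,
    "SearchBot": 40_000,
}
referral_counts = {
    "HeavyBot": 300,
    "FriendlyBot": 9_500,
    "SearchBot": 35_000,
}

def triage(crawls, referrals, min_ratio=0.05):
    """Flag a bot for blocking when referral visits per crawl request
    fall below min_ratio (an arbitrary illustrative threshold)."""
    decisions = {}
    for bot, n_crawls in crawls.items():
        ratio = referrals.get(bot, 0) / max(n_crawls, 1)
        decisions[bot] = "block" if ratio < min_ratio else "allow"
    return decisions

print(triage(crawl_counts, referral_counts))
```

The point of the exercise is the one in the piece: the decision comes from your own logs, not from guesswork about which platforms feel valuable.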

This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.

Written on November 28, 2025