Context Window 26
This week’s newsletter starts with a question that’s been quietly nagging at many of us working with generative AI: are we saving time, or just skipping the thinking? A brilliant post introduces the idea of “cognitive debt”, the mental version of technical debt, where shortcuts today can cost us clarity tomorrow. It’s a useful lens for publishers figuring out how to scale AI responsibly, especially as new tools promise more speed, but not always more understanding.

Creative strategist John Willshire published a great piece this week building on the concept of Technical Debt in IT development: with generative AI, the risk is Cognitive Debt, “where you forgo the thinking in order just to get the answers, but have no real idea of why the answers are what they are.” It’s an important point, highlighting the difference between text generation with LLMs and writing as the embodiment of a thought process.

As a lot of my strategy and writing work is solo, I really appreciate AI as a resource and sparring partner, but I’ve definitely experienced revisiting old LLM outputs and being uncertain about the underlying logic. To mitigate this, here are a few approaches I’ve found helpful:

- Retain older prompts and outputs in an archive so they can be revisited. ChatGPT’s Projects feature is a great way to structure that. (Does your organisation have a prompt retention policy or guidelines?)
- Include a Chain-of-Thought request in prompts to see the process of arriving at an output, e.g. “Explain your reasoning step by step and highlight any assumptions.”
- Similarly, if you’re outputting code such as Python or VBA, ask the model to include inline comments section by section.
- For non-trivial use cases, keep metacognitive notes in the same way a scientist keeps a lab notebook: what I asked, why, my assumptions, how I evaluated the answer, and whether and why I made any decisions based on the output.
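If you wanted to keep that kind of lab notebook digitally rather than on paper, here’s a minimal sketch of the idea in Python. The `log_interaction` function and its field names are my own illustration of the approach, not something from Willshire’s post or any particular tool; it simply appends each prompt, output, and set of metacognitive notes as a line of JSON to a running log file:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_interaction(path, prompt, output, why="", assumptions="",
                    evaluation="", decision=""):
    """Append one lab-notebook entry for an LLM interaction as a JSON line."""
    entry = {
        # When the interaction happened (UTC, ISO 8601)
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,          # what I asked
        "output": output,          # what the model returned
        "why": why,                # why I asked it
        "assumptions": assumptions,  # what I was taking for granted
        "evaluation": evaluation,  # how I judged the answer
        "decision": decision,      # what I did based on it, and why
    }
    # Append-only, so the archive preserves the full history
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry
```

A plain-text, append-only file like this stays readable years later without any special software, which is the whole point of paying down cognitive debt.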
(I do this by hand, using a Remarkable tablet that also stores my client notes, and I find that just transferring something into a handwritten medium seems to help my recall of it.) I’d be really interested to know if you have strategies for working with LLM outputs, and I’ll share any ideas in future.

Amazon has released a new seller tool, Enhance My Listing, to generate A+ content and optimise product pages on the store. Amazon claims a 40% increase in listing quality from the use of AI tools, and that 90% of AI suggestions are accepted by sellers. It’s being rolled out to US sellers initially.

Kevin Anderson has been posting from WAN-IFRA’s World News Media Congress 2025: this post includes some of the highlights from WAN-IFRA’s forthcoming AI survey, including a gap between the perceived importance of AI and its usefulness. Still, the results show nearly half of publishers seeing increased productivity, and only 8% not using AI at all. The same conference saw the launch of a new initiative to protect news integrity: its key principles of consent, value, attribution, plurality, and partnership between publishers and tech companies are extremely relevant to other parts of the publishing industry.

Speaking of partnerships, John Wiley continues to power ahead, with a new deal with AI platform Perplexity to integrate AI search and purchased Wiley content for institutional users. There’s a lot to unpack here: it positions publishers not just as providers of training data but as valued content partners. It also reinforces the importance of attribution and aligns with how students are increasingly accessing content via AI-assisted search tools.

The Wikimedia Foundation published its AI strategy, which centres on creating time for human editors to concentrate on “deliberation, judgement and consensus-building”. Unsurprisingly, it also includes a strong commitment to open-source models. There’s a lot here that could serve as a model for any publisher.
I know many of you are interested in the copyright ramifications of AI, judging by the number of clicks on related links. If you’re in that group, the Authors Alliance published an interesting take on Studio Ghibli’s visual style being replicated by AI, explaining why copyright should not protect style: both for the sake of creators and so that the copyright system continues to promote creativity for the public good.

Thanks to Gavin Marcus from Storywise for sharing the facepalm moment of the week: an Australian radio station used an AI-voiced presenter for six months. That they were able to do so in the first place speaks to improvements in the technology. But the instant consumer backlash when the truth came out is a reminder: transparency matters, and trust is hard to earn and easy to lose.
This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.