Context Window 39

As AI capabilities expand, so does the gap between what’s technically possible and what’s useful and meaningful. Many of the links this week address the same question: where does human perspective add value?

The big news this week was OpenAI’s release of GPT-5. This was accompanied by the usual hyperbole about PhD-level performance and reduced hallucinations. Simon Willison has had preview access for several weeks, and unpacks the technical details here.

But my first experience of the new model was disappointing. Historically, ChatGPT has offered a range of models with different capabilities—for example, 4o for generation, o3 for reasoning, o4-mini-high for coding. The metaphor that I’ve used with clients is that it’s like a Swiss Army knife: you need to choose the right tool for the task. GPT-5 attempts to simplify this for users by deciding behind the scenes which model to use. But this isn’t communicated clearly to the user, and the results can be inconsistent.

The first prompt I gave it asked it to review a blog post I’d written: GPT-5 chose what was likely a fast, generative model that didn’t actually read the content, and instead gave me a detailed hallucination of how non-existent arguments could be improved. When I pointed this out and rewrote the prompt, it took a different approach: using a reasoning model, it spent three minutes reviewing the text and gave me substantial, specific notes that I was much happier with. If GPT-5 isn’t going to offer an explicit model choice in the way that its predecessors did, we’ll all need to learn how to prompt it toward the right model and output.

The FT ran a well-balanced piece with contributions from Naomi Alderman, Sarah Hall and Curtis Sittenfeld on whether AI can replace human authors, which makes a good case for human creativity. (Alderman also posted some good follow-up thoughts on Bluesky on how AI blunts weird, creative impulses.) The article was also prescient in this suggestion: rather than professionally published books being generated by AI, a more likely near-term prospect could be user-generated books that are bespoke to the individual reader. “There might be a system,” suggests Hall, “where you say ‘I want a book about this, this and this’ and you get it.”

Well, you can get it now. This week, Google released a new feature called Storybook for its Gemini model, which creates short illustrated children’s stories from user prompts, along with audio read-alongs. I tried it so you don’t have to. It’s technically quite impressive, though visually it isn’t fully consistent across its ten linked illustrations. But at the current, banal level of quality, I can’t imagine this representing any great threat to professional writers and illustrators.

Rebutting recent claims of a collapse in search traffic, a new Google blog post argues AI-powered search is driving more queries and clicks. But look closer, and a shift in user behaviour emerges. Google notes rising engagement with forums, videos, podcasts, and posts that feature authentic voices and first-hand perspectives. For publishers, this is a cue to rebalance content strategy. Authoritative reference material still matters, but content that feels human—expert commentary, behind-the-scenes insight, strong editorial voice—is gaining traction in an AI-flattened web. If AI answers the “what,” readers still turn to publishers for the “so what?” and “who says?” That’s where competitive edge now lies.

On the subject of web traffic, hosting provider Cloudflare published new research alleging that AI platform Perplexity is systematically trying to evade website blocks by using stealth crawlers.

How the open web evolves in response to these tensions is fundamental. Subscriber and friend Matt Webb shared the details of a conference taking place in New York later this month which is addressing some of the most compelling questions for publishers: how can publishers engage in this platform shift beyond being a dataset to be scraped? How can we avoid being trapped by a new wave of mega-platforms? And how can publishers on the web reach real users, build lasting relationships, and continue to have an equitable business model? (If any publishers want to sponsor me to attend this and write them a report/strategy on the findings, get in touch.)

Graham Lovelace’s Charting Gen AI newsletter reports on a new recommendation by Australia’s Productivity Commission to introduce a text and data mining exception in Australian copyright law. This feels like a rerun of the UK debate earlier this year—same arguments, same fault lines.

Plenty of companies in the wider economy are using AI as an excuse to cut junior roles. But agency CEO Antony Mayfield flips the logic: he’s hiring more early-career staff. Why? Because juniors paired with AI can outperform expectations—and often outperform the processes they replace. While some senior leaders boast about reduced headcount, Mayfield’s take is a reminder that avoiding entry-level recruitment isn’t just unkind—it’s strategically stupid. For publishers navigating AI change, it’s worth asking: are you building long-term capability, or just trimming cost?

This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.

Written on August 8, 2025