Context Window 22
This edition covers Shopify CEO Tobias Lütke’s AI-first hiring memo, a behavioural study suggesting reader preferences for human authorship are weaker than stated, an argument that writing for the model as well as the reader may improve content quality, Anthropic’s launch of Claude for Education with Socratic prompting, a salutary LSE blog on “efficient inefficiency”, OpenAI’s persistent-memory upgrade to ChatGPT and the Temporary Chat workaround, Meta’s Llama 4 multimodal models, and a court ruling allowing the New York Times’s copyright case against OpenAI to proceed.
Shopify CEO Tobias Lütke has taken a striking position on AI in an internal memo, which he shared publicly after it began to leak: employees must now justify why a task can’t be done with AI before asking for new headcount. It is a significantly more aggressive framing than anything I’ve seen in publishing, and one that bakes AI adoption into company culture and performance reviews. There are potential benefits here: Lütke notes teams achieving “100x” output with AI. But it also risks over-indexing on speed at the expense of nuance, context, or creative friction. As a thought experiment, though, it’s worth asking: what would a publisher look like if AI were treated as the default?

A new behavioural study challenges the idea that readers genuinely prefer human authorship. Researchers gave participants an AI-generated short story, telling half that it was written by ChatGPT and half that it was by an acclaimed author. As expected, those told it was AI rated it lower on emotional engagement and originality. But both groups spent the same amount of time and money on reading it. The study suggests that while readers say they value human creativity, their behaviour doesn’t always reflect that, raising questions for publishers and authors alike about how much process matters versus product.

On the subject of authorship, I’ve seen a lot of debate this week about this piece arguing that AI-generated content will improve content quality. One of its more provocative thoughts is that AI models behave more like the rational economic consumer of theory than the messy, emotional humans we actually are. Where traditional content strategy has leaned into storytelling, urgency and emotional triggers, models are more likely to respond to clarity, structure, and surface-level credibility signals. That has implications for publishers beyond content quality: it suggests that metadata, labelling, and openness may increasingly shape discoverability in AI-mediated channels. The audience isn’t just human anymore, and some content may soon be written as much for the model as for the reader.

Anthropic has released Claude for Education, a version of its LLM optimised for higher education. Interestingly, given concerns about the impact of AI on critical thinking, the student experience is built around Socratic questioning and guided learning rather than answer generation. Launch partners include Northeastern and the LSE.

Many of the benefits of AI that I see working with clients come down to increased productivity: the typical 5-10% boost I’ve seen with publishers is in line with research from the University of Lausanne, which found nearly three hours of time saved, and from Canva, which found four. But there’s a salutary reminder in this blog post that saving time on the wrong task can lead organisations into “efficient inefficiency”.

OpenAI announced a big upgrade to ChatGPT’s memory yesterday: in future, the model will be able to reference all your past chats to provide more personalised responses. If ChatGPT is your main AI tool, that greater contextual knowledge may be really helpful. But if you want to go off the record, Temporary Chats neither access nor affect memory.

Meta announced its Llama 4 open source models, which stand out for their multimodal capabilities and huge context windows (that is, the number of tokens the model can process at a time). The performance benchmarks look impressive, but given Meta’s recent history with authors and publishers, it’s hard to imagine these models being widely used in the sector.

The latest development in AI litigation saw a federal judge reject an attempt by OpenAI to dismiss the New York Times’s lawsuit, characterising the publisher’s accusation of contributory infringement as “plausible”. The rulings really determine the shape of a future day in court rather than its outcome, but the Times seems to be ahead on points.
This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.