Context Window 60

This week’s headline stories sit at the uncomfortable boundary between assistance and dependence. From lost intellectual scaffolding to new AI “coworkers”, the question isn’t just what AI tools can do—but how we should use them without dulling the human thinking they’re meant to support.

There was an extraordinary cautionary tale in Nature yesterday: an academic changed ChatGPT’s privacy settings and immediately lost every one of his historic prompts and outputs, which he describes as “intellectual scaffolding that had been built up over a two-year period.”

On a human level, I feel sorry for him, though it’s a good example of how smart people can do dumb things. Not so much changing the setting: the consequences of that may not have been adequately signposted. But relying on an LLM to such a critical extent reminds me of John Willshire’s concept of Cognitive Debt. I say this regularly in my training courses, but AI should be a hand, not a brain…

There’s a related piece from MIT Press that I found more encouraging. It draws on research suggesting that students (or academics) who use AI to structure and outline their work show less executive brain activity—and, more worryingly, that this “offloading” can persist even when they later write without AI. That’s the scary headline. But rather than treating this as proof that AI makes us stupid, the piece suggests we could use it more deliberately for thinking in dialogue. Used well, LLMs aren’t just content generators; they can act like a patient, endlessly available Socratic tutor, pushing students to clarify what they mean, test assumptions, and follow an argument where it leads. LLMs still need boundaries and scepticism, but the upside is real: AI could help scale the kind of personalised intellectual back-and-forth that education rarely has time to provide.

More practically, Anthropic released a new feature for Claude called Cowork, which aims to do for knowledge work what Claude Code has done for software development. Cowork can read, edit and create files on your computer. From a week of experimentation, it’s been genuinely useful—and given that a Claude Pro subscription costs an employer little more per month than a single hour of minimum-wage employment, the ROI will be almost instant.

Tools like Cowork and Code can be extremely useful, but their environmental footprint is considerably higher than using an LLM for ad hoc prompts. As usual, there’s little data from the AI platforms themselves, but data scientist Simon Couch crunches the numbers in this useful blog post: a day of using Claude Code consumes more than 1,300 times as much electricity as a single ChatGPT prompt—though to put that into perspective, it’s still less than running a refrigerator for a day.
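To sanity-check that comparison, here’s a back-of-envelope sketch. The per-prompt and refrigerator figures below are my own assumptions (widely cited ballpark numbers), not figures taken from Couch’s post; only the 1,300x multiplier comes from the article.

```python
# Back-of-envelope energy comparison. Assumed inputs:
#   ~0.34 Wh per ChatGPT prompt (a commonly cited estimate, assumed here)
#   ~1,500 Wh/day for a typical household refrigerator (assumed)
# The 1,300x multiplier is the figure reported in the blog post.

WH_PER_PROMPT = 0.34           # assumed energy per ChatGPT prompt, in Wh
CLAUDE_CODE_MULTIPLIER = 1300  # from the post: >1,300x a single prompt
FRIDGE_WH_PER_DAY = 1500       # assumed fridge consumption, in Wh/day

claude_code_day_wh = WH_PER_PROMPT * CLAUDE_CODE_MULTIPLIER

print(f"A day of Claude Code: ~{claude_code_day_wh:.0f} Wh")
print(f"As a share of a fridge-day: {claude_code_day_wh / FRIDGE_WH_PER_DAY:.0%}")
```

Under these assumptions a Claude Code day lands at a few hundred watt-hours—substantial next to a single prompt, but still comfortably below a refrigerator’s daily draw, consistent with the post’s framing.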

This is becoming an Anthropic-heavy week, but the company also released a formal constitution document for Claude, setting out its vision, values and ethics. It’s a genuinely fascinating document, which received input from outside experts in technology, ethics and even theology. If I were dictator of the world, something like this would be mandatory for every AI company, alongside an independent, peer-reviewed environmental impact statement…

Meanwhile, OpenAI set out its principles for integrating advertising into ChatGPT. Read alongside Anthropic’s constitution, the announcement is a useful reminder that different AI labs are solving for very different futures.

Publishers Hachette and Cengage have filed a legal complaint against Google. Reading the complaint, it’s interesting to see lessons learned from Kadrey v. Meta and Bartz v. Anthropic: there’s a clearly articulated theory of harm, and Hachette’s focus on children’s books and on the creation of competing products is particularly sensitive for Google, given it offers a specialist tool that creates storybooks with AI.

Jonathan Woahn, co-founder of data monetisation platform Cashmere, has a guest post at Scholarly Kitchen setting out his view of the market for AI content licensing and why the value isn’t necessarily where publishers think it is. The market clearly believes in this thesis: also this week, Cashmere announced a $5 million seed raise, with strategic investors including Pearson, Ingram and Naver.

Cloudflare CEO Matthew Prince made a similar argument in Davos, saying that his platform had willing buyers for content but that, absent publisher participation, Google would win the AI market by default.

On the question of regulating access to content, new research shows that 80% of news publishers are now blocking AI bots from their websites, with a smaller proportion differentiating between training and information-retrieval use cases. By contrast, when I checked this morning, not one of the Big 5 trade publishers in the UK was addressing AI bots through robots.txt on its website. That could be a strategic choice—those sites are largely shop windows rather than publishing platforms—or it could be asleep at the wheel.
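For anyone wondering what that differentiation looks like in practice, here is a minimal robots.txt sketch. The user-agent names are the ones the platforms have published (GPTBot and ClaudeBot for training crawls, OAI-SearchBot for OpenAI’s search retrieval, Google-Extended as Google’s AI-training control), but crawler names and semantics change, so check each platform’s current documentation before relying on this.

```
# Disallow crawlers used to gather training data
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Google-Extended controls AI training use without
# affecting ordinary Google Search indexing
User-agent: Google-Extended
Disallow: /

# Permit retrieval crawlers that cite and link back
User-agent: OAI-SearchBot
Allow: /
```

Note that robots.txt is a request, not an enforcement mechanism—compliance depends on the crawler honouring it.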

Finally, how does one benchmark how effective LLMs are at common tasks? For researchers at the University of California San Diego, the answer is to get them to play a lot of D&D. The twelve-year-old me is here for it (let’s be honest, the forty-nine-year-old version too).

This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.

Written on January 23, 2026