Context Window 58
Happy New Year. The first AI stories of 2026 aren’t about speed or scale, but about authenticity—who is speaking, what can be trusted, and where responsibility sits when things go wrong. That’s uncomfortable territory for publishers, but increasingly unavoidable.
Instagram boss Adam Mosseri posted an end-of-year essay on content trends. I don’t quite buy his opening claim that the big shift for 2026 is that authenticity itself is becoming infinitely reproducible. What is becoming reproducible is the appearance of authenticity—its surface signals rather than the thing itself.
Where his argument becomes more convincing (and more relevant to publishing) is in what follows: in a world of abundant, increasingly bland content, people will seek out a rawer, more honest aesthetic, and will pay more attention to who is speaking rather than just what they produce. That feels directly applicable to current debates about provenance and trust. It also intersects neatly with efforts to hallmark human-created books. Mosseri argues that, over time, it may be more practical for humans to claim and authenticate their own work than for AI content to be reliably watermarked, a framing that feels pragmatic and uncomfortable in equal measure.
The big AI news of the last week has been widespread revulsion at AI model Grok producing nonconsensual, explicit and violent imagery. As I was finishing this newsletter early on Friday morning, Grok announced that image generation and editing would be restricted to paying subscribers. The appearance of this kind of material on X has led many to question whether governments, businesses and others should be using the platform. The same might be asked of publishers.
A different kind of inauthenticity and harm: author Shaun Rein has reported a YouTube channel cloning his voice and taking ideas and material from his book to create new videos. Shaun’s post about it is admirably thoughtful and balanced. What I’m sure of is that there will be many more examples of this in 2026.
My two cents: treat this example as a fire drill for your editorial, marketing and legal teams. If one of your authors contacted you right now about something like this, would you know what to say, what action to take, who would manage the response, and whom you would contact?
For anyone working on controlling or licensing access to AI content, Datalicenses.org is a community-curated list of relevant initiatives that's worth bookmarking. I particularly like the way it can be filtered by common use cases such as blocking scraping, expressing preferences and getting compensated. (Related: I missed this back in December, but the RSL Licensing standard has now been endorsed by more than 1,500 media and publishing companies.)
Researchers at Stanford and Yale have shown that large, verbatim sections of notable books can be extracted from LLMs; for the first Harry Potter book, they recovered 96% of the text. Before this causes alarm, there are some important caveats: the research was done last August and September, and the AI companies concerned were notified and have had time to strengthen their guardrails. For OpenAI and Anthropic models, the researchers also had to use jailbroken versions. It is concerning, though, that neither Gemini nor Grok had to be jailbroken to return verbatim passages.
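If you're wondering what "recovered 96%" means in practice, here is a minimal sketch of the kind of test involved: feed the model a short passage and check whether it continues with the book's exact words. To be clear, this is my illustration, not the researchers' protocol; their prompt lengths, matching rules and sampling will differ, and the "model" below is a memorised stand-in so the example runs without any API calls.

```python
# Minimal sketch (not the paper's actual protocol) of measuring verbatim
# memorisation: prompt with a passage, check for an exact continuation.

def extraction_rate(text: str, generate, prefix_len: int = 50,
                    target_len: int = 50, stride: int = 100) -> float:
    """Fraction of sampled positions where the model reproduces the next
    `target_len` words of `text` verbatim, given the preceding prefix."""
    words = text.split()
    hits = trials = 0
    for start in range(0, len(words) - prefix_len - target_len, stride):
        prefix = " ".join(words[start:start + prefix_len])
        truth = words[start + prefix_len:start + prefix_len + target_len]
        if generate(prefix).split()[:target_len] == truth:
            hits += 1
        trials += 1
    return hits / trials if trials else 0.0

# Stand-in "model" that has perfectly memorised the text, purely so the
# demo runs; a real test would call an actual LLM here.
BOOK = " ".join(f"word{i}" for i in range(2000))

def memorised_model(prefix: str) -> str:
    pos = BOOK.find(prefix)
    return BOOK[pos + len(prefix):] if pos >= 0 else ""

print(f"{extraction_rate(BOOK, memorised_model):.0%}")  # -> 100%
```

The higher that rate, the more of the book the model can be coaxed into reproducing word for word, which is what makes the 96% figure so striking.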
Ed Nawotka has a good piece at PW looking at the integration of AI into digital reading experiences, something I’ve written about at length in previous issues. A key update is Amazon stating that its Ask This Book feature cannot be disabled by rightsholders. It’s an example of how tighter AI integration will force uncomfortable trade-offs on authors and publishers: what if a strong ‘no AI’ clause in a contract now effectively implied no Amazon?
I also question Amazon’s position that no additional rights are required for this because book content is only used as a prompt, which isn’t retained. That may be true, but for it to be useful, what data is that prompt actually querying—and under what rights framework?
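To make that question concrete, here is a purely hypothetical sketch of how such a feature might be plumbed together. None of this is Amazon's actual design, which hasn't been published, and every name in it is invented; it simply illustrates that a non-retained prompt still has to query something, and that something was built from the book.

```python
# Hypothetical "ask this book" flow: even if each user prompt is discarded
# after the call, answering it means querying a pre-built index of the book.

from dataclasses import dataclass

@dataclass
class Passage:
    page: int
    text: str

def build_index(pages: list[str]) -> list[Passage]:
    # The book must be processed into a searchable form ahead of time.
    # Under what rights was this index built, and how long does it persist?
    return [Passage(i + 1, p) for i, p in enumerate(pages)]

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"(answer grounded in: {prompt[:60]}...)"

def ask_this_book(question: str, index: list[Passage]) -> str:
    # Naive keyword overlap standing in for embedding-based retrieval.
    terms = set(question.lower().split())
    best = max(index, key=lambda p: len(terms & set(p.text.lower().split())))
    # The prompt below may well not be retained, as Amazon says; but the
    # passage it wraps came out of the indexed book.
    prompt = f"Using this passage (p.{best.page}): {best.text}\nQ: {question}"
    return call_llm(prompt)

index = build_index(["It was the best of times, it was the worst of times.",
                     "Chapter two opens in the Dover mail coach."])
print(ask_this_book("what happens in chapter two", index))
```

However the real system is built, the rights question attaches to the indexed representation of the book, not to the ephemeral prompt.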
An article on an academic philosophy website, and comments from readers, paint a damning picture of the use of an AI tool in a major journal publisher’s operations. Taken at face value, the reported failures aren’t subtle matters of style or preference, but basic production errors that directly undermine scholarly trust. For publishers, the lesson is straightforward: a time- or cost-saving that’s achieved by pushing those costs onto authors is no saving at all.
New Gallup research conducted in Q3 of last year showed that nearly half of US workers had used AI at work at least a few times a year, nearly a quarter used it weekly, and ten percent used it daily. The gap between how widely AI is used and how unevenly its risks are managed is becoming harder to ignore.
Finally this week, Tim O’Reilly—long-time technology publisher and one of the clearest thinkers on how digital markets actually work—makes a compelling case in his latest essay that AI’s biggest risk is not intelligence run amok, but an economy that forgets how value circulates. The hopeful note is that this isn’t inevitable: if we design AI systems, business models and policies that prioritise participation and shared benefit rather than pure extraction, AI could still become a powerful engine for broadly shared prosperity rather than another era of narrowly concentrated gains.
This was originally published in my email newsletter. To receive weekly updates on how AI is affecting the publishing industry, sign up here.