Context Window 35
This edition covers Doug Shapiro’s media-trends slide deck and the questions it poses for any creative business, fresh Cambridge University Press polling on public support for AI training payments, an EU antitrust complaint against Google’s AI Overviews and grim data on news click-throughs, OpenAI and partners providing AI training to 400,000 US teachers, hidden prompts being inserted in academic papers to game AI reviewers, Bloomberg on the “tiny teams” era and Anthropic’s Project Vend experiment as a counterpoint, and research showing managers using AI to make decisions about their direct reports.

The always-excellent Storythings newsletter linked to this media-trends presentation from industry veteran Doug Shapiro. The focus is on the movie business, but there are plenty of transferable insights for publishers. It’s worth browsing, but if you want a quick tl;dr, slides 60, 64 and 65 set out some essential questions for creators in all media. Short term: what happens if GenAI enables over 80% of the quality at under 10% of the cost (slide 60)? Longer term: in the same way that the web became more than just flat documents online, and film became more than plays performed in front of a camera, what does AI enable in terms of production and storytelling that isn’t possible today (slides 64-65)?

Cambridge University Press & Assessment released the results of new opinion polling showing that more than two-thirds of the public back making technology companies pay for content used to train their AI models. Only 9% of respondents disagreed.

We’ve become used to litigation against AI companies on copyright grounds, but a new legal fight opened with a group of independent online publishers filing an EU antitrust complaint against Google’s AI Overviews. Incidentally, the Independent Publishing Alliance, which filed the complaint, is not connected to the similarly named Independent Publishers Guild in the books and journals space. Underscoring the significance of the complaint, new research showed that the proportion of news searches that don’t result in a click-through to a publisher site grew from 56% to 69% in a year: in simple terms, consumers are searching for a topic, reading the AI summary in Google and going no further.

OpenAI announced a partnership with Anthropic, Microsoft and two of the largest teaching unions in America that will deliver AI training to 400,000 teachers. Skills development is a huge opportunity for educational publishers, and arguably one that shouldn’t be left to the AI platforms themselves.

Nikkei found that research papers from more than a dozen academic institutions contained hidden instructions telling AI models to review the papers positively, typically obfuscated as white text on white backgrounds or in tiny font sizes. That researchers feel it’s worth taking this step is a recognition that reviewers and publishers, as well as authors, may be using AI in their work. Extending this out into other areas of publishing such as trade, I wonder what proportion of manuscript submissions already contain hidden instructions: if authors believe that publishers or agents will evaluate submissions using AI, the incentives are the same… Has anyone looked at this?
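For anyone who wants to look: here’s a minimal sketch of one way to screen a PDF for this kind of hidden text, using the PyMuPDF library. To be clear, this is my illustration rather than anything from the Nikkei report; the size threshold and the sample filename are invented, and it only catches the two obfuscation tricks mentioned above.

```python
# Sketch: flag text in a PDF that a human reader would never see but an
# AI model ingesting the extracted text would. Requires PyMuPDF
# (pip install pymupdf), imported under its traditional name "fitz".
import fitz

WHITE = 0xFFFFFF   # PyMuPDF encodes span colour as an sRGB integer;
                   # pure white text is invisible on a white page
TINY_PT = 4.0      # illustrative guess: below ~4pt is unreadable to humans

def find_hidden_text(path: str) -> list[str]:
    """Return text spans that are pure white or set in a tiny font."""
    suspicious = []
    with fitz.open(path) as doc:
        for page in doc:
            # get_text("dict") returns blocks -> lines -> spans,
            # each span carrying its text, colour and font size
            for block in page.get_text("dict")["blocks"]:
                for line in block.get("lines", []):  # image blocks have no lines
                    for span in line["spans"]:
                        text = span["text"].strip()
                        if not text:
                            continue
                        if span["color"] == WHITE or span["size"] < TINY_PT:
                            suspicious.append(text)
    return suspicious

if __name__ == "__main__":
    for snippet in find_hidden_text("submission.pdf"):  # hypothetical file
        print("possible hidden instruction:", snippet)
```

A production screening tool would need to do more, such as comparing text colour against the actual background rather than assuming a white page, but the point stands: detection is not hard, which makes it stranger that nobody seems to be checking.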
Three linked takes on AI agents this week. Bloomberg ran an essay on how AI tools and agents are underpinning an era of “tiny teams”, where bragging rights for startups are determined not just by valuation but by how few people delivered a project. However, as a corrective to the idea that AI agents are ready to displace employment, Anthropic ran an experiment using their Claude model to manage a small shop at their San Francisco office. I guess it speaks well of their humility that they published the results as a case study: a couple of kids with a lemonade stall would not have made some of the mistakes the AI did. Vaughn Tan has a great piece exploring the experiment and arguing that even simple businesses depend on inherently human decisions.

A frankly depressing piece of research shows that 60% of managers use AI to make decisions about their direct reports, including determining compensation, layoffs and terminations. Two-thirds of those using AI had not received any formal training, and more than one in five let AI make final decisions without human review. This is idiotic and won’t end well. I look forward to the inevitable lawsuits and managers’ embarrassing prompt histories surfacing in discovery.

Finally, on a lighter note, an example of a ghost in the AI machine. One of my friends asked ChatGPT to recommend speakers for a literary event and got the response screenshotted below. Carmen Callil would indeed have been an excellent speaker but for being dead, though apparently her spirit is influential…