Anthropic's Employment Research
A new Anthropic research paper on AI and employment is getting a lot of attention. At face value, it suggests that the sectors most vulnerable to disruption from AI are white-collar, analytical professions: law, finance, management, media and arts, many academic disciplines. The headline finding and a radar chart of affected areas have been repeated ad nauseam in my LinkedIn feed. But a careful reading of what is a genuinely interesting paper explains why I find it less alarming than it first appears, even though it covers the fields closest to my own work.
In fairness, the paper is a good deal more nuanced than some of the social media reporting around it. Its central measure of “observed exposure” combines three things: data on professional tasks from O*NET, a database sponsored by the US Department of Labor; estimates of whether a large language model could complete those tasks at least twice as fast as a human; and actual usage data from Claude. It is not trying to measure whether AI can do everything associated with a given profession, but whether a meaningful share of the tasks within that profession are both technically feasible with LLMs, and already starting to appear in practice. That’s useful data: in particular, the real-world LLM usage data isn’t visible in other data sets. However, it is already narrower than some of the headlines suggest.
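To make the shape of that measure concrete, here is a toy sketch of how an occupation-level exposure score of this kind could be computed. This is my own illustrative reconstruction, not the paper's actual methodology: the task records, the speedup figures, and the usage flags are all invented for the example.

```python
# Toy "observed exposure" score: the share of an occupation's tasks that are
# both technically feasible for an LLM (at least 2x faster than a human, per
# the paper's threshold) AND actually observed in real usage data.
# All data below is hypothetical; this is a sketch, not the paper's method.

def observed_exposure(tasks):
    """Fraction of tasks that clear the 2x speedup bar and appear in usage."""
    exposed = [
        t for t in tasks
        if t["human_minutes"] / t["llm_minutes"] >= 2.0 and t["seen_in_usage"]
    ]
    return len(exposed) / len(tasks)

# Invented task list for a single hypothetical occupation.
tasks = [
    {"human_minutes": 60, "llm_minutes": 2,  "seen_in_usage": True},   # drafting
    {"human_minutes": 30, "llm_minutes": 25, "seen_in_usage": True},   # negotiation
    {"human_minutes": 45, "llm_minutes": 5,  "seen_in_usage": False},  # summarising
    {"human_minutes": 90, "llm_minutes": 10, "seen_in_usage": True},   # research memo
]

print(observed_exposure(tasks))  # 0.5: half the tasks count as exposed
```

Note what the score does and doesn't capture: the negotiation task fails the speed test, the summarising task is feasible but unobserved, and neither counts as exposed. Nothing in the calculation asks whether the fast LLM output is any good, which is exactly the limitation discussed next.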
The major limitation of this framework is that it doesn’t measure quality. A particular task is considered “exposed” if it can be completed in under half the time. That’s very different from saying that it can be done well: LLMs allow me to generate text on any topic in seconds, but that’s not the same thing as producing good writing. This post took me about an hour of writing and editing time; if I had been prepared to use an LLM output verbatim, I could have saved fifty-eight minutes. But the time spent writing and editing helped me to develop my thinking, and even if it won’t win any awards for prose, I hope the end result is better for it. The distinction between capability and speed on one hand, and excellence on the other, really matters in all of the sectors that the framework regards as exposed.
The second thing to bear in mind is that, by definition, this is a backward-looking framework, based on what tasks people have done historically, as opposed to what they are doing now or how their behaviour changes in future. Any assessment of impact therefore assumes that jobs continue more or less as before rather than evolving. Given Rich Ziade’s comments on procedural debt, that may hold for many organisations, but it leaves the more interesting question unanswered. The more consequential effects of AI may not come from direct substitution of existing tasks so much as from redesigning workflows around the model: fewer handoffs, wider managerial spans, faster turnaround expectations. A backward-looking measure cannot take account of work moving up the stack, or of productivity gains being invested in other areas. Again, to be fair to the researchers, the paper admits it will not capture every channel through which AI could reshape the labour market, but I think that caveat should be taken seriously.
That’s also why I am not especially worried by the fact that the sectors I work in score so highly on this framework. High task exposure does not mean end-to-end replacement; it means more of a certain basket of tasks is becoming available to LLMs. Those are not the same thing. As I have previously argued, in many creative, managerial, and research-heavy fields, the scarce value may move upwards rather than disappear: away from generation, and toward framing, judgement, interpretation, client trust, editorial courage, and deciding what is worth doing at all.
The paper’s own data support this reading. The researchers find no systematic increase in unemployment for workers in the most exposed occupations since the release of ChatGPT in November 2022. The only suggestive signal is a slight slowing of hiring among younger workers in exposed fields, and even that is barely statistically significant. If tasks are being exposed but jobs are not disappearing, the most plausible explanation is that work is reconfiguring, precisely the story that the substitution framing misses.
So yes: this is important research, and worth taking seriously. But the right conclusion is not “AI will replace our jobs.” It is closer to: “AI can now do more of the tasks that make up jobs than many people are comfortable admitting, while still, probably, falling well short of the best human work, and the bigger story may be how organisations reconfigure themselves around that fact.”