It's Risk, Not Capability, That Determines Which Jobs AI Actually Replaces
A new study of 923 occupations finds that organizational risk tolerance -- not technical feasibility -- is the binding constraint on AI job displacement. Data scientists face 70% automation exposure while physical trades show near-zero.

A research team has quantified something the AI employment debate usually ignores: the gap between what AI can do and what organizations are willing to let it do.
The paper, posted April 6 on arXiv, decomposes 923 occupations into 2,087 atomic work activities and scores each on two independent axes: technical feasibility (can AI do this task?) and business risk (will an organization actually deploy AI for this task?). The second axis -- risk -- turns out to be the binding constraint.
The Key Finding
Most existing studies measure AI "exposure" -- how many tasks in a job could be automated. This study measures something different: the Occupational Automation Index (OAI), which accounts for both capability and the institutional willingness to deploy.
The difference matters. A task might be technically automatable but carry regulatory, liability, or reputational risk that prevents deployment. The researchers call this the "institutional premium" -- the gap between what's possible and what's permitted.
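The idea can be made concrete with a toy calculation. The paper does not publish its scoring formula, so the discounting form below (risk linearly reducing deployable capability) is purely an illustrative assumption; the function name and numbers are hypothetical.

```python
def institutional_premium(feasibility: float, risk: float) -> float:
    """Gap between what AI could do and what an organization will deploy.

    Assumed form: deployment risk linearly discounts technical feasibility.
    This is an illustration of the concept, not the paper's actual model.
    """
    deployable = feasibility * (1.0 - risk)  # capability that survives the risk screen
    return feasibility - deployable


# A highly feasible but high-risk task leaves most capability on the table:
print(round(institutional_premium(0.9, 0.8), 2))  # 0.72
```

Under this toy model, a task AI can do 90% well but that carries 80% deployment risk contributes almost nothing to realized automation, which is the "permitted vs. possible" gap the researchers describe.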
Who's Most Exposed
| Occupation | OAI Score | Key Factor |
|---|---|---|
| Data scientists | ~0.70 | High technical feasibility, low deployment risk |
| Routine cognitive work | High | Tasks are structured, errors are recoverable |
| Physical trades | ~0.00 | Manual dexterity + safety liability = absolute resilience |
| Caretaking roles | ~0.00 | Human judgment + liability = absolute resilience |
The finding that data scientists face 70% automation exposure is striking -- this is a field that didn't exist 15 years ago, built around the kind of pattern recognition that LLMs now perform.
The Cognitive Risk Asymmetry
The study identifies what it calls "cognitive risk asymmetry": non-routine cognitive work faces disproportionate exposure compared to non-routine manual work. The traditional assumption -- that automation climbs from manual to cognitive -- is inverting. Cognitive tasks are often lower-risk to automate because errors are detectable and recoverable. A wrong data analysis can be re-run. A wrong surgical incision cannot.
The Compliance Premium
The researchers propose a "compliance premium hypothesis": as AI capability grows, wage resilience will increasingly depend on an occupation's risk-absorption capacity rather than its skill requirements. Jobs that involve high-consequence decisions in regulated environments will command premiums not because they're hard, but because the cost of failure is high.
This reframes the policy question. The debate shouldn't be "which jobs can AI do?" but "which jobs will organizations risk letting AI do?"
Methodology
The team used a multi-agent LLM ensemble to score each of the 2,087 work activities on both axes, validated by a human expert panel using variance-based analysis. They then aggregated scores using a mathematical bottleneck model -- a single high-risk task in an occupation can anchor the entire OAI score low, even if most tasks are technically automatable.
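A minimal sketch of what such a bottleneck aggregation could look like, assuming per-task scores combine feasibility and risk multiplicatively and the occupation-level OAI is anchored by its weakest (riskiest) task. The exact formula is not given in this summary, so both functions are illustrative assumptions.

```python
def task_score(feasibility: float, risk: float) -> float:
    # Assumed per-task automatability: feasibility discounted by deployment risk.
    return feasibility * (1.0 - risk)


def occupation_oai(tasks: list[tuple[float, float]]) -> float:
    # Bottleneck aggregation (assumed): min() rather than mean(), so a single
    # high-risk activity anchors the whole occupation's score low.
    return min(task_score(f, r) for f, r in tasks)


# An occupation whose tasks are mostly automatable, with one high-risk activity:
tasks = [(0.9, 0.1), (0.85, 0.05), (0.8, 0.9)]
# The mean of the task scores is about 0.57, but the bottleneck holds OAI near 0.08.
print(round(occupation_oai(tasks), 2))  # 0.08
```

The design point the bottleneck captures: averaging would report this occupation as largely automatable, while a min-style aggregation reflects that the one unrecoverable, high-risk task keeps the whole job resistant to deployment.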
The paper runs 32 pages with 4 figures and covers all 923 occupations.