How (un)Stable Are LLM Occupational Exposure Scores?
Every major forecast about which jobs AI will eliminate comes from asking AI to rate itself. We found the answer depends entirely on which AI you ask. If the measurement is unstable, the policies built on it are too.
Key Findings
AI Exposure Scores Are Highly Fragile
Replicating the dominant exposure rubric (Eloundou et al., 2024) with three frontier models on all 18,797 O*NET tasks, we find that mean exposure diverges 3.6-fold. One model rated 14% of tasks as directly exposed; another rated 51%. Inter-model Cohen's kappa was 0.36, well below conventional thresholds for reliable inter-rater agreement.
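Cohen's kappa measures how often two annotators agree beyond what chance alone would produce. A minimal sketch of the statistic, using made-up exposure labels (the model names and ratings below are illustrative, not the paper's data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: share of items where the two annotators match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Expected agreement under independence: sum over categories of p_a * p_b.
    expected = sum(freq_a[c] / n * freq_b[c] / n for c in freq_a | freq_b)
    return (observed - expected) / (1 - expected)

# Toy ratings (E0 = not exposed, E1 = directly exposed) for illustration only:
model_1 = ["E1", "E1", "E1", "E0", "E0", "E0", "E1", "E0"]
model_2 = ["E1", "E1", "E0", "E0", "E0", "E1", "E1", "E0"]
print(round(cohens_kappa(model_1, model_2), 2))  # prints 0.5
```

A kappa of 1.0 is perfect agreement and 0.0 is chance-level; values in the 0.3–0.4 range, as reported here across models, mean the annotators frequently contradict each other on the same items.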
Downstream Conclusions Flip
In difference-in-differences employment regressions, individual-level coefficient magnitudes vary 2.4-fold across annotators. At the county level, one model shows significant job losses while others show no significant effect. The research conclusion depends on which AI was asked.
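The difference-in-differences logic behind these regressions compares the outcome change in a treated group (e.g. high-exposure occupations) against the change in a control group over the same period. A minimal 2x2 sketch with hypothetical numbers (the variable names and values are assumptions for illustration, not the paper's specification or data):

```python
from statistics import mean

def did_estimate(panel):
    """Canonical 2x2 difference-in-differences on (treated, post, outcome) rows."""
    cell = lambda t, p: mean(y for tr, po, y in panel if tr == t and po == p)
    # (treated change over time) minus (control change over time).
    return (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))

# Hypothetical log-employment outcomes, illustration only:
panel = [
    # (high_exposure, post_llm, log_employment)
    (1, 0, 4.60), (1, 0, 4.58),
    (1, 1, 4.50), (1, 1, 4.48),
    (0, 0, 4.20), (0, 0, 4.22),
    (0, 1, 4.21), (0, 1, 4.23),
]
print(round(did_estimate(panel), 3))  # prints -0.11
```

The fragility finding is that swapping the annotator changes which occupations count as "treated", which in turn moves this estimate enough to flip its significance.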
Adoption Drives Capability Measurement
Occupations with higher observed AI usage show significantly larger increases in measured exposure across model generations (coefficient = 0.335, p < 0.05). The measurement instrument evolves with adoption, creating a feedback loop that systematically underrepresents communities with lower AI access.
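The feedback-loop claim rests on a regression of the change in measured exposure across model generations on observed adoption. A minimal least-squares slope sketch with made-up occupation-level data (the variables and values are hypothetical; the paper's reported coefficient is 0.335):

```python
def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical occupation-level data, illustration only:
adoption = [0.05, 0.10, 0.20, 0.35, 0.50]          # observed AI-usage share
exposure_change = [0.02, 0.05, 0.08, 0.13, 0.18]   # change in measured exposure
print(round(ols_slope(adoption, exposure_change), 3))
```

A positive slope means the instrument assigns larger exposure gains exactly where adoption is already high, so low-adoption communities are systematically under-measured.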
Global Policy Built on Narrow Data
Only 16.3% of the global population has ever used generative AI. The BLS, OECD, ILO, IMF, and WEF all use these scores for employment projections affecting billions. We are making policy for the whole world based on data from roughly 1 in 6 people.
Figure 3: E1 Exposure by Detailed Occupation and Annotator
95 three-digit SOC occupations ordered by cross-model disagreement. The largest spreads concentrate in supervisory roles and occupations that combine cognitive and physical tasks — precisely where the boundary between LLM-capable and non-LLM-capable work is most ambiguous. Spreads reach up to 77 percentage points on identical tasks, solely because a different model assigned the scores.
Yin, M., Vu, H., & Persico, C. (2026). Figure 3: E1 exposure by detailed occupation and annotator. In How (un)stable are LLM occupational exposure scores? Evidence from multi-model replication (NBER Working Paper No. 35110, p. 32). National Bureau of Economic Research.
https://riseilab.org/pub-ai-measurement.html#figure3
The Adaptive Precision Framework
If AI capabilities are a moving target, adoption varies enormously, and the measurement instruments are circular, what is the alternative? The companion policy brief proposes Adaptive Precision: using AI-enabled real-time data to continuously recalibrate what we teach, how we hire, how we design jobs, and how we deliver services.
Personalized Learning
Curricula adjusted each semester based on sector-specific adoption data.
Personalized Hiring
Assessment rubrics recalibrated as new populations adopt AI tools.
Personalized Job Design
Task bundles restructured continuously as AI capabilities evolve.
Personalized Services
Workforce development tailored to each community's adoption landscape.