When AI is moving faster than policy, measurement has to move too.
Our technology portfolio asks how emerging AI changes the calculus of employment, education, and opportunity — and what rigorous measurement, policy design, and workforce training require when the technology itself is in motion.
Four LLMs, 95 occupations, four different answers.
The AI occupational exposure scores currently driving workforce forecasts and policy design come from asking AI to rate itself. We tested GPT-4, Claude, Gemini, and Llama on the same 95 occupations. They diverged by up to 77 percentile points. Some occupations flipped sign entirely — low-exposure on one model, high-exposure on another.
Static measurement doesn't survive contact with the rate of model change. We propose the Adaptive Precision Framework: continuous recalibration of education, hiring, and workforce policy as exposure estimates evolve.
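What "diverged by up to 77 percentile points" means in practice can be sketched with synthetic numbers: rank each model's raw exposure scores into percentiles, then measure how far the four models spread apart on the same occupation, and count occupations that land below the median on one model but above it on another. Everything below is a random stand-in, not real model output; the shapes (4 models, 95 occupations) mirror the study design.

```python
import numpy as np

def percentile_ranks(scores):
    """Convert one model's raw exposure scores to percentile ranks (0-100)."""
    order = scores.argsort().argsort()            # rank of each occupation
    return 100.0 * order / (len(scores) - 1)

rng = np.random.default_rng(0)
n_models, n_occupations = 4, 95
raw = rng.random((n_models, n_occupations))       # hypothetical raw scores

ranks = np.stack([percentile_ranks(m) for m in raw])   # shape (4, 95)
spread = ranks.max(axis=0) - ranks.min(axis=0)         # per-occupation disagreement

print(f"max cross-model divergence: {spread.max():.0f} percentile points")

# Occupations that "flip sign": low-exposure on one model, high on another
flips = ((ranks < 50).any(axis=0) & (ranks > 50).any(axis=0)).sum()
print(f"occupations flipping low/high: {flips} of {n_occupations}")
```

With random stand-in scores the spread is large by construction; the study's point is that real model outputs behave similarly, which is what makes a single snapshot unreliable.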
Read the paper
Read the policy brief

"Static AI exposure scores are not data. They are opinions, in disguise. Policy built on them is policy built on one model's judgment — on the day it was queried."
— Yin, Vu & Persico, 2026
Four active fronts in emerging technology.
Each thread is an active line of research, in dialogue with the others and with the workforce and education programs RISEI evaluates day to day.
AI Occupational Exposure Measurement
Stability, reliability, and validity of AI-based occupational and skill exposure scores. How to construct measurement that survives model change — and how to score the scores themselves. NBER Working Paper #35110.
Adaptive Precision Framework
A policy-design response to unstable AI measurement: continuous recalibration rather than freezing policy on any single model's guess.
AI Training & Tooling
Curriculum and custom tools for agency staff, VR counselors, and evaluation partners. Admin-data linkage, NLP coding, fidelity dashboards, literature review — all adaptable for partner grants.
Future of Work & Automation
Where AI, robotics, and automation change the nature of work for occupations on the margin — and what apprenticeship pipelines need to teach. Connects the lab's tech and workforce portfolios.
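The measurement thread's aim of "scoring the scores" can be sketched as a simple agreement check: compute rank agreement between every pair of models, and treat low agreement as a warning that the measurement will not survive model change. This is an illustrative sketch with synthetic scores, not the working paper's actual method.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the two rank vectors."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(1)
scores = rng.random((4, 95))   # hypothetical: 4 models x 95 occupations

# Pairwise agreement across all model pairs; values near 1 mean the models
# rank occupations similarly, values near 0 mean the ranking is unstable.
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
agreement = np.array([spearman(scores[i], scores[j]) for i, j in pairs])
print(f"mean pairwise rank agreement: {agreement.mean():.2f}")
```

A reliability summary like this is cheap to recompute whenever a model updates, which is the kind of continuous check the Adaptive Precision Framework calls for.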
The training module behind AI for case managers.
From the research question to the classroom: a RISEI-led AI training curriculum deployed with Virginia DARS (Department for Aging and Rehabilitative Services) staff.
How counselors actually use the tools — written for case managers, not engineers.
Virginia DARS serves thousands of workers with disabilities through vocational rehabilitation. RISEI built a three-part training curriculum translating AI capabilities into the actual workflow of a VR counselor — intake, plan development, reporting. No jargon, no hype, just what the tool does and when to trust it.
The same curriculum pattern now powers RISEI's AI training for other federal and state partners. Bring us in at the proposal stage if you need this as a deliverable.
Evaluation partnership →

Where this research shows up in the field.
Active lab projects where emerging-tech research intersects workforce, education, and evaluation practice.
How (un)Stable Are LLM Exposure Scores?
The flagship working paper. First empirical audit of occupational exposure-score stability across GPT-4, Claude, Gemini, and Llama on 95 occupations. Yin, Vu & Persico, 2026.
The Adaptive Precision Framework
Companion policy brief to NBER WP #35110. How to structure education, hiring, and workforce policy so evolving AI estimates don't lock in an obsolete guess.
Adjacent publications.
Work that intersects emerging technology research — digital access, online learning, and the economics of tech-enabled workforce programs.
Embedding AI or automation in a workforce or education grant?
RISEI serves as named evaluator on NSF, IES, and DOL grants that deploy AI tools, AI training, broadband interventions, or automation-facing workforce programs. The earlier we're in, the stronger the evaluation — and the proposal.
Start a partnership →