Companies Are Spending $1,800 Per Employee on AI Reskilling — Is That Enough?
The average company spent $1,800 per employee on AI reskilling in 2025. The companies that actually moved the productivity needle spent between $5,400 and $9,200.
That gap is the story. Not the $1,800 figure itself, but the 3-5x difference between what average organizations are spending and what organizations with measurable AI productivity gains are committing. And the way they're spending it is as different as the amount.
Josh Bersin's 2025 Global Workforce Intelligence Report, combined with LinkedIn Learning's 2025 Workplace Learning Report and ATD's AI Training Investment Survey, gives operations leaders the most complete picture yet of what AI reskilling actually costs, where the money goes, and what separates training investments that change workforce behavior from those that don't.
The $1,800 Benchmark in Context
The $1,800 per-employee figure comes from Josh Bersin's analysis of disclosed L&D budgets across 800 organizations. But to understand what it means, it helps to put it alongside total L&D spend: the average company spends about $1,275 per employee per year on all learning and development. AI reskilling in 2025 effectively represented a separate, additional budget line. Organizations aren't redirecting existing L&D spend. They're adding to it.
That context matters because it reframes the "is it enough" question. The real question isn't whether $1,800 per employee is adequate in absolute terms. It's whether that number is producing measurable outcomes — and the data suggests, at the median, it's not.
Bersin's analysis found that organizations spending $1,500-$2,500 per employee on AI reskilling showed an average 9% improvement in self-reported AI tool usage. Useful, but not transformational. The organizations showing 25%+ productivity improvement on measurable outputs — output volume, cycle time, error rates — were spending significantly more and structuring programs differently.
Where the Money Is Going
The $1,800 average breaks down roughly as follows across the organizations in the ATD dataset:
| Budget Category | Average Allocation | High-Performer Allocation |
|---|---|---|
| External courses / certifications | 38% ($684) | 21% ($1,134) |
| Internal program development | 22% ($396) | 31% ($1,674) |
| AI tool licensing (training-specific) | 18% ($324) | 19% ($1,026) |
| Manager and cohort coaching | 12% ($216) | 24% ($1,296) |
| Skills assessment and measurement | 10% ($180) | 5% ($270) |
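The dollar figures in the table follow directly from each percentage applied to total spend. A minimal sketch of that arithmetic, assuming a $1,800 average budget and a $5,400 high-performer budget (the low end of the $5,400-$9,200 range reported above; the category names are shorthand, not ATD's labels):

```python
# Reconstruct the table's dollar allocations from budget shares.
# Totals are illustrative: $1,800 average, $5,400 high performer.
SHARES = {
    # category: (average share, high-performer share)
    "external_courses":  (0.38, 0.21),
    "internal_programs": (0.22, 0.31),
    "tool_licensing":    (0.18, 0.19),
    "manager_coaching":  (0.12, 0.24),
    "skills_assessment": (0.10, 0.05),
}

def allocations(budget: float, column: int) -> dict:
    """Dollar allocation per category for one budget column (0=avg, 1=high)."""
    return {cat: round(budget * shares[column]) for cat, shares in SHARES.items()}

avg = allocations(1800, 0)   # e.g. manager_coaching -> 216
hp = allocations(5400, 1)    # e.g. manager_coaching -> 1296
```

Running the same function against the top of the high-performer range ($9,200) shows why the coaching line item diverges so sharply in absolute dollars even though the share only doubles.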
The structural difference is visible in two line items: manager and cohort coaching, and internal program development.
Average organizations spend 12% of their AI training budget on manager-led and cohort-based learning. High performers spend 24% — twice the share, and a far larger absolute dollar figure given that their total spend is 3-5x higher. That's not a coincidence. The program completion rate data explains it directly.
Completion Rates Tell the Real Story
ATD's survey data on AI training completion rates by program type is the most practically useful data point in the entire benchmark set:
| Program Type | Average Completion Rate |
|---|---|
| Self-paced online courses (external) | 23% |
| Self-paced internal learning paths | 31% |
| Manager-facilitated cohort programs | 74% |
| Embedded workflow training (tool-integrated) | 81% |
Self-paced external courses — the largest spend category for average organizations — complete at 23%. Less than one in four employees who start an AI certification actually finish it.
Manager-facilitated cohort programs complete at 74%. Embedded workflow training, where learning is built directly into the tools employees use daily, completes at 81%.
This is why high performers shift budget away from external courses and toward manager coaching and internal program development. It's not that external certifications are worthless. It's that low completion rates make the per-productive-outcome cost far higher than the sticker price suggests. A $400 Coursera license that 23% of employees complete produces very different ROI than a $400-per-person cohort program that 74% complete.
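The completion math in that comparison can be made explicit. A sketch using the completion rates from the table above (the $400 figure is the article's illustrative price, not a vendor quote):

```python
def cost_per_completion(price_per_seat: float, completion_rate: float) -> float:
    """Effective spend per employee who actually finishes the program."""
    return price_per_seat / completion_rate

# ATD completion rates by program type, from the table above.
external_course = cost_per_completion(400, 0.23)  # self-paced external
cohort_program = cost_per_completion(400, 0.74)   # manager-facilitated cohort
```

At identical sticker prices, the external course costs roughly $1,739 per completer versus roughly $541 for the cohort program — a 3x gap before any difference in learning quality is considered.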
For Heads of Operations designing reskilling programs, this data has a direct implication: program structure matters as much as content quality, and manager involvement is the primary driver of completion.
The ROI Calculation High Performers Are Using
Average organizations measure AI training ROI primarily through completion rates and satisfaction scores, neither of which correlates strongly with actual productivity change.
High performers measure differently. Three metrics appear consistently in the organizations Bersin identifies as producing measurable productivity gains:
Output-per-hour on AI-relevant tasks. For roles using AI writing tools, this might be content pieces per week. For analysts using AI modeling tools, it's models validated per week. For operations coordinators using AI workflow tools, it's process cycles completed per day. The key is measuring task-specific throughput before and after training, not general satisfaction.
Time-to-proficiency for new hires in AI tool environments. High performers track how long it takes new employees to reach competency benchmarks on AI tools in their role, and they use this metric to evaluate whether their onboarding program is working, not just whether new hires completed required modules.
Reskilling vs. rehiring cost comparison. This is the most direct budget justification metric. Workers with AI fluency command a 27% salary premium compared to equivalent non-AI-fluent hires. For a $75K operations role, that's roughly a $20K annual cost difference per hire. If reskilling an existing employee costs $4,500 and keeps them in role for three more years, the comparison is straightforward: reskilling at $4,500 vs. rehiring at a roughly $60K three-year premium. The ROI math favors reskilling heavily, even at high-performer spending levels.
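That reskill-vs-rehire arithmetic can be sketched directly. The salary, premium, horizon, and reskilling cost are the article's example figures, not universal constants:

```python
def rehire_premium(base_salary: float, fluency_premium: float, years: int) -> float:
    """Cumulative extra salary cost of hiring AI-fluent talent externally."""
    return base_salary * fluency_premium * years

def reskill_savings(base_salary: float, fluency_premium: float,
                    years: int, reskill_cost: float) -> float:
    """Net savings from reskilling an existing employee instead of rehiring."""
    return rehire_premium(base_salary, fluency_premium, years) - reskill_cost

# Article's example: $75K operations role, 27% premium, 3-year horizon,
# $4,500 reskilling cost per employee.
premium = rehire_premium(75_000, 0.27, 3)
savings = reskill_savings(75_000, 0.27, 3, 4_500)
```

The three-year premium comes to about $60,750, so even a high-performer-level reskilling spend of $9,200 would still leave the math heavily in favor of reskilling — the breakeven point is the full premium itself.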
Two Examples of Measurable Reskilling ROI
Zurich Insurance Group (2025): Zurich deployed a manager-facilitated AI fluency program across its underwriting and claims teams, structured as eight-week cohorts with manager-led weekly review sessions. Spend: approximately $6,200 per employee in the program cohort. Outcome: 34% reduction in routine documentation cycle time and 28% improvement in underwriter output per day on AI-assisted assessments, measured at six months post-program. Zurich attributed $43M in annualized productivity improvement to the cohort that went through the program in 2025.
Kyndryl (2025-2026): The IT services firm built AI skill development directly into its delivery tool workflows rather than running standalone training programs, embedding micro-learning prompts inside the tools employees use daily. Completion rate for workflow-embedded modules: 84%. Outcome: 22% improvement in ticket resolution time and a measurable reduction in escalation rates for AI-assisted support workflows. Budget per employee: approximately $3,800, below the high-performer average — but program structure produced high-performer results.
The Kyndryl example is particularly instructive. They didn't spend at the top of the high-performer range, but their completion rate, driven by embedding training in workflow tools rather than separating it into standalone courses, produced outcomes comparable to much higher-spend programs.
What This Means If You're Designing a Program Now
The benchmark data points toward a few clear design principles for operations leaders:
Don't anchor on $1,800. That's the median of all organizations, most of which aren't producing measurable outcomes. If you're designing a program to drive actual productivity change, the relevant benchmark is $5,000-$9,000 per employee for roles where AI tool fluency has high impact, and zero for roles where it doesn't. Not every role needs the same investment.
Prioritize manager involvement over content volume. The completion rate gap between self-paced and manager-facilitated programs is 50+ percentage points. A smaller program with manager accountability will outperform a comprehensive program that 23% of employees finish. Build manager AI fluency first, before scaling to the broader workforce. An AI champions program is one way to build that manager-level capability quickly without requiring every manager to become an AI expert from scratch.
Measure outputs, not completion. Completion rates are easy to track and essentially meaningless as ROI indicators. If your program's success metric is "X% of employees completed Module 3," you're measuring activity, not change. Define what AI tool proficiency looks like in observable work output for each function, and measure that.
AI skill requirements are now showing up in non-tech job postings across marketing, finance, legal, and operations, which means the external pressure to close the AI fluency gap is no longer confined to tech hiring. Under-investing in reskilling today means facing a wider skills gap in hiring markets across every function simultaneously.
What to Watch
Two forward-looking dynamics will shape the corporate reskilling market over the next 12-24 months.
First, ROI measurement standardization. Right now, every organization measures AI training ROI differently, or doesn't measure it at all. If the industry converges on standard output metrics (similar to how sales organizations standardized on revenue per rep), budget conversations will become easier to anchor. Josh Bersin's team is actively working on an AI Productivity Index framework that may accelerate this standardization.
Second, vendor consolidation. The AI training market is currently fragmented — hundreds of providers selling certifications, cohort programs, and platform-embedded learning with widely varying quality and outcomes. Consolidation is already underway, with LinkedIn Learning, Coursera for Business, and a handful of enterprise-focused platforms capturing growing share. As vendor options narrow, pricing and quality benchmarks will become more comparable — which should improve the signal-to-noise ratio on what actually works.
The AI certification market hit $4B in 2026, but the growth in spend hasn't been matched by growth in quality standards. For now, the most reliable indicator of a program that will actually move productivity is completion rate architecture — and the data says cohort and workflow-embedded programs win that comparison by a wide margin.
The $1,800 average tells you where the market is. The $5,000-$9,000 high-performer range tells you what it takes to actually close the gap.
Learn More
- Workers with AI Fluency Are Commanding a 27% Salary Premium
- The AI Certification Market Hit $4B — But Only a Handful of Credentials Signal Job Readiness
- AI Skill Requirements Are Now Showing Up in Non-Tech Job Postings
- How to Build the Business Case for AI Training Budget
- The Middle Management AI Obstacle — And How to Turn It Into an Opportunity
