Building an AI Skills Matrix for Your Department: Template and Guide
Apr 14, 2026
Here's how most AI training budgets get spent: someone in L&D signs up for a popular online course, licenses get pushed to everyone with a laptop, and six months later the completion rate is 23%. The people who needed it most didn't finish it. The people who didn't need it did.
The problem isn't the training. It's that nobody mapped what skills were actually required, who already had them, and where the real gaps were before the purchase order was signed.
An AI skills matrix solves this. It's a structured document that maps required AI proficiency levels by role, captures current state through assessment, and gives you a gap score for each person, role, and team. When it's built well, it turns "we need AI training" from a vague priority into a specific development plan.
This guide walks you through building one from scratch. By the end, you'll have a working template you can fill in for your department today.
Why This Matters Right Now
AI skills have a faster decay rate than almost any other capability in the workforce. What counted as advanced AI fluency 18 months ago is now table stakes. The roles themselves are shifting. Sales ops reps are being asked to automate reporting, marketers are running AI-assisted content workflows, and analysts are expected to query data with natural language tools they didn't know existed a year ago. The McKinsey Global Institute estimates that the half-life of specific technical skills is dropping below 2.5 years, making continuous skills reassessment essential rather than periodic.
Job descriptions are lagging. Most organizations still have job requirements written for a pre-AI world. That means you're hiring and developing people against outdated benchmarks, and the skills gaps you're creating today will show up in performance gaps within 12 to 18 months. The LinkedIn AI skills demand surge in 2026 makes this concrete: AI-adjacent skills are the fastest-growing requirement across both technical and non-technical job postings.
A skills matrix gives you a living document. Unlike a one-time survey or a training completion report, the matrix updates as roles evolve and people develop. It becomes the source of truth for decisions about training investment, hiring priorities, and team structure.
Building the Matrix: Step by Step
Step 1: Define the AI Skill Categories
Start by agreeing on what "AI skills" actually means for your department. Generic categories like "AI literacy" are too broad to assess or develop against. You need categories specific enough to connect to real job tasks.
These five categories cover most commercial and operational functions:
1. Prompt Engineering: The ability to write clear, structured prompts that produce useful outputs from AI tools. This includes breaking down complex tasks, iterating on outputs, and knowing when a prompt needs more context versus a different approach. Not a technical skill. A communication skill.
2. Data Interpretation: Reading and drawing conclusions from AI-generated analyses, reports, and visualizations. Understanding confidence levels, knowing when outputs need verification, and recognizing when a model is hallucinating or overreaching. Increasingly required even for non-analyst roles.
3. Workflow Automation: Using AI tools to automate repeatable tasks: drafting communications, summarizing documents, triaging inboxes, generating reports. At higher levels, this includes connecting tools via integrations and building lightweight automations without engineering support.
4. AI Governance: Understanding when and how to use AI responsibly within company policy. This includes data privacy awareness, knowing what can and can't be put into a public AI model, recognizing bias in outputs, and following compliance requirements. Critical for any role handling customer data or regulated information. (For teams building out the policy side, creating an AI governance policy for your department walks through the specific rules and approval workflows that governance-level roles need to understand.)
5. Tool-Specific Skills: Proficiency in the specific AI tools your organization uses: CRM AI features, writing assistants, meeting summarizers, analytics platforms, and so on. These are role-specific and change as your tool stack evolves.
Add or remove categories based on what's actually relevant to your teams. A customer service department might add "AI-assisted escalation handling." A legal team might emphasize "AI document review" over workflow automation.
Step 2: Map Roles to Required Proficiency Levels
For each role in your department, define the required proficiency level for each skill category. Use three levels:
Aware: Understands what this skill is, why it matters, and can participate in basic activities with guidance. Doesn't need to be an independent practitioner.
Practitioner: Can independently apply this skill in day-to-day work. Produces good outputs reliably without needing coaching on basic tasks. The standard for most contributor roles.
Expert: Teaches others, handles edge cases, and drives adoption. Can adapt skills to novel situations. Builds new workflows or prompts that others use. The standard for leads, specialists, and senior roles.
Example: Sales Team Role Mapping
| Skill Category | SDR | Account Executive | Sales Ops | Sales Manager |
|---|---|---|---|---|
| Prompt Engineering | Practitioner | Practitioner | Expert | Practitioner |
| Data Interpretation | Aware | Practitioner | Expert | Practitioner |
| Workflow Automation | Aware | Practitioner | Expert | Aware |
| AI Governance | Aware | Aware | Practitioner | Practitioner |
| Tool-Specific Skills | Practitioner | Practitioner | Expert | Practitioner |
This is a starting point, not a final answer. Review the role mapping with managers who know what each role actually does day-to-day. The required level for a skill should reflect what the job demands, not what would be nice to have.
Step 3: Assess Current State
With required levels defined, you need to assess where people actually are. Use a two-step approach: self-assessment followed by manager validation. If you haven't already done a behavioral readiness check before this step, auditing your sales team's AI readiness gives you five observable dimensions that complement the skills matrix — together, they give a complete picture of where the team actually stands.
Self-Assessment
Ask each team member to rate themselves against each skill category using the Aware / Practitioner / Expert scale. Give them the definitions above so the ratings mean the same thing across people. Keep it short. This should take 10 minutes, not an hour.
Add one open-ended question per category: "What's a specific example of how you've used this skill in the last 30 days?" This forces people to ground their self-assessment in actual behavior, not aspirational self-image.
Manager Validation
Managers review each direct report's self-assessment and apply an independent rating. Disagreements are surfaced in a 15-minute conversation, not to argue but to align on what "Practitioner" actually means for that role. Manager ratings should be treated as the final score.
Research on self-assessed skills consistently shows that people overrate themselves in areas they find interesting and underrate themselves in areas they don't prioritize. Manager validation corrects for this. The goal isn't to challenge people. It's to get an accurate baseline. Studies on self-assessment bias document this pattern rigorously — the Dunning-Kruger effect is especially pronounced in emerging skill areas where individuals have limited peer benchmarks to calibrate against.
Step 4: Calculate Gap Scores
Gap score = Required level minus current level, per skill category per person.
If a role requires Practitioner (score: 2) and the person is currently Aware (score: 1), the gap is 1. If they're already at Practitioner, the gap is 0. If they're Expert and only Practitioner is required, the gap is -1 (a potential resource for teaching others).
Scoring the levels numerically:
- Aware = 1
- Practitioner = 2
- Expert = 3
Gap Analysis Summary Table: Sales Team Example
| Rep Name | Prompt Eng Gap | Data Interp Gap | Workflow Auto Gap | AI Governance Gap | Tool Gap | Total Gap |
|---|---|---|---|---|---|---|
| Rep A | 0 | +1 | +1 | 0 | 0 | 2 |
| Rep B | +1 | +1 | +2 | +1 | +1 | 6 |
| Rep C | 0 | 0 | 0 | 0 | -1 | -1 |
| Rep D | +1 | 0 | +1 | 0 | +1 | 3 |
Sort by total gap score to prioritize training investment. Rep B needs the most support. Rep C is a candidate to help teach others. But also look at patterns across the team. If everyone has a gap in Workflow Automation, that's a team-level training need, not an individual one. People with negative total gap scores — like Rep C — are strong candidates for an AI champions program, where their advanced fluency becomes an asset for accelerating the rest of the team.
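If you keep the matrix in a spreadsheet export, the gap calculation is simple to script. Here is a minimal Python sketch; the three level names match the article, but the role profile, the person, and the function names are illustrative, not part of any standard tool:

```python
# Numeric scores for the three proficiency levels, as defined in Step 4.
LEVELS = {"Aware": 1, "Practitioner": 2, "Expert": 3}

def gap_scores(required, current):
    """Gap per skill category: required level minus current level."""
    return {skill: LEVELS[required[skill]] - LEVELS[current[skill]]
            for skill in required}

def total_gap(required, current):
    """Sum of per-category gaps; sort people by this to prioritize training."""
    return sum(gap_scores(required, current).values())

# Illustrative data: the SDR role profile from the example mapping,
# and a hypothetical person ("Alex") with manager-validated ratings.
sdr_required = {"Prompt Engineering": "Practitioner",
                "Data Interpretation": "Aware",
                "Workflow Automation": "Aware",
                "AI Governance": "Aware",
                "Tool-Specific Skills": "Practitioner"}
alex_current = {"Prompt Engineering": "Aware",
                "Data Interpretation": "Aware",
                "Workflow Automation": "Aware",
                "AI Governance": "Aware",
                "Tool-Specific Skills": "Practitioner"}

print(gap_scores(sdr_required, alex_current))
print(total_gap(sdr_required, alex_current))  # 1: one level short on prompts
```

A negative gap falls out of the same arithmetic automatically: anyone rated above their role's requirement in a category contributes a negative number, flagging them as a potential coach.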
Matrix Template: Blank Version
Copy this structure into a spreadsheet and fill in for your department.
Tab 1: Role Requirements
| Role | Prompt Engineering | Data Interpretation | Workflow Automation | AI Governance | Tool-Specific Skills |
|---|---|---|---|---|---|
| [Role 1] | A / P / E | A / P / E | A / P / E | A / P / E | A / P / E |
| [Role 2] | A / P / E | A / P / E | A / P / E | A / P / E | A / P / E |
Tab 2: Current State Assessment
| Name | Role | Prompt Eng (Self) | Prompt Eng (Manager) | Data Interp (Self) | Data Interp (Manager) | ... |
|---|---|---|---|---|---|---|
Tab 3: Gap Analysis
| Name | Role | Prompt Eng Gap | Data Interp Gap | Workflow Auto Gap | AI Governance Gap | Tool Gap | Total Gap |
|---|---|---|---|---|---|---|---|
Tab 4: Development Plan
| Name | Priority Gap | Recommended Action | Owner | Timeline | Status |
|---|---|---|---|---|---|
Common Pitfalls
Overcomplicating the proficiency levels. Five levels might feel more precise than three, but they create disagreement and slow down the assessment. Aware / Practitioner / Expert is enough resolution to drive good decisions. Add levels only if you have a specific reason to.
Skipping manager validation. Self-assessments without validation produce optimistic gap scores. When you build the development plan, you'll invest in people who don't actually have the gaps you think they do, and miss the people who do. Manager validation takes two hours and is worth it every time. Deloitte's human capital research on skills mapping shows that organizations combining self-assessment with manager validation achieve 30-40% more accurate gap identification than self-assessment alone.
Treating the matrix as a one-time exercise. The value compounds when the matrix is updated regularly. AI skills that were advanced six months ago are now table stakes. Schedule a full re-assessment every six months and a lightweight manager check-in quarterly. Block time for this now, not when you remember. The AI certification and credentialing market in 2026 is shifting fast enough that what qualifies as "Practitioner" in your matrix may need to change before the year is out.
Building the matrix for every role at once. Start with one team or function, complete the full cycle (requirements, assessment, gap analysis, development plan), and learn from it before expanding. Trying to do the whole organization in one sprint usually produces a half-finished matrix that nobody trusts.
Focusing the matrix only on people, not roles. The role requirements tab is as important as the individual assessment tabs. If your role requirements change (and they will), everyone's gap scores change automatically. Keep the requirements current.
Worked Example: Marketing Team
To make this concrete, here's a complete example for a small marketing team.
Role Requirements
| Role | Prompt Eng | Data Interp | Workflow Auto | AI Governance | Tool Skills |
|---|---|---|---|---|---|
| Content Writer | Practitioner | Aware | Practitioner | Aware | Practitioner |
| Marketing Analyst | Practitioner | Expert | Practitioner | Practitioner | Expert |
| Demand Gen Manager | Practitioner | Practitioner | Expert | Practitioner | Practitioner |
| Marketing Director | Aware | Practitioner | Aware | Expert | Aware |
Current State (manager-validated)
| Name | Role | Prompt Eng | Data Interp | Workflow Auto | AI Governance | Tool Skills |
|---|---|---|---|---|---|---|
| Sarah | Content Writer | Practitioner | Aware | Aware | Aware | Practitioner |
| James | Marketing Analyst | Aware | Practitioner | Aware | Aware | Practitioner |
| Priya | Demand Gen Mgr | Practitioner | Practitioner | Practitioner | Aware | Practitioner |
| Tom | Marketing Director | Aware | Aware | Aware | Aware | Aware |
Gap Scores
| Name | Prompt Eng | Data Interp | Workflow Auto | AI Governance | Tool Skills | Total |
|---|---|---|---|---|---|---|
| Sarah | 0 | 0 | +1 | 0 | 0 | 1 |
| James | +1 | -1 | +1 | +1 | -1 | 1 |
| Priya | 0 | 0 | -1 | +1 | 0 | 0 |
| Tom | 0 | +1 | 0 | +1 | 0 | 2 |
Read: Sarah needs workflow automation training. James needs prompt engineering and governance but is already strong on data and tools (potential coach for those areas). Tom needs data interpretation and governance, both relevant to his oversight responsibilities. Priya is fully ready and is a resource for workflow automation coaching.
Measuring Success
Percentage of roles with completed assessments: Target 100% for each department before starting the development plan. Partial data produces misleading gap scores.
Average gap score by team: Track this per skill category. If Workflow Automation gaps are improving quarter-over-quarter, your training investment in that area is working. If they're not moving, the training approach needs adjustment. MIT Sloan Management Review research on workforce learning finds that organizations tracking skill gap closure rate as a formal metric see 2x better training ROI than those tracking only completion rates.
Improvement quarter-over-quarter: Re-assess every six months. Calculate the change in average gap score per category. This is your skills ROI metric, the one number that tells you whether your AI development investment is paying off. Pair this with the framework for measuring AI adoption ROI across your team, which connects skills gap improvement to business outcomes leadership will actually fund.
Manager validation completion rate: If managers aren't completing validations, the matrix loses accuracy quickly. Track this separately and treat it as a manager accountability metric.
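Once you have two assessment cycles exported, the team-level metrics above reduce to a few averages. A minimal sketch, assuming you've collected each person's gap score per category; the category names match the article, but the sample numbers and function names are hypothetical:

```python
# Gap scores per person for two assessment cycles (hypothetical numbers,
# one entry per team member in each list).
q1 = {"Workflow Automation": [2, 1, 1, 2], "AI Governance": [1, 1, 0, 2]}
q3 = {"Workflow Automation": [1, 0, 1, 1], "AI Governance": [1, 1, 0, 1]}

def avg_gap_by_category(cycle):
    """Average gap score per skill category across the team."""
    return {skill: sum(gaps) / len(gaps) for skill, gaps in cycle.items()}

def improvement(before, after):
    """Drop in average gap per category; positive means gaps are closing."""
    b, a = avg_gap_by_category(before), avg_gap_by_category(after)
    return {skill: round(b[skill] - a[skill], 2) for skill in b}

print(improvement(q1, q3))
# {'Workflow Automation': 0.75, 'AI Governance': 0.25}
```

A category whose improvement hovers near zero across two cycles is the signal to change the training approach, not just repeat it.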
Reassessment Schedule
| Activity | Cadence |
|---|---|
| Full assessment (all roles, all skills) | Every 6 months |
| Manager check-in (spot-check individual progress) | Quarterly |
| Role requirements review (update required levels) | Every 6 months (or when major tool or role changes occur) |
| Development plan review | Monthly (as part of 1:1s) |
Put these dates in the calendar before you finish building the matrix. The most common reason skills matrices become useless is that nobody scheduled the follow-up.
Learn More
The skills matrix is the diagnostic. These guides help you act on what you find:
- How to Audit Your Sales Team's AI Readiness: A broader behavioral readiness audit before tool investment
- 90-Day Plan: From AI-Curious to AI-Fluent: The structured training plan to close gaps after assessment
- Hiring vs Upskilling: Decision Framework for Directors: When gaps are too large to train and you need to hire
- AI Skills Gap Executives Are Getting Wrong: Why most executives misdiagnose their team's actual skill deficits
- AI Non-Technical Job Postings Surge in 2026: What employers are now requiring even for non-technical roles

Co-Founder & CMO, Rework