AI Readiness Assessment: Templates and Scorecards for Department Leaders

A Director at a mid-market SaaS company was three weeks from launching an AI-assisted CRM workflow when she ran a readiness assessment on a whim. Not a formal one, just a quick survey she sent to her team of 22 asking them to describe what data lived in their pipeline fields.

Eleven people couldn't answer accurately. Six gave answers that contradicted each other. Three admitted they'd never opened the CRM fields she was planning to use as AI inputs.

She pushed the launch back six weeks and used the time to fix the data hygiene issues the assessment had revealed. The rollout that followed hit a 78% adoption rate at 90 days, well above the department's adoption rates for previous tool rollouts.

The most common reason AI projects stall isn't budget or buy-in. It's starting without knowing what you're starting from. Gartner's research on AI project outcomes found that a significant portion of AI initiatives that fail do so not because of technical limitations but because foundational readiness — data quality, process consistency, team skills — wasn't assessed before deployment began. A team that thinks it's AI-ready because it uses one AI writing tool is very different from a team that's genuinely prepared for AI-native workflows. Before you start the assessment, it helps to understand the full sequence — from assessment through pilot to full rollout — that the running AI pilot programs guide and the change management playbook cover in detail.

This guide gives you every assessment tool you need: a skills survey with scoring rubric, a data readiness scorecard, a process audit template, a tools gap matrix, and a scoring guide that tells you what to do at each level.

What AI Readiness Actually Measures

Most readiness assessments only look at one dimension, usually skills. But AI readiness has four distinct dimensions, and a gap in any one of them can derail a rollout.

Dimension 1: Skills. Can your team use AI tools effectively? Do they understand prompt construction, output evaluation, and when not to trust AI-generated results?

Dimension 2: Data. Is the data your AI tools will need complete, accurate, consistent, and accessible? Bad data inputs produce bad AI outputs at scale. IBM's data quality research estimated that poor data quality costs US businesses $3.1 trillion annually — and AI systems that ingest low-quality data don't just underperform, they produce confidently incorrect outputs that are harder to catch than obvious errors.

Dimension 3: Processes. Are the workflows you want AI to assist actually documented and followed consistently? AI can optimize a process or automate it, but it can't make an undocumented, inconsistently followed process work better.

Dimension 4: Tools. Are your current tools AI-capable? Are you using the AI features you've already paid for? Where are the gaps?

Assessing all four before a rollout gives you a sequenced action plan. Assessing only skills gives you a training program that fails because the data wasn't ready.


Dimension 1: Skills Assessment

AI Literacy Levels

Before running the survey, calibrate expectations. AI literacy exists on a spectrum:

  • Aware: Understands AI tools exist, has used one or two. Can't describe how to get consistent output. Needs foundational training.
  • Capable: Uses AI tools regularly for specific tasks. Can write basic prompts, evaluate output quality, and identify when AI is wrong. Ready for workflow integration.
  • Proficient: Designs AI-assisted workflows, creates prompt templates for the team, trains colleagues informally. Ready for advanced use cases.
  • Advanced: Builds AI systems, evaluates tools, leads AI strategy for the function. Can design governance and measurement frameworks.

Most departments doing an initial readiness assessment will find a distribution across Aware and Capable, with a few Proficient individuals.

AI Skills Assessment Survey (12 Questions)

Distribute to all team members. Score individually, then aggregate.

Instructions: Rate yourself on each statement using the scale: 1 = Not at all / 2 = Somewhat / 3 = Mostly / 4 = Fully

# Statement Score (1-4)
1 I can write a prompt that consistently produces the specific output I need from an AI tool
2 I can tell when an AI-generated output is likely wrong or unreliable
3 I know which tasks in my daily work AI tools can meaningfully assist with
4 I understand the difference between AI-generated text that needs heavy editing vs. text that's usable
5 I can describe the data inputs that my AI tools use to generate outputs
6 I know how to give feedback to an AI tool to improve its outputs without starting over
7 I feel comfortable explaining how I use AI tools to my manager
8 I know how to handle a situation where AI output conflicts with my own judgment
9 I can identify which of my current processes could benefit from AI assistance
10 I understand the data privacy and security policies that govern AI tool use in my role
11 I have used AI to complete a work task that previously took significantly longer
12 I can teach a colleague the basics of how I use AI in my work

Individual Scoring Rubric:

Score Range Level Interpretation
12-24 Aware Needs foundational AI literacy training before workflow integration
25-33 Capable Ready for structured AI tool training with manager reinforcement
34-42 Proficient Ready for advanced use cases; consider for AI champion role
43-48 Advanced Candidate for AI lead, peer trainer, or governance involvement

Team Aggregate Scoring: Sum all individual scores, divide by headcount. Interpret using the same thresholds.
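If you collect responses in a spreadsheet export, the individual and team scoring can be handled with a short script. This is a minimal sketch in Python; the input format and the example names are assumptions for illustration, and the thresholds mirror the rubric above.

```python
# Minimal sketch: score the 12-question skills survey (answers on a 1-4 scale).
# Assumes each person's answers arrive as a list of 12 integers; adapt to your export format.

RUBRIC = [            # (minimum total score, level) per the rubric above
    (43, "Advanced"),
    (34, "Proficient"),
    (25, "Capable"),
    (12, "Aware"),
]

def classify(total: int) -> str:
    """Map an individual total (12-48) to an AI literacy level."""
    for minimum, level in RUBRIC:
        if total >= minimum:
            return level
    return "Aware"

def score_team(responses: dict[str, list[int]]) -> None:
    """Print each person's level, then the team aggregate (sum of scores / headcount)."""
    totals = {name: sum(answers) for name, answers in responses.items()}
    for name, total in sorted(totals.items()):
        print(f"{name}: {total} -> {classify(total)}")
    team_avg = sum(totals.values()) / len(totals)
    print(f"Team average: {team_avg:.1f} -> {classify(round(team_avg))}")

# Hypothetical responses for illustration only
score_team({
    "Rep A": [2, 3, 3, 2, 1, 2, 3, 2, 3, 2, 3, 2],
    "Rep B": [4, 4, 3, 4, 3, 3, 4, 3, 4, 3, 4, 3],
})
```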

Skills by Role (minimum Capable level recommended):

Role Priority Skills for AI Readiness
Sales Rep Prompt writing for prospecting, AI CRM features, output evaluation
Sales Manager Pipeline AI review, coaching AI tools, forecast interpretation
Marketing AI content generation, campaign analytics, lead score understanding
Ops/RevOps AI reporting, workflow automation, data quality monitoring
Customer Success AI summarization, health score interpretation, ticket triage
Director/VP AI strategy basics, ROI evaluation, governance awareness

Dimension 2: Data Readiness Assessment

AI tools are only as good as the data they work with. This scorecard evaluates the four data quality dimensions that most affect AI performance.

Data Readiness Scorecard (10 Criteria)

Rate each criterion using: Red = Not met / Yellow = Partially met / Green = Fully met

Completeness

# Criterion Red Yellow Green
1 Key data fields required for AI use cases are filled in for 80%+ of records
2 Missing data has a documented process for collection or backfill

Accuracy

# Criterion Red Yellow Green
3 Data is validated at entry (required fields, format checks, deduplication)
4 Team members understand what "correct" data looks like for key fields
5 There is a process for identifying and correcting inaccurate data

Consistency

# Criterion Red Yellow Green
6 The same metric is calculated the same way across all systems and reports
7 Field naming conventions are standardized (no "leads" vs. "contacts" vs. "records" for the same entity)

Accessibility

# Criterion Red Yellow Green
8 The data AI tools need is accessible via API or direct integration (not locked in spreadsheets)
9 Team members who need to interpret AI output can access the underlying data
10 Data access permissions are documented and appropriate for AI tool integrations

Data Readiness Scoring:

Score Status Action
8-10 Green Ready Proceed with AI implementation. Monitor data quality post-launch.
5-7 Green (rest Yellow) Conditionally Ready Address Yellow items before full rollout. Pilot with existing high-quality data.
Any Red Not Ready Fix Red items before starting. Red items in Accuracy or Consistency will produce unreliable AI output.
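The status logic above can be expressed directly. A minimal sketch, assuming each of the 10 criteria is recorded as the string "red", "yellow", or "green"; the scoring table doesn't define the case of fewer than 5 Greens with no Reds, so that fallback branch is an assumption.

```python
# Minimal sketch: turn Red/Yellow/Green ratings on the 10 criteria into a readiness status.
# Assumes ratings are lowercase strings; adapt to however you record the scorecard.

def data_readiness(ratings: list[str]) -> str:
    reds = ratings.count("red")
    greens = ratings.count("green")
    if reds > 0:
        return "Not Ready - fix Red items before starting"
    if greens >= 8:
        return "Ready - proceed and monitor data quality post-launch"
    if greens >= 5:
        return "Conditionally Ready - address Yellow items; pilot with existing high-quality data"
    return "Not Ready - too few criteria fully met (assumed fallback)"

# Hypothetical scorecard result for illustration (7 Green, 3 Yellow)
example = ["green", "green", "yellow", "green", "green",
           "yellow", "green", "green", "yellow", "green"]
print(data_readiness(example))
```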

Minimum data standards for common AI use cases:

AI Use Case Minimum Data Standard
Sales pipeline forecasting Deal stage, close date, deal value, activity history — 90%+ complete
Lead scoring Company size, industry, job title, engagement data — standardized field values
Automated reporting Consistent field definitions, no duplicate records, reliable timestamps
Customer health scoring Product usage data, support ticket history, NPS scores — 60-day minimum history
AI prospecting Contact data with email, company, title — validated and deduplicated

Dimension 3: Process Readiness Assessment

The "documented and followed" test is the most important question in AI readiness. Undocumented processes break AI implementations because there's nothing consistent to assist or automate.

Process Audit Template

For each key process your AI rollout will touch, complete one row.

Process Name Is It Documented? (Y/N) Is It Followed Consistently? (Y/N) Who Owns It? AI Candidate? Action Needed
Lead qualification Y/N/Maybe
Opportunity stage advancement Y/N/Maybe
Pipeline forecast update Y/N/Maybe
Email outreach sequence Y/N/Maybe
Meeting prep and summary Y/N/Maybe
Customer onboarding handoff Y/N/Maybe
Weekly report generation Y/N/Maybe
Customer health review Y/N/Maybe
Contract renewal workflow Y/N/Maybe
[Add your processes]

How to interpret AI Candidate status:

  • Y (Yes): Process is documented, followed consistently, and has clear inputs/outputs. AI can assist immediately.
  • Maybe: Process exists but is inconsistently followed or only partially documented. Fix documentation and consistency first; then introduce AI.
  • N (No): Process is undocumented, ad-hoc, or varies significantly by person. Don't introduce AI here. It will automate chaos.

The "documented and followed" test explained:

A process is documented if a new hire could execute it correctly from the documentation alone. A process is followed consistently if 80%+ of the team does it the same way, without exceptions driven by personal preference.

Run this test: pick three people on your team and ask them to walk you through the same process independently. If the three descriptions match, it's consistent. If they don't, you have a process gap that needs to be fixed before AI can help.
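The number that feeds the scoring guide later is the share of audited processes marked as AI candidates. A minimal sketch, assuming each audit row is recorded as a process name plus a Y/Maybe/N status; the example rows are illustrative.

```python
# Minimal sketch: compute the AI-candidate rate from the process audit template.
# Assumes each entry is (process name, status) where status is "Y", "Maybe", or "N".

audit = [                      # hypothetical audit results for illustration
    ("Lead qualification", "Y"),
    ("Opportunity stage advancement", "Maybe"),
    ("Pipeline forecast update", "Y"),
    ("Weekly report generation", "N"),
]

candidates = sum(1 for _, status in audit if status == "Y")
fix_first = sum(1 for _, status in audit if status == "Maybe")
rate = 100 * candidates / len(audit)

print(f"AI candidates: {candidates}/{len(audit)} ({rate:.0f}%)")
print(f"Fix documentation and consistency first: {fix_first}")
```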


Dimension 4: Tools Gap Analysis

Many teams already have AI capabilities they're not using. Before budgeting for new tools, assess what you have. When you're ready to make the purchasing case for gaps you've identified, the AI tools stack guide for mid-market teams has the integration checklist and TCO calculator that turn gap findings into a vendor evaluation framework.

Tools Gap Matrix

Complete for every tool your department currently uses.

Tool Name Primary Use AI Capability Available? (Y/N) Currently Using AI Feature? (Y/N) Gap / Action
[CRM - e.g., Salesforce] Contact/pipeline management Y (Einstein AI) N Enable and train
[Email - e.g., Outlook/Gmail] Communication Y (Copilot/Gemini) Y Expand use cases
[Productivity - e.g., Notion] Documentation Y (AI blocks) N Pilot with 2 users
[Analytics - e.g., Looker] Reporting Y (AI summaries) N Assess for reporting workflow
[Video - e.g., Zoom] Meetings Y (AI notes) N Roll out to full team
[Sales engagement tool] Sequences Y (AI copy) Y Measure quality vs. manual
[CS platform] Customer management N Evaluate AI alternatives
[Add your tools]

Gap actions:

Gap Type What It Means Action
AI available, not using Quick win — AI capability you've already paid for Enable feature, add to training scope
AI available, partially using Optimization opportunity Standardize use and measure output quality
No AI capability Tool gap Evaluate AI-capable alternatives at next renewal
AI using, but output poor Data or prompt quality issue Audit data inputs and prompt templates
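Two numbers fall out of the matrix: the unused AI capability rate (which feeds the Tools row of the scoring guide) and the list of quick wins. A minimal sketch, assuming each tool row is recorded with two booleans; the tool names are placeholders.

```python
# Minimal sketch: summarize the tools gap matrix.
# Each entry: (tool name, AI capability available?, currently using the AI feature?)
tools = [                          # hypothetical inventory for illustration
    ("CRM", True, False),
    ("Email", True, True),
    ("Productivity", True, False),
    ("CS platform", False, False),
]

ai_capable = [t for t in tools if t[1]]
quick_wins = [name for name, available, using in ai_capable if not using]
unused_rate = 100 * len(quick_wins) / len(ai_capable) if ai_capable else 0

print(f"Unused AI capability rate: {unused_rate:.0f}%")
print("Quick wins (AI available, not using):", ", ".join(quick_wins))
print("Tool gaps (no AI capability):",
      ", ".join(name for name, available, _ in tools if not available))
```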

Running the Assessment: A Two-Week Approach

Week 1: Distribute and Collect

  • Day 1-2: Send the skills survey to all team members. Explain the purpose (planning AI investments, not evaluating performance). Set a deadline of 5 days.
  • Day 1-2: Have each team lead complete the data readiness scorecard and process audit template for their function.
  • Day 3-5: Compile tool inventory with IT's help. Fill in the tools gap matrix.
  • Day 7: Aggregate all responses. Calculate team skills score distribution. Tally data scorecard scores. Count AI candidate processes.

Week 2: Analyze and Workshop

  • Day 8-9: Prepare the readiness summary — dimension-by-dimension scores, key gaps, initial action priorities.
  • Day 10: Run the team readiness workshop (see agenda below).
  • Day 11-14: Finalize the 90-day readiness action plan based on workshop output.

Team Readiness Workshop Agenda Template

Duration: 2 hours | Attendees: Team leads + manager

Part 1: Skills Results (30 min)

  • Present the team's aggregate skills score and distribution
  • Highlight top 3 skills gaps from the survey
  • Discussion: What training or support do team leads think would move people most?

Part 2: Data and Process Findings (30 min)

  • Present data readiness scorecard results — Red/Yellow/Green summary
  • Present process audit results — how many AI candidates, how many "fix first" items
  • Discussion: Which data or process gaps need to be resolved before the AI rollout?

Part 3: Tools Gap Review (20 min)

  • Present the tools gap matrix — quick wins vs. tool gaps
  • Agree on which "AI available, not using" features to activate first

Part 4: Prioritization (30 min)

  • Rank the top 5 gaps by impact on the AI rollout versus effort to fix (a scoring sketch follows this agenda)
  • Assign an owner and a target date to each top-5 gap
  • Agree on the 90-day readiness sprint focus areas

Part 5: Communication Plan (10 min)

  • How will results be shared with the broader team?
  • What's the framing? (Readiness as growth opportunity, not a performance evaluation)
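Part 4's ranking step can be made concrete. A minimal sketch, assuming each gap gets a 1-5 impact rating and a 1-5 effort rating during the workshop, and that high-impact, low-effort gaps rank first; both the scale and the scoring formula are assumptions, not something the agenda prescribes.

```python
# Minimal sketch: rank workshop gaps by impact vs. effort (both rated 1-5 in Part 4).

gaps = [                      # hypothetical workshop output for illustration
    {"gap": "CRM pipeline fields incomplete", "impact": 5, "effort": 3},
    {"gap": "No prompt-writing training",     "impact": 4, "effort": 2},
    {"gap": "Forecast process undocumented",  "impact": 3, "effort": 4},
]

# One simple scoring choice: impact minus effort, so high-impact, low-effort gaps come first.
ranked = sorted(gaps, key=lambda g: g["impact"] - g["effort"], reverse=True)

for position, g in enumerate(ranked[:5], start=1):
    print(f'{position}. {g["gap"]} (impact {g["impact"]}, effort {g["effort"]})')
```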

Scoring and Interpreting Results

After Week 1 aggregation, calculate an overall readiness score by dimension.

Scoring Guide with Action Triggers

  • Skills. Score method: average team score (12-48). Thresholds: below 25, foundational training first; 25-34, structured training during rollout; 35+, ready. Action: train first, then roll out AI tools.
  • Data. Score method: count of Green criteria (0-10). Thresholds: fewer than 5 Green, fix before rollout; 5-7 Green, pilot with clean data; 8+ Green, ready. Action: address Red items first, no workarounds.
  • Process. Score method: percentage of processes marked as AI candidates. Thresholds: below 30%, high redesign need; 30-60%, selective rollout; 60%+, broad rollout viable. Action: prioritize documented processes for the first AI use cases.
  • Tools. Score method: percentage of unused AI capabilities. Threshold: more than 50% unused, quick wins available; focus here first. Action: activate before buying anything new.

Overall Readiness Level:

  • Ready (3-4 dimensions strong): proceed with the planned rollout and deploy to the full team. Rollout approach: standard rollout with normal training support.
  • Conditionally Ready (2 strong, 2 gaps): proceed with a targeted pilot and fix gaps in parallel. Rollout approach: pilot with the ready sub-team; fix gaps before full rollout.
  • Not Ready (2+ major gaps): hold the rollout and run a 90-day readiness sprint first. Rollout approach: gap-fixing phase before any AI tool deployment.
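Putting the four dimensions together: a minimal sketch of the overall readiness decision, assuming each dimension has already been labeled "strong" or "gap" using the scoring guide above, and that the Conditionally Ready row takes precedence when exactly two dimensions are strong.

```python
# Minimal sketch: derive the overall readiness level from per-dimension results.
# Assumes each dimension is pre-labeled "strong" or "gap" via the scoring guide.

def overall_readiness(dimensions: dict[str, str]) -> str:
    strong = sum(1 for status in dimensions.values() if status == "strong")
    if strong >= 3:
        return "Ready - standard rollout with normal training support"
    if strong == 2:
        return "Conditionally Ready - pilot with the ready sub-team; fix gaps before full rollout"
    return "Not Ready - run a 90-day readiness sprint before any AI tool deployment"

# Hypothetical example: skills and tools are ready, data and process need work
print(overall_readiness({"skills": "strong", "data": "gap",
                         "process": "gap", "tools": "strong"}))
```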

Turning Assessment into an Action Plan

The assessment output is only valuable if it drives action. Use this framework to translate gaps into a prioritized sprint.

90-Day Readiness Sprint Framework

Priority Gap Area Specific Action Owner Timeline Success Metric
1 [Highest impact gap] [Specific fix] [Name] [Date] [How you'll know it's done]
2 [Second gap] [Specific fix] [Name] [Date] [Metric]
3 [Third gap] [Specific fix] [Name] [Date] [Metric]
...

Prioritization rule: Fix data gaps before skills gaps. A trained team working on bad data produces bad AI-assisted output confidently. Fix process documentation gaps in parallel with training. You need both ready at the same time. For the training investment itself, the AI training budget business case guide converts your skills gap scores directly into the ROI model format that finance needs to approve the spend.


Communicating Results to the Team

Assessment results need to be shared with the team, but how you frame them matters.

Do:

  • Frame gaps as growth targets, not deficiencies. "We have room to build skills in X" lands better than "half the team failed the skills assessment."
  • Be specific about what actions you're taking based on the results. "Based on what we found, we're prioritizing Y training and fixing Z data issues in the next 60 days" shows the assessment led somewhere.
  • Acknowledge that AI readiness is new for almost every team. A modest readiness score at the start is normal and expected.

Don't:

  • Share individual scores publicly. Team averages are useful; comparing individuals is counterproductive.
  • Over-interpret a single data point. The readiness assessment is a snapshot, not a final verdict.
  • Make the assessment feel like a performance review. It's a planning tool.

Repeating the Assessment

Run the assessment quarterly for the first year. After that, annually unless a major AI tool change triggers a fresh assessment.

What to track over time:

  • Team average skills score — should increase with training and practice
  • Data readiness score — should improve as data hygiene work takes effect
  • Process AI candidate rate — should increase as documentation improves
  • Unused AI capability rate — should decrease as you activate and adopt features
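To make the quarterly reassessment useful, compare each metric against the previous quarter and flag anything moving the wrong way. A minimal sketch; the metric names and expected directions follow the list above, and the sample numbers are invented.

```python
# Minimal sketch: flag quarter-over-quarter regressions in readiness metrics.
# "up" means the metric should increase over time, "down" means it should decrease.
EXPECTED_DIRECTION = {
    "team_avg_skills_score": "up",
    "data_readiness_greens": "up",
    "ai_candidate_rate_pct": "up",
    "unused_ai_capability_pct": "down",
}

def check_trend(previous: dict[str, float], current: dict[str, float]) -> None:
    for metric, direction in EXPECTED_DIRECTION.items():
        delta = current[metric] - previous[metric]
        on_track = delta >= 0 if direction == "up" else delta <= 0
        flag = "OK" if on_track else "REGRESSION"
        print(f"{metric}: {previous[metric]} -> {current[metric]} ({flag})")

# Hypothetical quarterly snapshots for illustration
q1 = {"team_avg_skills_score": 27, "data_readiness_greens": 5,
      "ai_candidate_rate_pct": 30, "unused_ai_capability_pct": 60}
q2 = {"team_avg_skills_score": 31, "data_readiness_greens": 7,
      "ai_candidate_rate_pct": 45, "unused_ai_capability_pct": 40}
check_trend(q1, q2)
```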

Tracking readiness over time creates a feedback loop: assessment → action → reassess → show progress. That progress visibility is also useful when you're making the case for continued AI investment. Deloitte's research on AI maturity found that organizations that formally measure AI readiness and re-assess regularly are significantly more likely to report positive AI ROI than those treating readiness as a one-time check.


Common Pitfalls

Assessments that never translate to action. The most common failure. You complete the assessment, identify the gaps, and then nothing happens because no one owns the action items. The workshop's prioritization step and named owners prevent this, but only if you actually hold people accountable.

Scoring too harshly. A team that scores mostly "Aware" on skills can feel demoralized if results aren't framed well. Be realistic about where teams start, and be clear that the point is to know what training to prioritize, not to judge current capability.

Assessing skills without assessing data and processes. This is the most expensive mistake. Training your team on AI tools they can't use effectively because the underlying data isn't ready wastes the training investment. Run all four dimensions.

Treating the assessment as a one-time event. AI capabilities and team skills both evolve. An assessment from 18 months ago is outdated. Build the quarterly cadence into your planning calendar.


What to Do Next

Share the assessment results with IT and HR as inputs for your annual planning cycle. IT needs to know which data integrations need to be built or fixed. HR needs the skills gap data to plan training investments.

And then actually start the 90-day sprint. The assessment is only worth the time it takes if it changes what you do in the next three months.


Learn More