Creating an AI Governance Policy for Your Department: A Director's Guide

A sales director at a mid-size SaaS company ran a quick experiment six months ago. She asked three reps to show her how they used AI day-to-day. Within ten minutes, she watched one rep paste a full call transcript (including the client's name, company, budget, and negotiation position) into a free public AI tool to generate a summary.

There was no data handling policy. Nobody had told the rep that was a problem. The rep wasn't being careless. He was trying to work faster, using the tool the way the tool was designed to be used. The gap wasn't the rep's judgment. It was the absence of any guidance.

That's where most departments are right now. Not reckless, but ungoverned. And ungoverned works fine until it doesn't. Until there's a client data incident, a compliance question, or a vendor contract clause about data processing that nobody read because nobody knew to look. Governance doesn't start with a policy document — it starts with knowing what AI tools your team actually uses. The AI readiness assessment templates include a tools gap matrix that surfaces shadow AI usage before it becomes a liability.

This guide gives you a practical framework for writing an AI governance policy at the department level. You don't need a legal team to get started. You don't need an IT project. You need a few structured decisions, documented clearly, communicated once, and reviewed quarterly.


Why Department-Level Governance Matters

Company-wide AI policies exist in many organizations. But they're written at a level of abstraction that doesn't help your team make daily decisions. Governance also needs to work alongside your cross-functional AI collaboration framework — when sales, marketing, and ops share data through AI tools, a department-level policy that doesn't align with adjacent teams creates gaps at the handoffs. "Use AI responsibly" doesn't tell a content manager whether she can use Claude to draft a client case study using internal project data. "Protect confidential information" doesn't tell a sales rep which AI tools are cleared for use with deal data.

Your team needs specifics. Which tools. Which data categories. Who approves exceptions. What to do when something goes wrong.

And "check with IT" isn't a policy. It's a delay mechanism that trains your team to either ask constantly (slowing everything down) or not ask at all (creating exactly the ungoverned behavior you want to prevent).

A department-level policy fills the gap between broad company guidelines and daily practice. It's yours to write, yours to enforce, and yours to update as tools and team habits evolve.


The Five Pillars of a Department AI Policy

Every department AI governance policy needs to address five things:

  1. Approved tools: Which AI tools your team can use, under what conditions, and which are prohibited
  2. Data classification rules: What data can and can't be put into AI tools, based on sensitivity tier
  3. Output review standards: Which AI outputs need human review before use, and what that review covers
  4. Employee training requirements: What every team member must know before using AI tools for work
  5. Escalation and incident process: What constitutes a policy breach, how to report it, and how it gets resolved

The sections below walk through how to build each pillar. By the end, you'll have everything you need to write a one-to-two-page policy document your team can actually use.


Step 1: Define Your AI Tool Inventory

The first decision is the tool list. Not tools your team theoretically might use, but tools they're using right now, or that you want to formally approve.

Start with a quick survey. Ask your team which AI tools they currently use for work tasks. You'll find more than you expect. Most teams have three to five tools already in rotation that nobody has formally reviewed.

Then classify each tool into one of three tiers:

Approved: Cleared for use with the data categories defined in your policy. No additional approval needed.

Conditional: Can be used, but only under specific conditions (e.g., no client data, no financial data, only for internal drafts).

Prohibited: Not to be used for work tasks under any circumstances. This typically includes public AI tools with no data processing agreement.

AI Tool Inventory Template

| Tool | Use Case | Data Tier Allowed (see Step 2) | Approved By | Review Date |
|---|---|---|---|---|
| Claude (Anthropic — enterprise tier) | Drafting, summarizing, research | Internal, Public-safe | Director name | 2026-07-01 |
| ChatGPT (OpenAI — enterprise tier) | Drafting, analysis | Internal, Public-safe | Director name | 2026-07-01 |
| ChatGPT (free tier / personal account) | None | Prohibited | n/a | n/a |
| Otter.ai (business plan) | Meeting transcription | Internal | Director name | 2026-07-01 |
| Generic AI chatbot (no DPA) | None | Prohibited | n/a | n/a |
| Grammarly (Business) | Editing and tone | Internal, Confidential | Director name | 2026-07-01 |

The "Review Date" column is not optional. Tools change their data handling practices. A tool that was safe last year may have updated its terms. Set every approved tool on a six-month review cycle at minimum.

One practical note: the difference between "enterprise tier" and "free/personal tier" matters enormously for data handling. Most enterprise plans include a Data Processing Agreement (DPA) that prohibits the vendor from training on your data. Free tiers often don't. Know which tier your approved tools are on. For a broader reference point, the European Commission's implementation guidance for the EU AI Act takes a risk-tiered approach to classifying AI systems; it tiers systems by potential harm rather than data by sensitivity, but the tiered mindset pairs naturally with the classification system in this guide.


Step 2: Classify Your Data by AI Exposure Risk

Your team handles different kinds of data, and not all of it has the same risk profile when it enters an AI tool. You need a classification system that's simple enough for a non-technical manager to apply in the moment.

Use four tiers. This consequence-driven framing echoes NIST's AI Risk Management Framework, which grounds risk in the actual harms of AI misuse rather than in abstract labels, and it translates well when explaining data tiers to a non-technical team.

Public-safe: Information already in the public domain or with no confidentiality expectation. Company name, public product descriptions, general market research. Can go into any approved tool.

Internal: Information that's confidential within the company but not regulated or client-specific. Internal strategy documents, team OKRs, project plans, internal meeting notes. Can go into approved tools with a DPA, not free-tier tools.

Confidential: Client names, deal values, contract terms, contact information, financial data, performance reviews. Requires an enterprise-tier tool with a signed DPA. Cannot go into free-tier tools, personal accounts, or any tool not on the approved list.

Regulated: HIPAA-covered health information, PII subject to GDPR, payment card data, legally privileged communications. Requires explicit legal/compliance review before any AI tool use. When in doubt, don't use AI. Escalate first.

One-Page Data Classification Guide for Your Team

Post this somewhere visible (Notion, Slack pinned post, team wiki):

  • Writing a blog post draft? Public-safe. Use any approved tool.
  • Summarizing a meeting about internal product roadmap? Internal. Use enterprise-tier approved tools only.
  • Generating follow-up email using a client's company name and deal size? Confidential. Only approved enterprise tools with a DPA.
  • Working with a client's healthcare or financial records? Regulated. Stop and ask before using AI.

If your team is unsure where a data type falls, the rule is: treat it as one tier higher than your first instinct. The cost of over-caution is a slightly slower workflow. The cost of under-caution is a data incident.
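If you want the tier rules applied the same way every time (for example, behind a team intake form or a Slack bot), you can encode them as a simple lookup. A minimal sketch follows; the category-to-tier mapping is an illustrative example to adapt to your own guide, not an exhaustive policy:

```python
# classify_data.py: illustrative sketch of the four-tier model. The
# category-to-tier mapping is an example, not an exhaustive policy.
TIERS = ["public-safe", "internal", "confidential", "regulated"]

CATEGORY_TIER = {
    "public product description": "public-safe",
    "market research": "public-safe",
    "internal strategy doc": "internal",
    "meeting notes": "internal",
    "client name": "confidential",
    "deal value": "confidential",
    "health record": "regulated",
    "payment card data": "regulated",
}

def required_tier(categories: list[str]) -> str:
    """Return the strictest tier across all data categories present.

    Unknown categories default to confidential, applying the
    "when in doubt, go one tier higher" rule.
    """
    level = 0
    for category in categories:
        tier = CATEGORY_TIER.get(category.lower(), "confidential")
        level = max(level, TIERS.index(tier))
    return TIERS[level]

print(required_tier(["meeting notes", "client name"]))  # -> confidential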


Step 3: Set Output Review Standards

Not all AI output carries the same risk. A rough internal agenda generated by AI has a low cost if it's wrong. A client proposal with incorrect pricing generated by AI has a high cost.

Define which output types require human review before use, and what that review covers.

Output Review Matrix

| Output Type | Review Required? | What to Check |
|---|---|---|
| Customer-facing copy (proposals, emails, reports) | Yes — mandatory | Factual accuracy, brand voice, no client data errors, pricing verified |
| Internal documents (meeting summaries, project updates) | Spot-check | Factual accuracy, no sensitive data inadvertently included |
| Financial summaries or forecasts | Yes — mandatory | Numbers verified against source, no fabricated data points |
| HR communications (offers, performance feedback) | Yes — mandatory | Legal/HR review before sending |
| Research summaries for internal use | Spot-check | Sources cited or verifiable, no fabricated statistics |
| First-draft content for further editing | No mandatory review | Standard editing process applies |

The mandatory review tier is non-negotiable. These are the categories where AI hallucination has real consequences: for clients, for compliance, or for legal exposure.

Build the review step into the workflow. Not as an option, but as a named checkpoint with a named reviewer. If it's not in the workflow, it won't happen consistently.
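One way to make the checkpoint enforceable rather than optional is to encode the matrix so a workflow script refuses to mark mandatory-review output as ready without a named reviewer. A minimal sketch, with output-type names mirroring the matrix above (illustrative, not a prescribed schema):

```python
# review_gate.py: sketch of a pre-send checkpoint. Output-type names
# mirror the matrix above and are illustrative.
MANDATORY_REVIEW = {
    "customer-facing copy",
    "financial summary",
    "hr communication",
}

def ready_to_ship(output_type: str, reviewer: str | None) -> bool:
    """Mandatory-review output needs a named reviewer; the rest passes."""
    if output_type.lower() in MANDATORY_REVIEW:
        return bool(reviewer and reviewer.strip())
    return True

assert not ready_to_ship("Customer-facing copy", None)     # blocked: no reviewer
assert ready_to_ship("Customer-facing copy", "J. Rivera")  # reviewed, clear to send
assert ready_to_ship("First-draft content", None)          # standard editing applies
```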


Step 4: Define Employee Responsibilities

Every member of your team who uses AI for work tasks needs to know three things:

  1. Which tools are approved and which are prohibited
  2. What data they can and can't put into AI tools
  3. What to do if they make a mistake or aren't sure

Getting there requires intentional training: the AI tools training playbook for non-technical teams has a format specifically for communicating data rules and output review standards to employees who don't have a technical background.

Document this as an acknowledgment checklist. When a new team member joins, or when you roll out this policy, have everyone sign it.

Employee AI Policy Acknowledgment Checklist

By signing below, I confirm that I have read and understood the department AI governance policy and agree to:

  • Use only approved tools from the department tool inventory
  • Not use personal or free-tier AI accounts for work tasks
  • Follow data classification rules before inputting any data into an AI tool
  • Complete human review for all mandatory-review output types before use
  • Report any suspected policy breach or data handling error to [designated owner] within 24 hours
  • Complete the department AI training session before using AI tools for client-facing work
  • Review the tool inventory and data classification guide when I'm uncertain about a specific case

Name: _______________ Date: _______________ Manager: _______________

Keep these on file. If a policy breach occurs, the acknowledgment checklist is part of the record that training was provided and expectations were clear.


Step 5: Write the Incident Escalation Process

Policy breaches happen. The goal isn't zero incidents. It's fast containment and clear accountability when they occur.

Define three things: what counts as a breach, who to notify, and how to log it.

What counts as a policy breach:

  • Using a prohibited tool for any work task
  • Inputting Confidential or Regulated data into a non-approved tool or personal account
  • Sharing AI-generated customer-facing output without completing mandatory review
  • Failing to report a known or suspected breach

When a breach is suspected:

  1. Stop using the tool immediately
  2. Notify your direct manager and the designated policy owner within 24 hours
  3. Document what happened: which tool, what data, when, and how the error was discovered
  4. Do not attempt to delete or conceal the incident

Who to notify:

  • Immediate manager (always)
  • Department policy owner (always)
  • Legal/compliance (if Regulated data was involved)
  • IT security (if there's reason to believe data was exposed externally)

How to log it: Keep a simple incident log in a shared document or ticketing system. Fields: date, team member, tool involved, data classification, description of incident, resolution steps taken, date resolved.

The log matters for two reasons. First, it creates accountability and shows due diligence if the incident is ever reviewed externally. Second, it surfaces patterns. If the same tool appears in three incidents, that's a signal to reassess its approval status.
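If the log lives in a shared document, a fixed schema keeps entries consistent enough to surface those patterns. A minimal sketch of one record, using the fields listed above (the dataclass and all names are illustrative):

```python
# incident_log.py: sketch of a structured incident record, not a
# prescribed format. Field names follow the log fields listed above.
from collections import Counter
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Incident:
    occurred: date
    team_member: str
    tool: str
    data_classification: str  # public-safe / internal / confidential / regulated
    description: str
    resolution_steps: list[str] = field(default_factory=list)
    resolved: date | None = None

log = [
    Incident(
        occurred=date(2026, 3, 4),
        team_member="A. Rep",
        tool="ChatGPT (free tier)",
        data_classification="confidential",
        description="Client call transcript pasted into a free-tier tool.",
        resolution_steps=["Stopped use", "Notified manager and policy owner"],
    ),
]

# Pattern check: any tool appearing in three or more incidents is a
# signal to reassess its approval status.
flagged = [tool for tool, n in Counter(i.tool for i in log).items() if n >= 3]
```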


Step 6: Write the Policy in Plain Language

The policy document itself should be one to two pages. Not a legal brief. Not a 20-page compliance manual. A document your team will actually read.

AI Governance Policy Template


[Department Name] AI Governance Policy
Effective Date: [Date] | Review Date: [Date] | Owner: [Name]

Purpose
This policy sets guidelines for AI tool use in [Department Name] to protect client data, meet company standards, and ensure AI-generated output meets quality requirements.

Approved Tools
See the AI Tool Inventory [link]. Use only tools listed as Approved or Conditional. Prohibited tools may not be used for any work task, regardless of the reason.

Data Handling Rules
Before inputting any data into an AI tool, check the Data Classification Guide [link]. Do not input Confidential or Regulated data into any tool without a signed Data Processing Agreement (DPA). When in doubt, treat data as one tier more sensitive than your first instinct.

Output Review Requirements
AI-generated output for customer-facing content, financial summaries, and HR communications requires human review before use. See the Output Review Matrix [link] for details.

Your Responsibilities
By using AI tools for work tasks, you agree to follow this policy. Complete the Employee Acknowledgment Checklist before using AI for client-facing work.

If Something Goes Wrong
Report suspected breaches within 24 hours to [manager name] and [policy owner name]. See the Escalation Process [link] for full steps.

Questions
Contact [policy owner name] at [contact].


Keep it under 500 words. Link to the detailed templates rather than embedding them. The policy document is the entry point, not the encyclopedia.


Step 7: Set a Review and Update Cadence

This policy will be wrong within six months. Not because you wrote it badly, but because tools change, teams change, and AI capabilities change faster than any static document can track. The AI tools stack selection framework includes a quarterly stack audit process — running both reviews on the same cadence prevents tool inventory drift from creating governance gaps.

Build in a quarterly review:

Quarterly Review Triggers:

  • Any new AI tool added to or removed from the approved list
  • Any incident in the incident log
  • Any change to company-wide AI policy that affects department specifics
  • Any significant update to a tool's terms of service or data handling practices

When an employee brings in a new tool: This happens constantly. Someone finds a productivity tool, starts using it, and mentions it in a meeting three months later. Don't wait for the quarterly review. Set an expectation that new tools require a policy owner review before use for work tasks, and make the review process fast (target 48 hours, not two weeks).

Who triggers the review: The policy owner, on a calendar reminder. Not "when we get around to it." Block 90 minutes quarterly.


Common Pitfalls

Policies that prohibit AI entirely and drive shadow usage. If your policy response to AI risk is a blanket ban, your team will use AI anyway, just without telling you. Gartner research on AI governance in enterprise settings found that blanket AI prohibitions increase shadow AI usage by 40% within 90 days, as employees find workarounds while the organization loses visibility into actual risk exposure. Shadow AI is more dangerous than governed AI. Build a permissive approved list and a clear prohibition list. Make it easy for people to stay in bounds.

Data tiers that are too vague. "Sensitive data" isn't a tier. "Client names and deal values" is a tier. The more specific your data classification language, the easier it is for team members to apply without asking every time.

No accountability owner. A policy with no named owner is a policy nobody enforces. Assign one person who is responsible for the tool inventory, the incident log, and the quarterly review. It doesn't need to be a full-time role, but it needs to be one specific person.

Treating the first version as permanent. The first version of your policy is a starting point. Expect to update it within 90 days as you learn how your team actually uses AI. Build that expectation into how you communicate the policy at launch.


What to Do Next

Before you roll this out to the full team, share the draft with two people: someone from legal or compliance, and someone from IT security. Not for a full review, just a 30-minute alignment conversation. Ask them two questions: "Does anything in here conflict with company policy?" and "Are there any data handling rules we've missed?"

That conversation takes 30 minutes and it prevents a much longer conversation later. Once you have their input, finalize the policy, run the acknowledgment checklist session with your team, and set your first quarterly review date.

