Tagging AI Initiatives: A Practical Taxonomy for Operators

[Image: Tagging schematic — capability tags attached to a portfolio spreadsheet]

Meet Rachel. She runs operations for a 140-person professional services firm. Revenue is solid. Her team is sharp. And over the past eighteen months, AI tools have been quietly multiplying.

It started with her sales team piloting an AI outreach tool. Then support bought a chatbot. Finance started using an AI expense classifier. Marketing picked up a content generation tool. Her Head of Technology did a proof-of-concept with a document extraction system for client onboarding.

It came to a head when the CEO asked her to present "the firm's AI strategy" at the next board meeting. Rachel opened a spreadsheet. Forty minutes in, she had nine initiatives, three of them probable duplicates, four without a named owner, and no idea how to explain any of them in terms the board would understand.

She didn't need a strategy deck. She needed a vocabulary. A way to describe what each initiative did, compare it to the others, and explain the whole portfolio in plain language.

This article is for Rachel. And for every operator managing more AI initiatives than they can track on a whiteboard.

Why tracking breaks down without a taxonomy

The natural instinct is to catalog AI initiatives by vendor name ("we have Gong, Intercom, and that expense thing") or by team ("sales has two tools, support has one"). That works at three initiatives. It breaks at eight.

Without a shared taxonomy, you end up with:

  • Two teams running nearly identical Generate tools because no one compared them
  • An Execute workflow with no human review that someone assumed had one
  • Zero Predict capability despite paying $40K annually for "AI-powered analytics"
  • Five initiatives in sales, none in Customer Success, and no one noticed the imbalance

Tagging makes your portfolio legible. Legible portfolios get managed. Unmanaged portfolios drift, duplicate, and decay.

The seven tagging fields

The ACE Framework gives you the vocabulary. These seven fields turn that vocabulary into a practical catalog.

1. ACE capabilities used. Which of the five core capabilities does this initiative use? Ingest takes in raw data. Analyze classifies and extracts meaning. Predict scores and forecasts. Generate drafts text, images, or code. Execute sends, commits, updates, or routes. An AI outreach tool is Ingest + Analyze + Generate. A fraud detection system is Ingest + Analyze + Predict + Execute. Knowing which capabilities are active tells you what category of governance an initiative needs.

2. Data types consumed. What kind of data does this initiative run on? Text / Structured / Image / Audio / Video / Code / Time-series. Most initiatives run on one or two. This field reveals whether your data readiness is actually in place.

3. Dominant pattern. This is the bridge to Level 2 of the ACE Framework. About ten patterns cover 90% of business AI use cases. Five of the most common:

  • RAG Assistant (Ingest + Analyze + Generate): internal knowledge Q&A
  • Scoring + Routing (Ingest + Analyze + Predict + Execute): lead triage, ticket prioritization
  • Vision Extract (Ingest + Analyze + Execute): OCR-to-database for invoices, contracts
  • Meeting Intelligence (Ingest + Analyze + Generate + Execute): call summaries written back to CRM
  • Anomaly Agent (Ingest + Analyze + Predict + Execute): fraud detection, spend alerting

4. Human-in-loop role. Where does a human sit in the workflow? Review gate (approves before action), Monitor (sees output but doesn't approve each instance), or Absent (AI acts without any checkpoint). Most organizations discover during tagging that they have more "absent" loops than they thought. That's the insight that triggers a governance conversation.

5. Business function. Sales / Support / Marketing / Finance / HR / Operations / IT / Product / Legal. This field reveals coverage gaps fast. If seven of your ten initiatives are in Sales and zero are in Customer Success, that should be a deliberate choice, not an accident.

6. Stage. Idea / Pilot / Rollout / Scaled / Sunset. Stage visibility prevents the most common portfolio failure: initiatives stuck in "pilot" for 18 months with no one owning the decision to scale or kill.

7. ROI status. Unmeasured / Measured positive / Measured negative. Most organizations discover that the majority of their AI initiatives are unmeasured. That's not a failure. It's information. And you can't manage what you can't see.
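
If the catalog ever outgrows a spreadsheet, the seven fields reduce to a small typed record. Here is a minimal sketch in Python; the enum values mirror the vocabularies above, and the type and attribute names are illustrative, not part of the ACE Framework itself.

from dataclasses import dataclass
from enum import Enum

class Capability(Enum):
    INGEST = "ingest"
    ANALYZE = "analyze"
    PREDICT = "predict"
    GENERATE = "generate"
    EXECUTE = "execute"

class HumanInLoop(Enum):
    REVIEW_GATE = "review-gate"  # approves before action
    MONITOR = "monitor"          # sees output, no per-instance approval
    ABSENT = "absent"            # AI acts without a checkpoint

class Stage(Enum):
    IDEA = "idea"
    PILOT = "pilot"
    ROLLOUT = "rollout"
    SCALED = "scaled"
    SUNSET = "sunset"

class RoiStatus(Enum):
    UNMEASURED = "unmeasured"
    MEASURED_POSITIVE = "measured-positive"
    MEASURED_NEGATIVE = "measured-negative"

@dataclass
class Initiative:
    initiative: str
    ace_capabilities: set[Capability]
    data_types: set[str]   # text, structured, image, audio, video, code, time-series
    pattern: str           # one of the ~10 Level 2 patterns, or "ad-hoc"
    human_in_loop: HumanInLoop
    function: str          # sales, support, marketing, finance, ...
    stage: Stage
    roi_status: RoiStatus
    owner: str = ""
    notes: str = ""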

The tagging template

Here's a practical format you can drop into a spreadsheet or Notion database.

initiative: "Auto-draft sales outreach"
ace_capabilities: [ingest, analyze, generate]
data_types: [text, structured]
pattern: ad-hoc
human_in_loop: review-gate
function: sales
stage: pilot
roi_status: measured-positive
owner: "Sarah Chen, Sales Ops"
notes: "Reps approve each email before sending. ~40 min saved per rep per week."

Run this for every initiative. Even the ones you're unsure about. Partial information is still more useful than none.
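
A side benefit of this format: each record is valid YAML, so a short script can catch vocabulary drift before it pollutes the catalog. A minimal sketch, assuming PyYAML is installed; the allowed values are the ones defined in the seven fields above.

import yaml  # PyYAML, assumed available: pip install pyyaml

ALLOWED_CAPABILITIES = {"ingest", "analyze", "predict", "generate", "execute"}
ALLOWED_LOOPS = {"review-gate", "monitor", "absent"}
ALLOWED_STAGES = {"idea", "pilot", "rollout", "scaled", "sunset"}

record = yaml.safe_load("""
initiative: "Auto-draft sales outreach"
ace_capabilities: [ingest, analyze, generate]
human_in_loop: review-gate
stage: pilot
""")

# Flag any value outside the shared vocabulary.
errors = []
if not set(record["ace_capabilities"]) <= ALLOWED_CAPABILITIES:
    errors.append("unknown ACE capability")
if record["human_in_loop"] not in ALLOWED_LOOPS:
    errors.append("unknown human-in-loop role")
if record["stage"] not in ALLOWED_STAGES:
    errors.append("unknown stage")
print(errors or "record is valid")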

Why this works

Comparability. Once every initiative uses the same vocabulary, you can sort, filter, and group them. "Show me everything using Execute with no human review" is a question you can answer in 30 seconds. Without tags, it's a 45-minute conversation.

Overlap detection. Two teams running Generate + Text initiatives are often doing the same thing with different vendors. The tag view makes that visible. Whether you consolidate is a business decision. But you should make it deliberately.

Gap identification. Zero Execute tools means all your AI gains are coming from drafts and summaries. Someone still does every consequential action manually. That might suit your risk tolerance. But it's worth knowing.

Executive reporting. "Our AI portfolio: 6 Generate tools, 2 Predict, 1 Execute with a review gate, 3 unmeasured" is a one-sentence board update that means something. Compare that to listing vendor names.
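
To make the first two benefits concrete: once the tags are data, each of those questions is a few lines of filtering. A sketch over a handful of illustrative rows shaped like the template; the values match the audit below.

from collections import defaultdict

# Each row mirrors the tagging template; three illustrative entries.
portfolio = [
    {"initiative": "Invoice bot", "ace_capabilities": {"ingest", "analyze", "execute"},
     "pattern": "vision-extract", "human_in_loop": "absent", "function": "finance"},
    {"initiative": "Jasper", "ace_capabilities": {"analyze", "generate"},
     "pattern": "ad-hoc", "human_in_loop": "absent", "function": "marketing"},
    {"initiative": "ChatGPT", "ace_capabilities": {"analyze", "generate"},
     "pattern": "ad-hoc", "human_in_loop": "absent", "function": "marketing"},
]

# "Show me everything using Execute with no human review."
ungated = [r["initiative"] for r in portfolio
           if "execute" in r["ace_capabilities"] and r["human_in_loop"] == "absent"]
print(ungated)  # ['Invoice bot']

# Overlap detection: same pattern, same function, different vendors.
groups = defaultdict(list)
for r in portfolio:
    groups[(r["pattern"], r["function"])].append(r["initiative"])
print({k: v for k, v in groups.items() if len(v) > 1})
# {('ad-hoc', 'marketing'): ['Jasper', 'ChatGPT']}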

Tagging in practice: a 10-initiative audit

Here's what a real-ish portfolio audit looks like. This is a composite of patterns we see regularly, not a single company's data.

The portfolio before tagging:

#    Initiative               Function     What we think it does
1    Gong                     Sales        Records calls
2    Intercom Fin             Support      Answers tickets
3    ChatGPT (team license)   Marketing    Content drafts
4    Jasper                   Marketing    Blog content
5    HubSpot AI               Sales        Lead scoring
6    Ramp AI                  Finance      Expense categorization
7    Otter.ai                 Operations   Meeting notes
8    DocuSign Lumi            Legal        Contract review
9    "The invoice bot"        Finance      Unclear
10   Notion AI                Product      Writing assist

After tagging:

#    Initiative      ACE capabilities                     Pattern                Human-in-loop   ROI
1    Gong            Ingest, Analyze, Generate, Execute   Meeting Intelligence   Monitor         Measured positive
2    Intercom Fin    Ingest, Analyze, Generate            RAG Assistant          Review gate     Unmeasured
3    ChatGPT         Analyze, Generate                    Ad hoc                 Absent          Unmeasured
4    Jasper          Analyze, Generate                    Ad hoc                 Absent          Unmeasured
5    HubSpot AI      Ingest, Analyze, Predict             Scoring + Routing      Monitor         Unmeasured
6    Ramp AI         Ingest, Analyze, Execute             Vision Extract         Review gate     Measured positive
7    Otter.ai        Ingest, Analyze, Generate            Meeting Intelligence   Monitor         Unmeasured
8    DocuSign Lumi   Ingest, Analyze, Generate            RAG Assistant          Review gate     Unmeasured
9    Invoice bot     Ingest, Analyze, Execute             Vision Extract         Absent          Unmeasured
10   Notion AI       Generate                             Ad hoc                 Absent          Unmeasured

What the audit revealed:

Seven of the ten initiatives use Generate. The company is heavily invested in summarization and content drafting, and almost nothing else. Nobody made that decision consciously.

Initiatives 3 and 4 (ChatGPT + Jasper) are both Analyze + Generate for marketing content. Two vendors, same pattern. A consolidation conversation is warranted.

Initiative 9, the "invoice bot," is an Execute workflow with no human-in-loop. Nobody flagged it because the name sounded harmless. The tagging exercise surfaced it.

Zero initiatives serve Customer Success. Eight of ten are unmeasured. The two that are measured, Gong and Ramp AI, both had ops owners who set baselines before launch. That's not a coincidence.

The surprising finding: this company thought it had a diverse AI portfolio. It had a writing tool portfolio with a few analytics add-ons.

How to run the tagging exercise

You don't need a consultant. You need 30 minutes with the right people in the room: your operations lead, whoever owns the tech stack, and one representative from each major function with an active AI initiative.

Pull up a shared spreadsheet. Work through each initiative together, live. Don't send it out async and collect responses. The conversation that happens during tagging is where the value is. Disagreements about how to classify something are the most useful moments. If two people can't agree on whether something is "Pilot" or "Rollout," that's a sign the initiative doesn't have clear ownership. Note it. That ambiguity is data.

Thirty minutes covers 8-12 initiatives if someone is facilitating. The first few take longest as the team learns the vocabulary. It gets faster.

After the exercise, generate the aggregate view. How many initiatives per capability? Per function? What's unmeasured? Look for the surprises. Those are the agenda items for your next AI strategy conversation.
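
If the spreadsheet exports cleanly, the aggregate view is a few lines of counting. A minimal sketch using Python's Counter; the rows here are illustrative stand-ins for your exported catalog.

from collections import Counter

# Minimal illustrative rows; in practice, export the tagging spreadsheet to this shape.
portfolio = [
    {"ace_capabilities": {"ingest", "analyze", "generate"}, "function": "sales",
     "roi_status": "unmeasured"},
    {"ace_capabilities": {"analyze", "generate"}, "function": "marketing",
     "roi_status": "unmeasured"},
    {"ace_capabilities": {"ingest", "analyze", "execute"}, "function": "finance",
     "roi_status": "measured-positive"},
]

by_capability = Counter(c for r in portfolio for c in r["ace_capabilities"])
by_function = Counter(r["function"] for r in portfolio)
unmeasured = sum(r["roi_status"] == "unmeasured" for r in portfolio)

print(by_capability)  # which capabilities dominate the portfolio
print(by_function)    # where the coverage gaps are
print(f"{unmeasured} of {len(portfolio)} initiatives are unmeasured")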

Quarterly cadence and when to stop

Tag once. Then re-tag quarterly. A pilot becomes a rollout. A rollout gets sunset when a better tool arrives. Stage and ROI status go stale fast. A quarterly re-tag takes 15 minutes if the original session was thorough. You're just updating the deltas: new initiatives, stage changes, newly measured ROI.

But tagging has overhead. If you have one or two AI tools, skip it. You need a decision-making conversation, not a taxonomy. And if tagging is becoming bureaucratic (fields added for completeness, not insight), stop. Cut back to the fields your team actually uses. A four-field spreadsheet reviewed quarterly beats a twelve-field database that nobody opens.

The right signal that tagging is working: people use the vocabulary in other conversations. When your Head of Finance says "that's an Execute workflow, we need a review gate," the taxonomy has become part of how the team thinks. That's the goal.

The meta-benefit: tagging as education

Running the tagging exercise does more than produce a spreadsheet. It teaches your leadership team to think about AI in structured terms.

Teams that tag their initiatives consistently build AI literacy faster than teams that don't. By the time you've classified ten initiatives using ACE vocabulary, your Head of Operations understands the Generate vs. Execute boundary better than most people who've read five AI books. Your finance lead understands why the invoice bot needed a review gate. Your Head of Sales understands why the lead scoring tool isn't delivering: clean data was never part of the deployment.

You get a portfolio view. You also get a team that asks better questions of vendors, spots overlap earlier, and governs AI workflows with more precision. That's the narrow claim for tagging. It's a real one.

What's next: Level 2 Patterns

This collection introduced the ACE Framework and its foundation. Tagging is the bridge out of it.

Once your team can tag initiatives using ACE vocabulary, the natural next step is understanding the patterns those initiatives follow. The ACE Framework organizes these into Level 2: ten recurring capability combinations that appear across industries and functions. If you found Meeting Intelligence or Scoring + Routing in your portfolio audit, the next collection will feel like a formal name for something you've already been living with.

Tagging is the move from "we're trying some AI" to "we have an AI portfolio." That's not a small shift. A portfolio can be managed, measured, and improved. A collection of experiments just accumulates.

Start this quarter. The exercise takes 30 minutes. The clarity lasts.