AI in the Enterprise AE Workflow

An enterprise AE I know sent a "personalized" pre-meeting briefing to a Fortune 500 CFO last quarter. Forty-five minutes of model-generated prose about industry tailwinds, the CFO's strategic priorities, and how the seller's platform aligned with the company's stated digital transformation goals. The CFO replied with one line.

"Did ChatGPT write this?"

Cc'd: the EVP. The deal didn't die that day, but it never recovered. A different vendor closed the contract eight months later. The wild part is the briefing wasn't wrong. It cited real earnings calls, real strategic priorities, real public statements. It was just generic enough that a senior buyer could smell the model behind it. And once a buyer concludes you don't put real thought into the relationship, every email after that gets read through the same filter.

This is the part of the AI conversation enterprise AEs don't get warned about. AI is genuine leverage. It will save you five to eight hours a week. It will make you faster at research, synthesis, and first drafts. And if you let it touch the wrong surfaces of your deal, it will quietly burn your credibility with the exact people you need to close.

Why This Matters Now

Enterprise buyers tolerate less AI slop than SMB buyers, not more. That's the part most sellers get backwards. The assumption is "executives are busy, they want concise." True. But concise is not the same as generic, and senior buyers can hear the difference in a sentence.

At the C-level, every word is read as a signal of how seriously you take their business. A generic AI-written executive email tells a CRO you don't know their company well enough to write three custom paragraphs. Which means, by extension, you probably can't run a complex deal cycle either. The risk isn't AI. The risk is unedited AI in front of senior people. AI you've reviewed, rewritten, and put your own thinking on top of is just a tool. AI you've forwarded straight from the model is a tell.

There's a related risk that's harder to see. AI is most useful for the parts of the enterprise AE (EAE) job that are mechanical: scheduling, transcription, research synthesis, first drafts. Those are real wins. But the parts of the job that actually close deals (being curious in a discovery call, reading a room of skeptical executives, knowing which stakeholder needs a private conversation before the steering committee) are not mechanical. If you outsource the mechanical 70% to AI, you have more time for the human 30%. If you try to outsource the human 30%, the deal slips.

Where AI Helps vs. Where It Hurts

Here's the practical decision framework I use. Three zones: green (let AI run), yellow (AI for first draft, you rewrite), red (do not let AI touch this).

Green zone, AI handles end to end:

  • Pre-call stakeholder research. Feed AI a LinkedIn profile, the most recent earnings call transcript, and excerpts from the 10-K. Ask it to surface three strategic priorities and two likely objections. You verify, but you start an hour ahead of where you would otherwise.
  • Post-call notes synthesis. Paste the raw Gong or Chorus transcript, ask AI to extract decisions, owners, next steps, and unresolved questions. Ten minutes instead of forty. This is one of the most reliable wins in the workflow.
  • Internal deal review prep. Pull your CRM data, paste it in, ask AI to flag the gaps in your MEDDPICC. It will catch things you missed because you're too close to the deal.
  • Proposal boilerplate. Company background, security overview, implementation timeline, mutual close plan structure. AI assembles the scaffolding. You spend your time on the custom value narrative, which is the only part the buyer actually reads carefully.

Yellow zone, AI drafts and you rewrite heavily:

  • Multi-thread cadence outreach. Generate first-draft emails for the 6-8 stakeholders in a deal, then rewrite each one yourself. AI gives you the skeleton. Your knowledge of the account makes it specific. There's a longer treatment of how this works in practice in Multi-Threading Enterprise Deals.
  • Objection prep. AI generates a list of likely objections; you stress-test them against what you actually heard in discovery.
  • Executive summaries. First draft from AI, then heavy human edit so it sounds like you, not like a model.

Red zone, do not let AI touch:

  • Executive comms unedited. Never send AI-generated email to a VP+ without heavy human rewriting. Senior buyers can spot the rhythm of model-generated prose in two sentences. The times I've sent AI-drafted exec emails without rewriting them, I've been ghosted. Every time.
  • Custom MSA red lines. AI hallucinates legal terms. Procurement and legal teams will catch it, lose trust, and slow the deal by two weeks while they re-verify everything you've sent.
  • Security questionnaire responses. Wrong answers create breach risk and false statements in writing. This is a job for your InfoSec team, not your AI assistant.
  • Discovery question generation. You lose the curiosity muscle that makes EAEs good. The discovery questions that win deals come from genuinely listening to the last call, not from a prompt template.

The EAE Prompt Library

Seven prompts I actually use. Paste these into Claude or ChatGPT, edit the bracketed inputs, and you have most of the AI workflow you need.

1. Stakeholder pre-read

You are helping me prepare for a 30-minute discovery call with [NAME], 
[TITLE] at [COMPANY]. 

I'm pasting in: their LinkedIn profile, the most recent quarterly earnings 
call transcript, and excerpts from the latest 10-K filing.

Output:
1. Three strategic priorities the company has publicly committed to in 
   the last 12 months, with the source for each.
2. Two likely objections this person will raise about a [your category] 
   purchase, given their role and the company's current financial profile.
3. One non-obvious connection between their stated priorities and a 
   challenge that [your category] solves — phrased as a hypothesis 
   I should test in the call, not a claim I should make.

Be conservative. If you cannot source a claim from the documents I 
pasted, say "no public source — verify before citing."

[PASTE LINKEDIN]
[PASTE EARNINGS TRANSCRIPT]
[PASTE 10-K EXCERPTS]

The "be conservative" clause is the important part. Without it, models invent strategic priorities that sound plausible and aren't. With it, you get a defensible starting point.

2. Post-call synthesis

I'm pasting a raw transcript from a 45-minute discovery call with 
[NAMES, TITLES] at [COMPANY].

Extract:
1. Decisions made during the call (who decided, what was decided)
2. Action items with owner and due date (if stated)
3. Open questions raised but not resolved
4. Buying signals — language that suggests urgency, budget, or 
   stakeholder readiness. Quote the exact phrase.
5. Risk signals — language that suggests hesitation, competing 
   priorities, or unresolved objection. Quote the exact phrase.
6. Three follow-up questions I should ask in the next conversation 
   based on what was said, not what I assumed.

Do not summarize the call. Extract only.

[PASTE TRANSCRIPT]

The "do not summarize" instruction matters. Models default to summary, which is the least useful output. You want raw extraction so you can verify against the transcript.
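
If you run more than a couple of calls a week, this prompt is worth automating rather than pasting by hand. Here's a minimal sketch in Python, assuming the anthropic SDK, an ANTHROPIC_API_KEY in your environment, and a local folder of exported transcripts. The folder name and model string are placeholders, not recommendations.

import pathlib
import anthropic

EXTRACTION_PROMPT = """I'm pasting a raw transcript from a discovery call.
Extract decisions, action items with owners, open questions, buying signals
(quote the exact phrase), and risk signals (quote the exact phrase).
Do not summarize the call. Extract only.

{transcript}"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

for path in pathlib.Path("transcripts").glob("*.txt"):
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder -- use whatever model you have access to
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": EXTRACTION_PROMPT.format(transcript=path.read_text()),
        }],
    )
    # One synthesis file per call, ready to verify against the transcript
    path.with_suffix(".synthesis.md").write_text(message.content[0].text)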

3. Multi-thread cadence draft

I'm running an active opportunity at [COMPANY]. The economic buyer 
is [NAME, TITLE]. I have champion-level relationships with [NAME, TITLE] 
and [NAME, TITLE]. I need to expand into [NAME, TITLE] (security), 
[NAME, TITLE] (IT), and [NAME, TITLE] (finance).

Context: [3-4 sentences on the deal — what they're solving, where 
they are in the process, what just happened on the last call].

Draft a first-touch email to each of the three new stakeholders. 
Each email should:
- Reference one specific business outcome relevant to that role 
  (security: risk reduction; IT: implementation lift; finance: TCO).
- Include one question only that role can answer.
- Be under 120 words.
- Avoid the phrases "circle back," "touch base," "leverage," 
  "synergies," and any sentence beginning with "I hope this finds 
  you well."

Each draft will be heavily rewritten before sending. Optimize for 
specificity over polish.

[PASTE CONTEXT]

The blocked-phrase list is what stops the model from defaulting to outreach prose. Try it without the block list and you'll see why.
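
The same block list gives you something you can check mechanically, because models sometimes slip a banned phrase in anyway. A minimal sketch; the phrase list mirrors the prompt above, and you should extend it with whatever tells you keep seeing:

BANNED_PHRASES = [
    "circle back", "touch base", "leverage", "synergies",
    "i hope this finds you well",
]

def model_tells(draft: str) -> list[str]:
    # Return any banned phrase that survived the prompt's block list,
    # so it gets caught before the draft reaches your send queue.
    lowered = draft.lower()
    return [phrase for phrase in BANNED_PHRASES if phrase in lowered]

If model_tells comes back non-empty, regenerate or rewrite before the draft goes anywhere near your outbox.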

4. Objection prep

Deal stage: [stage]. Buyer role: [TITLE]. Industry: [industry]. 
Deal size: $[amount] ARR.

Based on the typical concerns of someone in this role at this stage 
of an enterprise software purchase, generate the five most likely 
objections I will face in my next conversation.

For each objection:
1. The objection in the buyer's likely phrasing
2. The underlying concern (often different from the surface objection)
3. A response approach (not a script — the angle I should take)
4. The discovery question I should have asked earlier to surface this

Mark any objection that is genuinely about [your product] vs. about 
the broader category vs. about the buyer's internal politics.

[PASTE DEAL CONTEXT]

The third axis (objection about product vs. category vs. politics) is the one humans miss most often. It's also the one that determines whether the objection is yours to handle or your champion's.

5. Proposal narrative

I'm drafting the executive summary section of a proposal for [COMPANY]. 
This is the only section the [TITLE] economic buyer will read carefully. 
Maximum 250 words.

I'm pasting my discovery notes from four conversations with the 
deal team.

Output a first-draft executive summary that:
- Opens with the business problem in their words, not mine
- States the measurable outcome they told us they're trying to hit
- Names the two or three capabilities that map directly to that outcome 
  (skip features that don't map)
- Closes with what success looks like in 12 months

No marketing language. No "transform," "unlock," "empower," "leverage," 
"unleash." Match the prose style of a Harvard Business Review case 
study, not a sales deck.

[PASTE DISCOVERY NOTES]

The HBR style anchor is the most useful trick I've found for getting models out of marketing voice.

6. Mutual close plan

I'm building a mutual close plan with [COMPANY] for a [SIZE] deal 
targeting [DATE] close. Buying committee includes: [LIST OF NAMES, 
ROLES, INFLUENCE LEVEL].

Their procurement process typically requires: [LIST WHAT YOU KNOW].

Output a milestone framework with:
1. Workstreams (legal, security, technical validation, executive 
   alignment, commercial)
2. Key milestone in each workstream with target date
3. Owner on their side and our side
4. Dependencies between workstreams (what blocks what)
5. The three milestones most likely to slip and why

Format as a table I can paste into a shared doc with the buyer.

[PASTE DEAL CONTEXT]

This one I trust the least. The output is always a fine starting structure, but every deal has a quirk the model can't see. I rewrite probably half the rows before sharing externally.

7. Internal deal review (MEDDPICC gap)

I'm preparing for an internal deal review. Pasting CRM notes, 
recent email threads, and meeting summaries for opportunity [ID].

Run a MEDDPICC gap analysis. For each letter:
- What I know with confidence (with source)
- What I'm assuming but haven't verified
- The single highest-value question I should answer before next 
  Friday's review

Be skeptical. If I claim a champion exists but there's no evidence 
of independent advocacy in their behavior, flag it. If I have a 
metric but it's a marketing-page number, not something the buyer 
told me, flag it.

[PASTE NOTES]

The skepticism instruction is what makes this prompt actually useful. Without it, the model will agree with whatever you've already written in your CRM, which is exactly what you don't need before a deal review.

Common Pitfalls

Sending full-AI executive emails because you're behind on a Friday. Buyers remember the tone, not your week. The shortcut feels worth it for ten minutes; the buyer's mental file on you takes a year to repair.

Using AI to draft MSA red lines or security responses. These end up in legally binding documents. Expert reviewers will catch every error. Once the legal team flags hallucinated language, every line of the contract gets re-verified, which is where a two-week deal slip turns into a four-week one.

Treating AI account research as ground truth. Model hallucinations about org structure, recent news, or financial performance will embarrass you live in the room. The fix is the simplest possible discipline: never cite a fact in front of a buyer that you haven't verified at the source.

Pasting confidential customer data into public AI tools. Assume any prompt is logged unless your company has an enterprise agreement. If you're working with a real account name, real revenue figures, or real org charts, use the enterprise-grade tool your security team approved, not the consumer ChatGPT account on your phone.
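
If you have to draft against real deal context, at least strip the obvious identifiers before the prompt leaves your machine. A minimal sketch with illustrative patterns only (the account name is hypothetical); treat it as a seatbelt, not a substitute for your security team's approved tooling:

import re

# Illustrative redactions -- swap in the account names and patterns
# you actually work with. This is not a complete DLP solution.
REDACTIONS = [
    (re.compile(r"\bAcmeCorp\b", re.IGNORECASE), "[ACCOUNT]"),   # hypothetical account name
    (re.compile(r"\$[\d,]+(?:\.\d+)?[MKBmkb]?\b"), "[FIGURE]"),  # revenue and deal figures
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),        # contact addresses
]

def redact(prompt: str) -> str:
    for pattern, token in REDACTIONS:
        prompt = pattern.sub(token, prompt)
    return prompt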

Letting AI write your discovery questions. You lose the curiosity muscle. The questions that move enterprise deals come from genuinely listening to what the buyer just said, not from a prompt template. There's a longer list of these in Enterprise AE Common Pitfalls.

Three Output Review Rules

Before any AI output leaves your machine, three checks. This takes thirty seconds and saves the deal.

  1. Is anything factually wrong? Specifically, any name, number, date, or claim about the buyer's business. If you can't source it, cut it.
  2. Would I be embarrassed if this person knew AI wrote this? If yes, it needs more rewriting. If genuinely no, it probably reads as you, not as a model.
  3. Does this sound like me, or does it sound like a model? Read the first paragraph out loud. Models have a rhythm. Tricolons, balanced clauses, the same five connector words. Your voice has bumps and asymmetry. If the prose is too smooth, rough it up.

Before vs. After: An Executive Email

Here's the unedited AI draft of an outreach email to a VP of Engineering at a mid-market software company:

Subject: Aligning on engineering velocity at [Company]

Hi [Name],

I hope this finds you well. I've been following [Company]'s recent growth and am impressed by the scale of your engineering organization. As you continue to scale, I'd love to explore how [Product] can help your team unlock greater velocity and deliver more value to your customers.

Many engineering leaders we work with face similar challenges around tooling fragmentation and developer experience. We've helped companies like [X] and [Y] streamline their workflows and accelerate time to market.

Would you be open to a 30-minute conversation next week to explore how we could partner together?

Best, [Sender]

It's not wrong. It's just a model email. Every senior buyer has read this exact email a thousand times.

Here's the version after rewriting:

Subject: The Q3 reliability post-mortem

[Name],

Saw your post-mortem on the Q3 reliability incident. The bit about test infrastructure being the actual bottleneck (not deploy frequency) was the most honest engineering write-up I've read this year.

Two questions, no pitch attached:

  1. The fix you described moves the bottleneck to environment provisioning. Has that played out in Q4?
  2. We work with three companies that hit the exact same wall in the 200-engineer range. Worth comparing notes for 20 minutes?

If yes, I'll send three concrete things they did. If no, no follow-up.

[Sender]

Same intent. Different probability of getting a reply. The second one took six minutes to write because the first three minutes were spent reading the post-mortem the buyer actually wrote.

Measuring Whether AI Is Helping or Hurting

Track three things over a quarter.

Admin time saved per week. Target 5-8 hours. Measure honestly. If you save four hours on synthesis and spend three of them on more synthesis, the net is one hour, not four.

Deal velocity in stages where AI does prep work. Target 15-20% faster movement through stages where the work is mostly research and synthesis (early discovery, proposal drafting, internal reviews). If those stages aren't getting faster, your AI workflow isn't actually working for you.

Executive engagement quality. Reply rates from VP+ contacts holding steady or improving, not degrading. The wrong metric is "emails sent." The right metric is whether senior buyers are responding more, the same, or less since you started using AI. If reply rates from senior buyers drop after you adopt AI, you're sending too much unedited model output. Cut volume, increase quality. There's a fuller breakdown of how to think about an EAE's tool stack in Enterprise AE Tools & Tech Stack.
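
None of these needs tooling beyond an export of your sent log. A minimal sketch for the third metric, assuming you can export sent emails with a date, a seniority flag, and whether a reply came back. The field names are illustrative, not from any particular CRM:

from datetime import date

AI_ADOPTED = date(2024, 3, 1)  # the week you started sending AI-assisted drafts

sent_log = [
    {"sent": date(2024, 1, 9), "vp_plus": True, "replied": True},
    {"sent": date(2024, 4, 2), "vp_plus": True, "replied": False},
    # ... the rest of the quarter's export
]

def vp_reply_rate(rows):
    senior = [r for r in rows if r["vp_plus"]]
    return sum(r["replied"] for r in senior) / len(senior) if senior else 0.0

before = vp_reply_rate([r for r in sent_log if r["sent"] < AI_ADOPTED])
after = vp_reply_rate([r for r in sent_log if r["sent"] >= AI_ADOPTED])
print(f"VP+ reply rate: {before:.0%} before AI, {after:.0%} after")

A falling "after" number is the earliest warning that too much unedited model output is reaching senior inboxes.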

How Rework Fits

Most AI tools live outside your CRM. The handoff between "AI generated this draft" and "this draft is in the deal record where the rest of the team can see it" is where time leaks. Rework CRM keeps the AI-assisted research, post-call synthesis, and stakeholder notes attached to the opportunity record itself, so the SE, the CSM, and the deal review committee see the same context you generated. AI as a separate productivity tool is fine. AI integrated into the deal record is genuinely faster. Rework starts at $12/user/month.

What to Take From This

AI is leverage, not a shortcut around the parts of the job that actually require an EAE. The senior buyer can always tell. The discipline is knowing which 70% of your week AI handles well, and protecting the 30% it can't. The 70% is research, synthesis, drafting, internal prep. The 30% is the human judgment that closes the deal — sitting with a skeptical CFO, knowing which stakeholder needs a private conversation, hearing what wasn't said in a discovery call.

If you ran this week back through the green/yellow/red framework, where would you find five hours of AI-assisted leverage? And which executive email did you send last quarter that, in retrospect, sounded a little too smooth?

Start there. The shape of an enterprise AE's day, including where AI sits in it, is laid out in more detail in A Day in the Life of an Enterprise AE.
