
AI in BDR/SDR Prospecting: What to Use, What to Ignore

A rep on a team I worked with last year started running every cold email through ChatGPT. Every single one. He'd paste in the prospect's LinkedIn, the company's About page, and his rough draft, and ask the model to "make it sharper." The output looked great. Punchy openers, specific references, a clean ask at the end.

His reply rate fell from 4% to under 1% in three weeks.

When we sat down to figure out why, the answer was obvious in hindsight. The emails sounded great. They also sounded like every other email being written that week by every other rep doing the exact same thing. Buyers couldn't tell them apart. The personalization was technically correct and emotionally generic, the worst possible combination for a cold email.

That's the central problem with AI in prospecting right now. The reps who learn to direct AI sharply — knowing what to delegate, what to keep manual, how to edit ruthlessly — outperform the reps who just hand it the wheel. Delegation produces volume. Direction produces pipeline. This guide is about the difference.

Why This Matters: AI Is Shifting BDRs From Doing to Directing

The job is changing. A BDR five years ago spent maybe 60% of their day on manual research and writing, and 40% on outreach and live conversations. Today, AI tools can credibly compress that research-and-writing portion to 15-20% of the day. That's real. That's not hype.

The question is what you do with the time you save.

Reps who use AI well take the saved hours and pour them into the things AI can't do: better account selection, sharper qualification on live calls, deeper relationships with the 30 accounts they actually care about. Reps who use AI badly just send more bad emails faster. Volume goes up, reply rates go down, and the team starts blaming "deliverability" or "buyer fatigue" when the real problem is that they outsourced their judgment.

For a broader view of the tools that actually earn their seat in the modern BDR stack, see The BDR/SDR Tech Stack: Tools That Earn Their Seat. For how this fits into a daily workflow, A Day in the Life of a BDR/SDR is worth reading alongside this one.

Where AI Earns Its Keep

1. Research Compression

This is the single highest-ROI use of AI in prospecting. Full stop.

Before AI, reading a 10-K or skimming an earnings call transcript for context on one account took 25-30 minutes if you did it well. Multiply that across 20 accounts a week and you've burned 8-10 hours just on research. Most reps gave up and went in cold, which is why so many cold emails look like they were written by someone who's never heard of the company.

With AI, the same prep work takes 5 minutes per account. Here's a prompt that actually works:

Good prompt — research compression:

"You're helping me prep for outreach to [Company Name], a [industry] company of about [size]. I'm going to paste in their most recent 10-K excerpts and last earnings call transcript. Please pull out:

  1. The three biggest strategic priorities the CEO mentioned
  2. Any specific operational pain points (hiring, churn, supply chain, integration) the CFO flagged
  3. Recent leadership changes or org moves
  4. Any new product, market, or geography they're expanding into

Format as bullet points. Don't speculate. If something isn't in the source, don't include it."

The "don't speculate" line matters. Without it the model will happily invent a strategic priority that sounds plausible and isn't real. I've seen reps reference fabricated initiatives in cold emails because they trusted the AI summary instead of checking the source.

The output of this prompt becomes the seed material for your outreach. You're not asking AI to write the email. You're asking it to read 80 pages so you don't have to, then give you the three or four facts that might actually matter to your prospect.
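
If you run this prompt at any volume, it's worth wrapping in a few lines of code so the template stays consistent across accounts. Here's a minimal sketch assuming the OpenAI Python SDK and an API key in your environment; the model name, function name, and how you load the source material are all placeholders, and any chat-capable model works the same way.

```python
# Sketch: wrap the research-compression prompt in a single API call.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the environment.
# The model name and helper name are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

RESEARCH_PROMPT = """You're helping me prep for outreach to {company}, a {industry} company of about {size}.
Below are their most recent 10-K excerpts and last earnings call transcript. Please pull out:
1. The three biggest strategic priorities the CEO mentioned
2. Any specific operational pain points (hiring, churn, supply chain, integration) the CFO flagged
3. Recent leadership changes or org moves
4. Any new product, market, or geography they're expanding into
Format as bullet points. Don't speculate. If something isn't in the source, don't include it.

SOURCE MATERIAL:
{source}"""

def research_summary(company: str, industry: str, size: str, source_text: str) -> str:
    """Return a bullet-point research summary grounded only in the pasted source."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model your team has access to
        messages=[
            {"role": "system", "content": "Only use facts present in the provided source. Never speculate."},
            {"role": "user", "content": RESEARCH_PROMPT.format(
                company=company, industry=industry, size=size, source=source_text)},
        ],
        temperature=0.2,  # extraction, not creativity
    )
    return response.choices[0].message.content
```

The low temperature and the repeated "don't speculate" instruction (once in the system message, once in the prompt) are there for the same reason the line matters in the manual version: you want extraction, not invention.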

2. AI-Assisted Personalization (Not AI-Written Emails)

Here's the line: AI drafts the bones. The rep writes the hook.

When AI writes the whole email, it always (always) defaults to a structure that sounds like a slightly more articulate version of every other AI-written email. Compliment, observation, transition, ask. Buyers see this pattern 30 times a week. They've trained themselves to delete it on sight.

A better workflow: ask AI for angle options, not finished copy.

Good prompt — angle generation:

"Here's the prospect: [Title, Company, brief context]. Here's the research summary: [paste from earlier prompt]. Here's the problem my product solves: [one sentence].

Give me 3 different angles I could open a cold email with. Each angle should be a single sentence connecting something specific from the research to the problem we solve. Don't write the email. Don't include a CTA. Just give me three different opening hooks I could build from."

You then pick the angle that resonates, and you write the opener yourself. In your voice. With the awkwardness, the contractions, the small references that prove a human read this, not just a model.
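
The same pattern is easy to script if you want it repeatable. A short sketch, again assuming the OpenAI Python SDK with a placeholder model name; note what it deliberately doesn't ask for: no email body, no CTA, just three hooks the rep builds from.

```python
# Sketch: ask for three opening angles, not a finished email.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def angle_options(prospect: str, research_summary: str, problem: str) -> str:
    prompt = (
        f"Here's the prospect: {prospect}.\n"
        f"Here's the research summary: {research_summary}\n"
        f"Here's the problem my product solves: {problem}\n\n"
        "Give me 3 different angles I could open a cold email with. Each angle should be "
        "a single sentence connecting something specific from the research to the problem "
        "we solve. Don't write the email. Don't include a CTA. Just give me three different "
        "opening hooks I could build from."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    # Returns three candidate hooks. The opener itself still gets written by the rep.
    return response.choices[0].message.content
```

The gap in that function is the point: it stops at angles. Everything after the hook is yours.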

Here's what NOT to do:

Bad prompt — full delegation:

"Write a cold email to [Title] at [Company] about our sales engagement platform. Make it personalized, friendly, and end with a clear CTA for a 15-minute meeting."

That prompt produces an email that's polished, generic, and instantly skippable. It's the email version of a stock photo. Looks fine, lands nowhere.

I tried both approaches across about 200 emails over two weeks. Angle-generation prompts (where I wrote the opener) replied at roughly 5%. Full-delegation prompts replied at 0.8%. The full-delegation emails were measurably "better written" by any objective standard. They just got ignored.

For more on the cadence side of this (when and how often these emails should land), see Cold Email Cadences That Actually Work in 2026.

3. Conversation Prep With AI Roleplay

This one is underused, and it's free.

Before a discovery call with a senior buyer, you can rehearse the conversation with AI playing the role of the prospect. Done well, this is the closest thing to live practice you can get without burning a real meeting.

Good prompt — discovery roleplay:

"You're playing a skeptical CFO at a 2,000-person logistics company. You've been pitched on sales tech twice this quarter and both times the ROI didn't materialize. You're polite but tired. You'll give the rep three questions, then if their answers don't make sense you'll politely end the call.

I'm the rep. I sell [your product, one sentence]. The meeting is starting now. Open with whatever a CFO in your position would actually say to a BDR they barely remember booking with."

Then you run the call. The model will push back. It'll ask you what your last three customers in their industry achieved. It'll ask why you're talking to the CFO and not the VP of Sales. It'll politely note that you haven't actually answered the question you were just asked.

After 15 minutes of this, walk into the real call and watch how much sharper your answers get. The roleplay isn't a substitute for the meeting. It's a substitute for the bad first version of your answers, which the prospect now doesn't have to sit through.

A variation: roleplay a buyer who's already evaluating a competitor. Tell the model who the competitor is and what they're known for. Then practice the conversation about why a buyer should still take a second meeting.
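
If you'd rather rehearse from a terminal than a chat window, the roleplay is just a running message history that alternates turns. A minimal sketch under the same assumptions as the earlier snippets (OpenAI Python SDK, placeholder model name); the persona text is lifted straight from the prompt above.

```python
# Sketch: an interactive discovery-call roleplay in the terminal.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You're playing a skeptical CFO at a 2,000-person logistics company. You've been pitched "
    "on sales tech twice this quarter and both times the ROI didn't materialize. You're polite "
    "but tired. You'll give the rep three questions, then if their answers don't make sense "
    "you'll politely end the call. The rep sells [your product, one sentence]. Open with "
    "whatever a CFO in your position would actually say to a BDR they barely remember booking with."
)

messages = [{"role": "system", "content": PERSONA}]

while True:
    # The model speaks first each round, as the CFO, based on the history so far.
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    cfo_line = reply.choices[0].message.content
    print(f"\nCFO: {cfo_line}\n")
    messages.append({"role": "assistant", "content": cfo_line})

    rep_line = input("You: ")
    if rep_line.strip().lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": rep_line})
```

Type "quit" to end the rehearsal.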

4. Follow-Up and Recap

This is the lowest-risk, highest-leverage AI use in the whole funnel. The prospect already knows you. You've already had a conversation. You're just trying to get the recap and next-step right.

Good prompt — post-call recap:

"Here are my notes from a 30-minute discovery call with [Title] at [Company]: [paste your raw notes].

Please draft:

  1. A 4-line recap email summarizing the three things they cared about most, what we agreed I'd send them, and a proposed next meeting date.
  2. A separate, slightly longer internal note for my CRM with the call's key qualifying signals (budget, timeline, decision process, blockers) and a recommended next step.

Use my voice from the notes. Don't add anything I didn't say."

The "don't add anything I didn't say" line is doing real work there. Without it, the model will helpfully invent context ("as we discussed, your team is targeting Q3") that you actually didn't discuss, and your prospect will quietly lose trust the next time they re-read the email.

Edit the recap before sending. Always. But the draft saves 10-15 minutes per call, and across a week of demos that's hours back.
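
Because this prompt asks for two separate artifacts, the prospect-facing recap and the internal CRM note, it helps to ask the model for an explicit separator so each piece can be routed to the right place. A rough sketch under the same assumptions as above; the "---" separator is my own convention, not part of the original prompt.

```python
# Sketch: draft a recap email and a CRM note from raw call notes, then split them.
# Assumes the OpenAI Python SDK; the model name and the "---" separator are placeholders.
from openai import OpenAI

client = OpenAI()

def recap_drafts(title: str, company: str, raw_notes: str) -> tuple[str, str]:
    prompt = (
        f"Here are my notes from a 30-minute discovery call with {title} at {company}:\n{raw_notes}\n\n"
        "Please draft:\n"
        "1. A 4-line recap email summarizing the three things they cared about most, "
        "what we agreed I'd send them, and a proposed next meeting date.\n"
        "2. A separate, slightly longer internal note for my CRM with the call's key qualifying "
        "signals (budget, timeline, decision process, blockers) and a recommended next step.\n\n"
        "Separate the two with a line containing only '---'. "
        "Use my voice from the notes. Don't add anything I didn't say."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,
    )
    text = response.choices[0].message.content
    recap_email, _, crm_note = text.partition("---")
    # The recap still gets a human edit before it goes anywhere near the prospect.
    return recap_email.strip(), crm_note.strip()
```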

5. Account-Level Synthesis

When you're working a target account list and your AE asks "what's the read on Acme Logistics this quarter?", assembling an answer used to mean 20 minutes of digging through your CRM, Slack, and call notes. AI can compress that.

Good prompt — account synthesis:

"I'm going to paste in everything we have on [Account]: my last six emails with their team, the notes from two discovery calls, and the most recent activity in our CRM. Please summarize:

  1. Where we are in the sales process (one sentence)
  2. The two strongest buying signals
  3. The two biggest concerns or blockers
  4. Any commitments I made that I haven't followed through on yet
  5. The single best next action

If something is unclear or missing, say so. Don't fill gaps with assumptions."

That last line, again, is what separates a useful summary from a misleading one.

What AI Shouldn't Do

This is the part most "AI for sales" content skips. Here's the honest version.

AI should not replace discovery questions on a live call. If you're using AI to generate questions to ask a prospect in real time, you've outsourced the most important thinking in the job. Discovery is where you decide if this deal is real. That's a human read on a human reaction. AI doesn't catch a wince. It doesn't notice when someone's voice gets quieter on a question about budget. It doesn't know that the third "yeah, totally" was the one that meant "no."

AI should not judge whether a persona is actually a fit. If your ICP says "Director of Operations at logistics companies, 500-2,000 employees, no current automation tooling," that's a judgment call about strategic fit. AI will gladly apply your filters mechanically and serve you a list. It cannot tell you that two of those companies just got acquired and are in a buying freeze, or that one of them is famously hostile to your category. Persona judgment lives with the rep and the AE.

AI should not decide who to call next. Some sales engagement tools now offer "AI-prioritized" call lists. Use these as suggestions, not commands. The reason: the model is optimizing for whatever signal it has, which is usually engagement data. The accounts that aren't engaging yet might be the ones where a sharp human call breaks through. If you let AI pick your list every day, you'll spend a quarter calling only the accounts that already half-want to talk to you, and you'll miss everything else.

AI should not write the opener of a first-touch email. I covered this above but it's worth repeating. The first sentence is where you signal you're a real human. Hand that sentence to AI and you signal the opposite.

AI should not handle anything where tone signals trust. Anything emotional, anything political inside a deal, anything where the prospect needs to feel that a specific person is paying attention to their specific problem: keep that manual. The rule of thumb: if a misread tone could kill the deal, the words should come from your fingers.

Prompts to Never Use

A few categories that are genuinely lazy and produce slop:

Bad — vague delegation: "Write a cold email to a marketing leader."

Output: pure stock prose. The model has no specific person, no problem, no angle. It defaults to averages and serves you the average of every cold email it's ever seen.

Bad — flattery as personalization: "Write a cold email that opens with a compliment about [Company]'s recent press."

Output: an email that opens with a compliment so generic it could apply to any company that ever announced anything. Buyers parse "saw your recent announcement, congrats" as the AI tell that it is.

Bad — feature dumping: "Write a cold email that explains all the features of our product and asks for a meeting."

Output: a 200-word feature list that no one reads.

Bad — emotional manipulation prompts: "Write a cold email designed to create urgency and FOMO."

Output: cringe. Real urgency comes from a real situation in the buyer's business, not from copy tricks.

If your prompt doesn't include a specific person, a specific situation, and a specific angle, the AI doesn't have what it needs to produce a useful output. It will produce something anyway. That something will sound like everyone else.

The AI-Output Review Checklist

Before any AI-touched message goes out, run it through this. Takes 30 seconds. Saves your reply rate.

  • Does this sound like me? Read it out loud. If you wouldn't actually say these words in this order, rewrite the parts that aren't you.
  • Is every claim true? Go through every factual statement. Did the company actually announce that? Did the CEO actually say that? If you can't verify it in 60 seconds, cut it.
  • Would I send this to my best friend's company? This is the gut-check. If the email would embarrass you in front of someone who knows your work, it's not ready.
  • Does the opener earn the email? First sentence has to do real work. If it's a generic compliment or a "saw your post" line, replace it.
  • Is there one specific reason this email is going to this person today? If you can't articulate that reason in one sentence, the email shouldn't go.
  • Did I edit at least 60% of what AI gave me? If you're sending AI output more or less untouched, you're delegating, not directing. You'll see it in your reply rate within two weeks.

Measuring Whether AI Is Actually Helping

A few metrics worth tracking before and after you adopt AI in your workflow (a small tracking sketch follows the list):

  • Research time per account. Target: 25 minutes pre-AI down to 5 minutes post-AI. If you're not seeing that compression, your prompts aren't tight enough.
  • Reply rate to first-touch emails. Target: same or higher. If reply rate is dropping after AI adoption, you're sending more polished generic copy. Pull back.
  • % of AI output edited before sending. Target: 60%+ edits. Less than that means you're shipping the model's voice, not yours. Anyone who sends untouched AI output for two weeks straight will see their reply rate decay. It's that consistent.
  • Hours per week saved. Track it for two weeks. If you're not getting at least 3-4 hours back, your AI workflow has too much friction. Simplify.
  • Pipeline created per rep, quarter over quarter. This is the only metric that matters in the end. AI should move this number up. If it's flat or down while you're using AI more, the activity is producing volume without quality.
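
Two of those numbers, reply rate and the share of AI output you actually edited, are easy to compute rather than estimate. A small sketch using only the Python standard library; difflib's similarity ratio is a crude but honest proxy for how much of a draft you rewrote, and the 60% threshold mirrors the checklist above rather than any hard rule.

```python
# Sketch: quick-and-dirty tracking for two of the metrics above.
# Standard library only; thresholds and example numbers are illustrative.
from difflib import SequenceMatcher

def edit_ratio(ai_draft: str, sent_version: str) -> float:
    """Share of the AI draft changed before sending (0.0 = untouched, 1.0 = fully rewritten)."""
    similarity = SequenceMatcher(None, ai_draft, sent_version).ratio()
    return 1.0 - similarity

def reply_rate(replies: int, sends: int) -> float:
    """Replies divided by sends, e.g. 8 replies on 200 sends is 0.04, a 4% reply rate."""
    return replies / sends if sends else 0.0
```

If your average edit_ratio sits below 0.6 for a couple of weeks, you're shipping the model's voice, and the reply-rate column will usually tell you the same thing a little later.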

For more on common patterns that quietly tank performance, Common BDR/SDR Pitfalls and How to Avoid Them is the companion read.

How Rework Supports AI-Assisted Prospecting

Most AI prospecting failures aren't about the AI. They're about what happens to AI output between generation and send. Notes get lost in chat threads. Research summaries live in five different docs. Recap drafts get pasted into emails without ever being saved against the account, so the next rep who works it has no context.

Rework CRM gives BDRs and SDRs one surface where AI-generated research, call recaps, and follow-up drafts attach directly to the account record. Your prep prompt output lives on the account. Your roleplay notes live on the account. Your AI-drafted recap lands as a draft email tied to the contact. Nothing falls through. Pricing starts at $12/user/month.

The Bottom Line

AI compresses prep, not judgment.

The best BDRs in 2026 use AI to read 80 pages so they don't have to, then spend the saved time thinking harder about which 30 accounts deserve real attention this quarter. The worst BDRs use AI to write more bad emails faster. The first group's pipeline is up. The second group is wondering why their reply rate is collapsing.

Stay in the loop on every send. Edit ruthlessly. Keep judgment manual. Use AI for the parts of the job that genuinely don't require you, and keep your hands on the parts that do.

If AI is doing the thinking, the buyer can tell.
