The 171% Agentic AI ROI Number — What the Data Actually Says

You've probably seen this number. It's showing up in vendor decks, analyst summaries, and LinkedIn posts: enterprise agentic AI deployments deliver an average ROI of 171%. U.S. companies, some versions of the claim say, average 192%.
The number isn't fabricated. But how you read it matters a lot — and most of the people using it in sales presentations are reading it in ways that should make you skeptical.
According to Multimodal.dev's aggregation of enterprise survey data, the 171% figure represents what organizations deploying agentic AI systems report as their returns. The framing sounds clean. The reality is messier.
Where the Number Comes From
Multimodal.dev is an AI statistics aggregator, not a primary research institution. Their 171% figure pulls from enterprise surveys across multiple sources — organizations that have deployed agentic systems and self-reported their returns.
That sourcing structure creates at least three reliability problems:
Self-reported data skews toward success. Companies that deployed agentic AI and saw disappointing returns are less likely to participate in surveys about agentic AI ROI, and less likely to share results internally when they do. The population reporting 171%+ returns is not a random sample of all enterprise AI deployments — it's a sample weighted toward organizations motivated to talk about AI.
"Average" obscures the distribution. Multimodal.dev's own commentary notes that most enterprises see 10-15% productivity gains as typical outcomes, with 171%+ representing an outlier cohort. The math works out: if a small group of organizations sees 400-500% returns and a large group sees 20-30%, the average can easily land at 171% while the median is far lower. You're not buying a lottery ticket — you're allocating a budget line — and median outcomes matter more than averages for that decision.
Projections get mixed with audited results. In some interpretations, the 171% figure reflects projected or expected returns, not post-deployment audited numbers. Organizations that committed to agentic AI investment often built internal business cases predicting high returns. Whether those projections were later validated by actual results is a different question, and the survey methodology doesn't always distinguish between the two.
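The second problem, the average hiding the distribution, is easy to see numerically. Here's a minimal sketch with hypothetical cohort numbers (illustrative only, not Multimodal.dev's underlying data):

```python
# Hypothetical survey cohort: a small group of outlier deployments
# pulls the mean near the 171% headline while the typical
# respondent sits far lower.
from statistics import mean, median

outliers = [500, 480, 460, 450, 440, 420, 400]   # 7 standout deployments
typical  = [30, 30, 28, 27, 26, 25, 25,
            24, 23, 22, 21, 20, 20]              # 13 ordinary deployments
reported_roi = outliers + typical                # percent ROI, self-reported

print(f"mean ROI:   {mean(reported_roi):.0f}%")    # ~174%, the headline story
print(f"median ROI: {median(reported_roi):.1f}%")  # 27.5%, the typical story
```

Same dataset, two very different stories. Only the second one describes the deployment a typical buyer should expect.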
The Deloitte 2026 State of AI report, which surveyed 3,235 senior leaders across 24 countries, offers a useful calibration. Deloitte found that 66% of respondents reported productivity and efficiency gains from enterprise AI — but only 25% described the impact as "transformative." That's a meaningful gap. Two-thirds are seeing improvement; one-quarter are seeing the kind of step-change that produces outsized ROI. Those are not the same thing.
What Actually Produces Outsized Returns
Forrester's Total Economic Impact (TEI) research on Sprinklr provides a more constrained but honest picture: some AI programs do deliver ROI in the 200%+ range, but only when deployed "end-to-end across workflows" rather than as isolated tools. The deployment model matters as much as the technology.
This isn't surprising if you think about how productivity improvements compound. An agent that handles one task in one workflow saves some time. An agent ecosystem that handles multiple connected tasks across a full workflow — from lead qualification to pipeline update to forecast rollup — removes friction at every step. The compounding effect is where the high-return cohort lives.
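A back-of-envelope sketch makes the compounding point concrete. The cycle times below are hypothetical, not vendor benchmarks; the structural claim is that most elapsed time in a multi-step workflow sits in the handoffs, which a point solution never touches:

```python
# Hypothetical revenue workflow: (active_hours, handoff_wait_hours)
# per step. All numbers are illustrative assumptions.
workflow = {
    "lead qualification": (2, 24),
    "initial outreach":   (1, 48),
    "pipeline update":    (1, 24),
    "forecast rollup":    (2, 0),
}

baseline = sum(active + wait for active, wait in workflow.values())

# Point solution: halve the active work in one step; handoffs remain
point = baseline - workflow["lead qualification"][0] * 0.5

# End-to-end: halve active work everywhere AND remove handoff waits
end_to_end = sum(active * 0.5 for active, _ in workflow.values())

print(f"baseline cycle time:   {baseline:.0f}h")     # 102h
print(f"one point solution:    {point:.0f}h")        # 101h
print(f"end-to-end deployment: {end_to_end:.0f}h")   # 3h
```

The exact figures don't matter; the shape does. Point solutions optimize steps. End-to-end deployments optimize the workflow.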
Based on the available data, the organizations seeing transformative returns from agentic AI share a few common characteristics:
- They had strong underlying data quality before deploying agents (garbage-in, garbage-out applies here more than anywhere)
- They redesigned processes around the AI, not alongside it — Deloitte found only 30% of organizations are doing this
- They deployed across full workflows, not as point solutions
- They had governance frameworks in place before deployment — only 21% of leaders in the Deloitte survey reported mature agent governance
If your organization doesn't check most of those boxes, the 171% average is not your baseline expectation.
Why the Average Is the Wrong Number to Target
Here's the practical problem with building a business case around "average ROI": the average answers the wrong question.
The average includes organizations with 10x the AI maturity, 10x the data readiness, and deployment models you may not be positioned to replicate in the next 12 months. Benchmarking against the average without accounting for your organization's specific starting point is like a first-year runner targeting a marathon average finish time — the average is real, but it describes a population you're not currently part of.
The more useful question is: What does the 50th-percentile outcome look like for an organization at our level of data readiness, governance maturity, and AI deployment experience?
And then: What specific investments would move us toward the upper half of that distribution?
That framing leads to concrete decisions. The benchmark framing leads to optimism that may not survive contact with implementation.
For context on what governance maturity actually requires before agents go into production, the RevOps governance checklist for AI agents covers the operational specifics. The gap between "we have agents" and "we have agents running inside a governance framework" is where most organizations are losing value right now, according to the Deloitte data.
A 4-Step Framework for Estimating Your Expected ROI
Instead of benchmarking against a headline number, here's a more grounded way to think about what you should actually expect:
Step 1: Audit your data quality baseline. Agentic systems are only as reliable as the data they consume. Before projecting ROI from AI agents in your revenue workflows, run an honest assessment: what percentage of your CRM records have complete, current data? What's your forecast accuracy today, before any AI involvement? If your data quality is low, your AI ROI ceiling is also low — regardless of the platform you deploy.
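As a sketch of what that audit can look like in practice (field names are hypothetical; adapt them to your own CRM schema):

```python
# A minimal sketch of the audit Step 1 describes. Field names are
# hypothetical placeholders, not any specific CRM's schema.
REQUIRED_FIELDS = ["company", "contact_email", "deal_stage",
                   "close_date", "amount", "last_activity"]

def completeness(record: dict) -> float:
    """Fraction of required fields that are present and non-empty."""
    filled = sum(1 for f in REQUIRED_FIELDS if record.get(f))
    return filled / len(REQUIRED_FIELDS)

# Hypothetical CRM export: one clean record, one partial record
records = [
    {"company": "Acme", "contact_email": "a@acme.io",
     "deal_stage": "proposal", "close_date": "2026-04-30",
     "amount": 42000, "last_activity": "2026-02-20"},
    {"company": "Globex", "contact_email": "",
     "deal_stage": "discovery", "close_date": None,
     "amount": None, "last_activity": "2025-11-02"},
]

scores = [completeness(r) for r in records]
print(f"average record completeness: {sum(scores) / len(scores):.0%}")  # 75%
```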
Step 2: Identify your end-to-end deployment candidates. High-return deployments happen across full workflows, not isolated tasks. Map two or three revenue or operational workflows end-to-end and identify where agents could remove friction at multiple steps. The workflow that spans lead qualification, initial outreach, pipeline update, and forecast input is more valuable than three separate point solutions that each handle one of those steps.
Step 3: Calculate your realistic productivity baseline. The 10-15% productivity gain figure Multimodal.dev describes as typical for most enterprises is a reasonable starting assumption for your first deployment. Model what 10% productivity improvement in your highest-leverage workflows is actually worth in dollars, headcount capacity, or cycle time. That's your conservative case — not the 171% headline.
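A worked version of that conservative case, with hypothetical team numbers you'd swap for your own:

```python
# Conservative-case value of the "typical" 10% gain. Every input
# here is a hypothetical assumption; replace with your own figures.
reps              = 40        # people working in this workflow
loaded_cost       = 140_000   # fully loaded annual cost per rep, USD
workflow_share    = 0.30      # fraction of their time in this workflow
productivity_gain = 0.10      # the "typical enterprise" baseline

annual_value = reps * loaded_cost * workflow_share * productivity_gain
print(f"conservative annual value: ${annual_value:,.0f}")  # $168,000
```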
Step 4: Define your path to the upper quartile. The organizations achieving transformative returns share identifiable characteristics: end-to-end deployment, process redesign, data quality investment, governance maturity. For each gap between your current state and those characteristics, assign a cost and a timeline. If you can close the gaps, model the ROI of doing so. If you can't close them in the relevant time horizon, the upper-quartile outcome isn't available to you yet — and your business case should reflect that honestly.
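The gap exercise reduces to a small table you can actually maintain. A sketch, with hypothetical costs and timelines:

```python
# Step 4 as a priced gap list. Costs and timelines are illustrative
# assumptions, not benchmarks.
gaps = [
    # (gap to close, cost in USD, months to close)
    ("CRM data quality remediation",   250_000, 6),
    ("process redesign around agents", 180_000, 4),
    ("agent governance framework",     120_000, 3),
]

total_cost    = sum(cost for _, cost, _ in gaps)
critical_path = max(months for _, _, months in gaps)  # assumes parallel workstreams

print(f"investment to reach upper quartile: ${total_cost:,.0f}")  # $550,000
print(f"earliest realistic timeline: {critical_path} months")     # 6 months
```

If that total cost and timeline don't fit your planning horizon, the conservative case from Step 3 is the number that belongs in the business case.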
This framework won't produce a number as clean as 171%. But it'll produce a number you can actually defend to your board and act on with your team.
What to Do This Week
The 171% ROI figure will keep circulating. Vendors will keep using it. Board members will keep asking about it. Your job as CEO is to have a grounded response — one that neither dismisses the opportunity nor overstates what your organization should expect.
This week:
- If you have an AI investment proposal or business case in front of you that cites the 171% figure, ask the presenting team: what's our data quality score today, and what's the methodology for this ROI projection? The answers will tell you a lot.
- Pull the Deloitte 2026 State of AI report summary (publicly available). The 25% "transformative impact" finding is the most honest single data point for where your peer group currently sits.
- Ask your Head of Operations or RevOps: what percentage of our AI-adjacent processes have actually been redesigned around AI, versus just augmented by it? The Deloitte data suggests this is the biggest predictor of whether you're in the transformative camp or the "some productivity gains" camp.
- Set an internal expectation that your first agentic deployment should target a 15-25% efficiency improvement in a specific, measurable workflow — not 171% across the business. Hit that target, build the governance muscle, then expand.
The data on agentic AI ROI is real. The question is whether it describes your organization's situation or someone else's. Right now, for most companies, it describes someone else's. The goal is to change that — deliberately, not by assuming the average applies.
Primary source: Multimodal.dev — Agentic AI Statistics. Supporting data: Deloitte 2026 State of AI.
