Revenue Operations Insights
Attribution Is Broken. Here's What to Measure Instead
B2B attribution has been "almost solved" for a decade. First-touch, last-touch, multi-touch, data-driven — each model arrived promising an end to the marketing-sales blame game. None delivered. Attribution platforms proliferated. Board decks filled with channel contribution percentages. Marketing and RevOps built elaborate attribution taxonomies. And yet every RevOps leader knows the dirty truth: the numbers don't add up, the models contradict each other, and everyone ends up arguing about methodology instead of making investment decisions.
The problem isn't which attribution model you choose. It's that attribution asks the wrong question entirely. "Which touchpoint caused this deal?" is an almost unanswerable question in B2B buying. The right question is "which investments should we make more of?" And attribution data is a poor tool for answering it.
Why Attribution Will Always Be Partially Wrong in B2B
The structural problems aren't fixable. They're features of how B2B buying actually works.
Multi-stakeholder buying cycles. Enterprise deals involve 6–10 people across functions. Each stakeholder has their own touchpoint history. Your CRM captures the contacts you know about and have records for. It misses the CTO who read your blog once and told the VP of Engineering to evaluate you. It misses the CFO who talked to their golf partner who uses your product. Those invisible influences shaped the buying decision. No attribution model captures them. Gartner's research on B2B buying groups puts the typical enterprise buying group at 6–10 stakeholders, with the majority of buying time spent in independent research that generates no trackable touchpoints.
Dark social and offline influence. In 2025, a significant portion of B2B research happens in Slack communities, private LinkedIn groups, peer conversations, and podcast episodes that generate no trackable click. The prospect who signed your $200k contract may have been influenced primarily by a Slack recommendation from someone in their network. Your attribution model recorded the last Google ad they clicked before filling out the demo form. Those two things are not the same. This is partly why understanding how modern B2B buying committees actually form matters for any honest measurement conversation — decisions rarely trace back to a single trackable moment.
Long sales cycles with attribution decay. An 18-month enterprise deal accumulates touchpoints across multiple fiscal quarters. Cookie consent frameworks and browser tracking prevention (Safari's ITP and its equivalents) limit how far back your tracking reaches. The touchpoints from months 1–6 of a deal cycle are often invisible to attribution models that rely on pixel tracking, even with good UTM hygiene.
Channel interaction effects. Paid search performs better when content marketing is running. Events drive organic searches that look unattributed. Sales outreach converts at higher rates when prospects have already seen LinkedIn ads. These interaction effects mean that isolating the contribution of any single channel produces a false reading. Attribution models assume channels work independently. They don't.
Acknowledging these limits isn't defeatism. It's the starting point for building measurement that actually helps.
The Three Things Companies Do When Attribution Fails
When attribution data is unreliable, organizations tend to fall into one of three failure modes.
Over-investment in last-touch channels. Last-touch attribution systematically overstates the contribution of high-intent, low-funnel channels like branded search and direct traffic. These channels convert prospects who were already in the buying process — they didn't generate the intent, they captured it. When attribution data shows "70% of revenue from paid search," budget shifts toward paid search. Content marketing and brand awareness investments get cut because they show up as low-attribution channels. The machine optimizes for closing intent it didn't create, and awareness-building eventually dries up.
Gut-feel override. The marketing team looks at attribution data, doesn't trust it, and makes budget decisions based on intuition anyway. Except now they have to maintain the attribution system and report on it because the board asked for it, while privately making decisions based on what the CMO believes. The measurement overhead exists without the decision benefit.
Attribution theater for board slides. RevOps builds a beautiful multi-touch attribution dashboard. Marketing presents the numbers to the board with a confident narrative. The board asks "so what should we spend more on?" and the answer is essentially "more of everything that shows positive attribution," which is useless and everyone knows it. The theater exists to appear data-driven without actually being data-driven.
These failure modes aren't signs that marketing or RevOps is incompetent. They're predictable responses to a measurement framework that doesn't fit the thing it's trying to measure.
What to Measure Instead
The alternative isn't abandoning measurement. It's measuring things that more reliably answer "where should we invest?"
Pipeline source cohorts. Instead of asking "what touched this deal?", ask "where did this pipeline cohort come from, and how did it perform?" A cohort is a group of deals that originated from a specific source channel in a specific time period. Track win rate, average sales cycle length, and ACV for each source cohort. This approach doesn't require attributing credit across touchpoints. It requires cleanly tracking where deals entered the pipeline and how they closed. Consistent sales qualification frameworks are what make source cohorts comparable — if qualification criteria differ by rep or team, cohort win rates reflect process variation, not channel quality.
This is also structurally more honest. If inbound content-sourced deals in Q2 closed at 28% win rate with an average ACV of $42k, and outbound SDR-sourced deals closed at 18% win rate with $31k ACV, you have a defensible investment signal. The content investment is generating higher-quality pipeline. You can argue for maintaining it without resolving the attribution question.
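To make the cohort rollup concrete, here is a minimal sketch in pandas, assuming a flat CRM export of closed deals. The file name and columns (deal_id, source_channel, stage, acv, created_date, closed_date) are illustrative placeholders, not any particular CRM's schema:

```python
import pandas as pd

# Hypothetical CRM export: one row per closed deal. Column names are
# illustrative, not any vendor's schema.
deals = pd.read_csv("closed_deals.csv", parse_dates=["created_date", "closed_date"])

# A cohort is the set of deals from one source channel that entered
# pipeline in one quarter.
deals["cohort_quarter"] = deals["created_date"].dt.to_period("Q")
deals["won"] = deals["stage"] == "closed_won"
deals["cycle_days"] = (deals["closed_date"] - deals["created_date"]).dt.days

cohorts = deals.groupby(["source_channel", "cohort_quarter"]).agg(
    deals=("deal_id", "count"),
    win_rate=("won", "mean"),
    median_cycle_days=("cycle_days", "median"),
)

# ACV is only meaningful for won deals, so compute it on that subset;
# the index aligns because both groupbys use the same keys.
cohorts["avg_won_acv"] = (
    deals[deals["won"]].groupby(["source_channel", "cohort_quarter"])["acv"].mean()
)

print(cohorts.reset_index().sort_values(
    ["cohort_quarter", "win_rate"], ascending=[True, False]
))
```

Note that nothing here assigns credit across touchpoints; the only inputs are where each deal originated and how it ended, which is exactly what makes the numbers defensible.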
Channel-influenced NRR. Which acquisition channels produce customers who expand? The deals that show up well in attribution aren't necessarily the ones that generate NRR above 110%. Track expansion rates and churn rates by original acquisition source. If your PLG trial cohort expands at 130% NRR and your outbound-sourced cohort expands at 95%, that's a different ROI picture than CAC alone suggests.
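A sketch of the same idea for NRR, assuming a customer-level extract with ARR at acquisition and ARR twelve months later, tagged by original acquisition source. The file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical export: customer_id, source_channel, arr_start, arr_12mo.
# Churned customers must appear with arr_12mo = 0 so churn drags NRR down.
customers = pd.read_csv("customer_arr.csv")

totals = customers.groupby("source_channel")[["arr_start", "arr_12mo"]].sum()
totals["nrr_12mo"] = totals["arr_12mo"] / totals["arr_start"]

print(totals.sort_values("nrr_12mo", ascending=False))
```

Summing ARR before dividing (rather than averaging per-customer ratios) weights the cohort by revenue, which is the view that matters for the ROI comparison.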
Rep-sourced vs. marketing-sourced win rates. Sales and marketing argue about attribution because they're competing for credit. A more useful frame is understanding the conversion difference between rep-sourced pipeline (cold outbound, referrals, and networking) and marketing-sourced pipeline (inbound, events, content). If the win rates are similar, you have two working engines. If one is dramatically outperforming, that changes headcount decisions more than attribution percentages do. McKinsey research on B2B marketing ROI measurement notes that companies relying on channel-level attribution consistently misprice their acquisition engines relative to companies tracking pipeline cohort quality.
Time-to-close by source. Some sources generate faster-moving deals even at similar quality. Referral-sourced deals often close 30–40% faster than cold outbound. If your sales team is capacity-constrained, sourcing decisions that reduce cycle length create more revenue per rep without adding headcount. Attribution models don't capture this. Cohort analysis does.
The Executive Alignment Problem
The reason attribution debates persist inside companies isn't analytical — it's political.
RevOps and marketing fight over attribution data because attribution determines budget and headcount. If content marketing is credited with 40% of revenue, the content team gets more budget. If attribution shows paid search drove it, the demand gen team wins the budget cycle. Both teams have incentive to advocate for the model that shows their channel most favorably.
The actual problem is that Sales, Marketing, and RevOps are operating as if they have separate revenue ownership rather than shared pipeline ownership. Attribution is a symptom of that misalignment. You're fighting about credit distribution because you haven't agreed on joint accountability for pipeline generation. Marketing and sales alignment around shared pipeline metrics is the organizational precondition for any measurement approach to work.
The executive conversation worth having isn't "which attribution model should we use?" It's "what does it take for Marketing and Sales to co-own a pipeline number?" When Marketing's success metric is pipeline generated (not MQLs) and Sales is accountable for sourcing a portion of their own pipeline, the attribution debate becomes less consequential. Both functions are on the hook for the same outcome.
RevOps can catalyze this by building reporting that shows pipeline contribution by function rather than by channel. "Marketing originated 55% of this quarter's pipeline, Sales originated 45%" is a cleaner accountability frame than multi-touch attribution percentages.
The Revenue Signal Portfolio
Attribution asks for a single source of truth. A better approach is a portfolio of signals, some quantitative and some qualitative, that collectively build a defensible investment view.
The Revenue Signal Portfolio framework has three layers:
Quantitative signals (backward-looking): Cohort win rates by source channel, cohort ACV by source channel, cohort NRR by source channel, time-to-close by source channel. These are calculated quarterly and trended over 4–6 quarters. They answer "what has been working?" They're reliable because they don't require attributing credit — just tracking deal origin and outcome.
Quantitative signals (forward-looking): Spend-per-pipeline-dollar by channel, pipeline velocity by source, cost per qualified opportunity. These answer "how efficiently are we generating pipeline right now?" They're less reliable than outcome-based metrics but necessary for in-flight budget decisions.
Qualitative signals (rep-reported influence): A simple quarterly survey asking reps to identify which marketing or brand touchpoints prospects mentioned unprompted in discovery calls. This captures dark social and offline influence in a structured way. It won't be statistically rigorous. But if 40% of reps report that prospects mention your podcast in discovery calls and your podcast doesn't show up in attribution data, that's a meaningful signal that the attribution model is missing something important.
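One way to assemble the three layers is a single quarterly table that puts cohort outcomes, efficiency metrics, and the rep survey side by side for each channel. A sketch under the assumption that each layer exports to a flat file with the hypothetical columns noted in the comments:

```python
import pandas as pd

# Layer 1, backward-looking: source_channel, win_rate, avg_won_acv,
# nrr_12mo, median_cycle_days (e.g. from the cohort rollups above).
cohort_outcomes = pd.read_csv("cohort_outcomes.csv")

# Layer 2, forward-looking: source_channel, spend, pipeline_dollars,
# qualified_opps for the current quarter.
efficiency = pd.read_csv("channel_efficiency.csv")
efficiency["spend_per_pipeline_dollar"] = (
    efficiency["spend"] / efficiency["pipeline_dollars"]
)
efficiency["cost_per_qualified_opp"] = (
    efficiency["spend"] / efficiency["qualified_opps"]
)

# Layer 3, qualitative: source_channel, pct_reps_reporting_mentions
# from the quarterly rep survey.
survey = pd.read_csv("rep_survey.csv")

portfolio = (
    cohort_outcomes
    .merge(
        efficiency[["source_channel", "spend_per_pipeline_dollar",
                    "cost_per_qualified_opp"]],
        on="source_channel",
    )
    .merge(survey, on="source_channel", how="left")  # survey coverage may be partial
)
print(portfolio.sort_values("win_rate", ascending=False))
```

A channel with mediocre forward-looking efficiency but strong cohort outcomes and high rep-reported mentions reads very differently in this table than it does in an attribution pie chart.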
Hard cohort data plus rep-reported influence gives you a more honest picture than either alone. You're not pretending to solve attribution. You're building a portfolio of evidence that allows defensible investment decisions.
Three Metrics to Bring to a Board Marketing Review Instead of Attribution
If you're presenting marketing performance to a board or exec team and you're tired of defending attribution percentages, here are three alternatives that create better conversations.
Pipeline quality by cohort source: "Marketing-sourced pipeline in Q1 closed at 24% win rate with $38k average ACV. That's up from 19% win rate in Q3 of last year." This is a trend statement about marketing efficiency that the board can evaluate without understanding attribution models.
Channel CAC with payback: "Our webinar program generated 47 qualified opportunities last quarter at $1,800 cost per opportunity. Based on cohort win rates, that translates to roughly $6,400 blended CAC for webinar-sourced customers, with 11-month payback at current gross margin." This connects marketing spend directly to the CAC payback number that growth-stage investors care about. Building this view requires a SaaS metrics dashboard that surfaces cohort-level data alongside aggregate KPIs.
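The arithmetic behind that statement is worth making explicit. A minimal sketch: spend and opportunity counts come from the example above, while the win rate, ACV, and gross margin are illustrative assumptions back-solved so the output lands near the quoted figures:

```python
# Worked version of the webinar example. Opportunity count and cost per
# opportunity come from the text; everything else is an assumption.
qualified_opps = 47
cost_per_opp = 1_800                                # dollars
spend = qualified_opps * cost_per_opp               # $84,600

win_rate = 0.28                                     # assumed webinar-cohort win rate
expected_customers = qualified_opps * win_rate      # ~13.2
blended_cac = spend / expected_customers            # ~$6,429 -> "roughly $6,400"

acv = 9_400            # assumed; back-solved so payback matches the quoted ~11 months
gross_margin = 0.75    # assumed
monthly_gross_profit = acv / 12 * gross_margin
payback_months = blended_cac / monthly_gross_profit  # ~10.9 months

print(f"CAC ${blended_cac:,.0f}, payback {payback_months:.1f} months")
```

The point of showing the chain is that every input is either a spend figure or a cohort outcome; no attribution weighting appears anywhere in the calculation.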
Rep-mentioned influence index: "In our Q1 rep survey, 38% of reps reported that prospects mentioned our content or community unprompted. That's up from 22% in Q3. We believe this reflects growing brand presence in the segment we're targeting, though it's not captured in attribution reporting." This builds credibility by acknowledging attribution limits while offering a qualitative signal that the board can anchor to. The concept of measuring influence through rep-reported signals rather than pixel tracking has been popularized by practitioners like Lenny Rachitsky — his analysis of product-led and brand-led growth mechanisms documents why untracked influence often explains more conversion variance than any attribution model will capture.
None of these metrics pretend attribution is solved. All three give the board something more useful than a pie chart of attribution credit.
Learn More
- CAC Payback: The Metric That Actually Predicts SaaS Survival — How to build investment decisions on cohort-based metrics rather than attribution percentages
- Pipeline Hygiene as a Cultural Practice, Not a Data Problem — Pipeline source data is only trustworthy if your CRM hygiene holds up
- The RevOps Maturity Model: From Reactive to Strategic — Strategic RevOps functions shape how attribution debates are resolved at the executive level
