MQL Definition Framework: How to Write an MQL Definition Both Teams Will Actually Respect

Every quarter, the same meeting happens. Marketing reports that they hit MQL targets. Sales reports that pipeline is thin. Both numbers are accurate. Neither team is lying.
The problem is the MQL definition. If it's loose (a single form fill, a score anyone can hit by reading two blog posts), marketing will hit their number while generating leads sales can't close. If it's unrealistic, requiring four high-intent actions and perfect firmographic fit, marketing will never hit pipeline goals and will quietly start gaming the criteria. For the authoritative definitions of MQL, SAL, SQL, and related funnel terms, see the marketing-sales alignment glossary.
A bad MQL definition is a silent tax on pipeline quality. You don't see the cost in the MQL count. You see it six weeks later in low SAL rates, high rejection counts, and reps who stop working the queue because they've learned not to trust it.
Why MQL Definitions Drift
Most MQL definitions start with good intentions and erode over time. The drift follows a predictable pattern.
Marketing writes the definition alone, typically during planning, without pulling closed-won data from sales. The criteria reflect what marketing can measure (form fills, content downloads, email opens) rather than what actually predicts a sales-ready lead.
The definition never gets validated against closed-won history. Nobody runs a backtest to ask: "Of the leads that closed last year, how many would have qualified as MQL under this definition at the time?" If the answer is 30%, the definition is too tight, filtering out the deals you actually win. If the answer is 95%, the definition may be too loose, or your sample is biased toward late-funnel captures.
Then a bad quarter happens. Pipeline misses. Leadership pressure builds. Someone adjusts the MQL threshold downward, dropping the score requirement or removing a criterion, to hit a short-term volume target. The definition changes unilaterally, without a joint review, without updating the criteria in the scoring model documentation.
Twelve months later, nobody knows what the current MQL definition actually is, and both teams are working from different assumptions. The next section shows how to fix that with a three-component framework that holds up under pressure.
Key Facts: MQL Quality and Rejection Rates
- The average B2B MQL rejection rate (MQLs that sales declines to pursue) is 40-70%, according to SiriusDecisions research spanning thousands of B2B companies. Most companies are spending marketing budget on leads sales won't touch.
- Companies with jointly defined MQL criteria see 26% better pipeline-to-revenue conversion than those where marketing defines MQL unilaterally, per Forrester's Revenue Operations research.
- Only 27% of B2B companies have a formally documented, jointly agreed MQL definition, per SiriusDecisions' Demand Waterfall benchmark research.
- 79% of marketing leads never convert to sales, with "poor lead quality" and "undefined handoff criteria" cited as the top two reasons, per MarketingProfs.
- When companies conduct a quarterly MQL definition review, they reduce their average rejection rate by 18% within two quarters, per research published by TOPO (now Gartner).
The average B2B MQL rejection rate is 40-70%, meaning most marketing teams are spending budget generating leads sales won't pursue, primarily because the MQL definition was written by marketing alone without closed-won validation, per SiriusDecisions research spanning thousands of B2B companies.
Companies with jointly defined MQL criteria see 26% better pipeline-to-revenue conversion than those where marketing defines MQL unilaterally, per Forrester's Revenue Operations research. The difference is not the quality of the definition. It's the joint ownership. When both teams help write the criteria, both teams defend them under pressure rather than gaming them.
Revenue teams that conduct quarterly MQL definition reviews reduce their average rejection rate by 18% within two quarters, per TOPO (now Gartner). A quarterly review is not a full redefinition. It's a 30-minute check of SAL rate and rejection reason patterns against the current criteria.
The Three Components of a Durable MQL Definition
A durable MQL definition, one that both teams will defend rather than undermine, has three components that must all be present.
The Three-Component MQL Model defines what a sales-ready marketing lead must satisfy before entering the sales queue. The three components are: Firmographic Fit (does this company match the ICP?), Behavioral Signal Threshold (has this contact demonstrated genuine purchase intent, not just curiosity?), and Timing / Urgency Indicator (is there a signal suggesting now is the right time to pursue rather than nurture?). All three must be present. A lead with high behavioral engagement at the wrong company type is not an MQL. A perfect-fit company with no behavioral signal is not an MQL. A lead with both fit and signal but whose engagement happened three months ago needs recency validation before routing to sales.
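The all-three rule above can be sketched as a simple gate. This is an illustrative sketch, not a standard implementation: the field names, score floor, and recency window are assumptions you would replace with your own documented criteria.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    fits_icp: bool                   # firmographic gate result (pass/fail)
    behavioral_score: int            # weighted behavioral score from the MAP
    has_high_intent_signal: bool     # pricing, demo, trial, or direct contact
    days_since_last_engagement: int  # recency of the most recent signal

def is_mql(lead: Lead, score_floor: int = 40, recency_days: int = 30) -> bool:
    """All three components must hold; any single one failing disqualifies."""
    firmographic = lead.fits_icp
    behavioral = lead.behavioral_score >= score_floor and lead.has_high_intent_signal
    timing = lead.days_since_last_engagement <= recency_days
    return firmographic and behavioral and timing
```

Note that the three checks are ANDed, never summed: a very high behavioral score cannot buy back a failed firmographic gate or a stale engagement window.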
Component 1: Firmographic Fit
Does this lead's company match the ICP? Firmographic fit is the gate, not the qualifier. If the company doesn't fit the ICP, high behavioral engagement doesn't make the lead sales-ready. It makes it an engaged lead in the wrong market.
Firmographic fit criteria should be binary where possible: company size above the minimum threshold, industry in the target verticals, geography in the target markets, business model that matches the product's use case. Pass/fail, not scored. Gartner's ICP framework describes this as the "environmental and firmographic attributes" gate: accounts that don't clear it shouldn't enter the qualification pipeline, and no engagement score can compensate for a company that's out of ICP.
Be specific with the criteria. "Mid-market B2B" is not a definition. "US-based B2B companies, 50-500 employees, with an existing CRM, SaaS or professional services business model" is a definition. Write the actual ranges. Both teams need to be able to apply the same filter independently and get the same result. The shared ICP framework shows how to build that filter jointly from both teams' data.
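A gate that specific is trivially mechanical, which is the point. Here is a hedged sketch of the example definition above as a pass/fail filter; the field names and threshold values are hypothetical stand-ins for your own documented ranges.

```python
# Illustrative thresholds matching the example definition in the text.
MIN_EMPLOYEES, MAX_EMPLOYEES = 50, 500
TARGET_GEOS = {"US"}
TARGET_MODELS = {"saas", "professional_services"}

def passes_firmographic_gate(company: dict) -> bool:
    """Binary pass/fail: every criterion must clear; there is no partial credit."""
    return (
        MIN_EMPLOYEES <= company["employee_count"] <= MAX_EMPLOYEES
        and company["geo"] in TARGET_GEOS
        and company["business_model"] in TARGET_MODELS
        and company["has_crm"]
    )
```

Because every input is a hard constant with explicit ranges, marketing and sales can each run this filter independently and get identical results, which is the test L27 asks for.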
Component 2: Behavioral Signal Threshold
Has this lead demonstrated enough engagement to indicate genuine interest, not just content consumption? The behavioral threshold distinguishes active interest from passive browsing.
Weak behavioral signals: opened one email, downloaded one top-of-funnel piece of content, visited the homepage. Any random visitor or curious researcher generates these signals without any purchase intent.
Strong behavioral signals: visited the pricing page, watched a product demo video, engaged with implementation or integration content, used a free trial or product calculator, attended a webinar and submitted a question, requested a direct contact.
The threshold should require a combination of signals, not a single action. "Score of 40 behavioral points or above, including at least one high-intent signal (pricing, demo, direct contact, or trial)" is more durable than "score above 30 points." The combination requirement prevents leads from gaming the threshold with low-intent engagement.
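The combination requirement is easy to express directly. This sketch assumes a hypothetical signal taxonomy; the set of high-intent signal names and the default score floor are illustrative, not prescribed.

```python
# Hypothetical high-intent signal names; use your own MAP's event taxonomy.
HIGH_INTENT = {"pricing_page", "demo_request", "trial_signup", "direct_contact"}

def meets_behavioral_threshold(score: int, signals: set[str],
                               score_floor: int = 40) -> bool:
    """Require the score floor AND at least one high-intent signal.
    A pile of low-intent points (email opens, blog visits) never clears
    the bar on its own."""
    return score >= score_floor and bool(signals & HIGH_INTENT)
```

The second clause is what makes the threshold durable: a lead at 60 points from newsletter opens still fails, while a lead at 40 points with a pricing-page visit passes.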
Your MAP's behavioral scoring model is only as good as the signals you've weighted. Make sure the highest-weighted signals are the ones your sales team has confirmed correlate with pipeline in closed-won data. See fit vs intent scoring for how to weight each signal type correctly.
Component 3: Timing / Urgency Indicator
The third component is context for why this lead should be pursued now rather than placed back in nurture. Timing context doesn't have to be explicit ("we have a Q3 deadline"), but something in the signal set should suggest the lead is in a reasonable buying window.
Timing indicators: requested a demo or contact (explicit intent), recently hired for a role that typically triggers this purchase (intent data signal), published a job description in a category adjacent to your product, attended a live event (lower latency than content downloads), engaged with competitive comparison content.
The timing component is the one most companies leave out of their MQL definition. Without it, you end up passing leads whose engagement happened weeks ago and whose intent signals have gone cold. Score decay (reducing scores over time when engagement stops) is part of the answer, but adding a recency requirement to the behavioral threshold is cleaner.
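Both mechanisms are a few lines each. The half-life decay and 30-day window below are illustrative defaults, not recommendations; tune them against your own sales cycle.

```python
def decayed_score(raw_score: float, days_idle: int,
                  half_life_days: int = 30) -> float:
    """Exponential decay: halve the behavioral score for every
    half-life of inactivity."""
    return raw_score * 0.5 ** (days_idle / half_life_days)

def within_recency_window(days_idle: int, window_days: int = 30) -> bool:
    """The cleaner alternative: a hard recency gate added to the
    behavioral threshold, independent of the score math."""
    return days_idle <= window_days
```

Decay alone can leave a once-hot lead hovering just above the score floor for weeks; the hard window guarantees that nothing stale reaches the queue regardless of its peak score.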
Writing the Definition Together: Joint Session Format
The MQL definition must be co-authored. Marketing alone writes definitions that maximize volume. Sales alone writes definitions that are impossible to hit. The right definition lives in the tension between both teams' interests, and it requires both in the room.
Who should attend:
- CMO or VP Marketing (or marketing director with ownership of demand gen)
- CRO or VP Sales (or sales director with visibility into lead quality)
- RevOps or Marketing Ops (whoever owns the scoring model and CRM configuration)
- An AE or SDR who works the MQL queue daily (they know what "good" actually looks like in practice)
What data to bring:
Pull the closed-won ICP profile from the CRM: what did the last 20 closed-won deals look like at the point of MQL creation? What were their scores? What signals triggered the MQL? Were there signal patterns that distinguished the deals that converted from the MQLs that got rejected?
Pull the rejected MQL cohort from the last 90 days: what criteria did those leads meet that triggered MQL status? What criteria were they missing that caused sales to reject them? Rejection reason codes (if you've been tracking them) are gold here. If you haven't been tracking them yet, the lead rejection and recycling workflow explains how to set up the feedback loop.
Session structure:
Start with the rejected leads, not the definition document. Show the group the last 30 rejected MQLs and the rejection reasons. Ask sales to describe what was missing. That list of missing criteria is your definition gap.
Then look at the closed-won data and ask: if we'd applied the new criteria to these leads at the time of MQL creation, would they have qualified? If yes, the definition is workable. If 60% of your closed-won leads wouldn't have qualified under the new criteria, you've over-tightened and need to adjust.
Document every decision and the reasoning behind it. A definition document without reasoning is vulnerable to being overridden by whoever's most senior in a future meeting.
Common MQL Definition Mistakes
Single-action triggers. "Downloaded one ebook = MQL." This trigger conflates curiosity with intent. A competitor, a student, or a lead three years from buying can all download an ebook. Single-action triggers inflate MQL volume while destroying MQL credibility.
Firmographic-only definitions. "All contacts from companies that match our ICP = MQL." This ignores engagement entirely. A contact at a perfect-fit company who has shown zero interest isn't sales-ready. They're a cold prospect who happens to work at the right company.
Over-complicated formulas that nobody can explain. If the scoring model requires 15 criteria, three weighted behavioral multipliers, and a decay function that nobody on the team can summarize in one sentence, it won't be trusted. Sales will reject leads whose scores they can't understand, and marketing will optimize for score metrics nobody believes in.
Threshold set to hit pipeline targets rather than predict quality. When threshold changes are driven by quarterly pipeline pressure ("we need 20% more MQLs this quarter so let's lower the threshold from 70 to 55"), the definition loses its meaning. Threshold changes must be driven by quality data (backtest results, SAL rate analysis, rejection pattern review), not volume targets.
Documenting and Publishing the Definition
Once agreed, the definition needs to live in a place both teams can find and reference, not in the deck from the planning meeting.
Recommended format: a shared document (Google Doc, Notion page, or Confluence page) with three sections: the definition in plain English, the specific criteria mapped to your scoring model, and the version history including the date and reason for each change.
Who owns it: RevOps or whoever maintains the MAP configuration. This is the person who will implement scoring model changes and confirm the system matches the documented definition.
Where it lives: linked from the CRM (so reps can access it when reviewing a lead), linked from the MAP configuration (so marketing ops can reference it when adjusting the model), and included in new hire onboarding for both marketing and sales. The marketing-sales SLA template is the companion document that governs what sales does once they receive the MQL.
The plain-English version should be short enough that any rep or marketer can describe the MQL definition in under 30 seconds. If they can't, it's too complex to be operational.
Testing the Definition Against History
Before finalizing a new or revised MQL definition, backtest it against 6 months of historical data. The test answers two questions:
Coverage test: Of the leads that became closed-won deals in the last 6 months, what percentage would have qualified as MQL under the new definition? If the answer is below 70%, your definition will filter out too many good leads.
Precision test: Of all leads that would have qualified as MQL under the new definition in the last 6 months, what percentage became closed-won deals (or at least SALs)? If the precision rate is below 15%, your definition is still too loose.
The backtest doesn't need to be exhaustive. Pull 6 months of closed-won data (minimum 20 deals) and 6 months of rejected-lead data (minimum 40 rejections) and run both cohorts through the proposed criteria manually. Two hours of analysis prevents six months of misalignment.
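The two tests reduce to two ratios over the same cohorts. One simplifying assumption in this sketch: it uses closed-won plus rejected leads as the lead universe, whereas a full backtest would score every historical lead; the `would_qualify` predicate stands in for your proposed criteria.

```python
def backtest(closed_won: list, rejected: list, would_qualify) -> tuple:
    """Coverage: share of closed-won deals the new definition would have caught.
    Precision: closed-won as a share of everything the definition would pass.
    Approximation: the MQL universe here is closed_won + rejected only."""
    qualified_wins = [lead for lead in closed_won if would_qualify(lead)]
    qualified_all = qualified_wins + [lead for lead in rejected if would_qualify(lead)]
    coverage = len(qualified_wins) / len(closed_won)
    precision = len(qualified_wins) / len(qualified_all) if qualified_all else 0.0
    return coverage, precision
```

Applied to the thresholds in the text: coverage below 0.70 means the criteria are filtering out too many eventual wins; precision below 0.15 means the criteria still pass too many leads that go nowhere.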
Review Cadence
| Trigger | Review Type | Action |
|---|---|---|
| Quarterly check-in (default) | Lightweight review | Compare SAL rate and rejection rate to baseline. Adjust threshold if drifted more than 5 points from target. |
| MQL rejection rate spikes above 35% | Full review session | Re-run backtest. Identify which criteria are producing false positives. Tighten behavioral threshold or firmographic filter. |
| Post-campaign MQL spike | Spot review | Check whether the spike is quality volume or volume from a non-ICP source. Prevent temporary campaign surge from triggering false definition change. |
| New market or product line | Full joint session | Re-run the full definition process. Different products often have different ICPs and different conversion signals. |
| New CMO or CRO joins | Definition reconciliation | New leaders bring prior-company definitions. Align on current definition explicitly before they operate on different assumptions. |
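The two quantitative triggers in the table can be wired into a monthly ops check. This is a sketch under the thresholds stated above (35% rejection rate, 5-point SAL drift); the qualitative triggers (new market, new leader) still need a human.

```python
def review_action(rejection_rate: float, sal_drift_points: float) -> str:
    """Map the cadence table's quantitative triggers to an action.
    rejection_rate is a fraction (0.40 = 40%); sal_drift_points is the
    SAL rate's drift from target, in percentage points."""
    if rejection_rate > 0.35:
        return "full_review"        # re-run the backtest, tighten criteria
    if abs(sal_drift_points) > 5:
        return "adjust_threshold"   # lightweight quarterly correction
    return "no_change"
```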
When Marketing and Sales Can't Agree
Sometimes the joint session surfaces a genuine disagreement that a 90-minute meeting won't resolve. Here's how to handle it.
Escalation path: If the CMO and CRO can't agree on the threshold, bring the data to the CEO or the RevOps leader. Present the backtest results. The decision should be data-driven, not hierarchy-driven. The lead response time benchmark is a useful reference point: a looser MQL threshold only helps if sales actually follows up before intent goes cold.
Interim proxy metrics: If you can't agree on the full definition, agree on one proxy metric to track for 90 days: SAL rate. Set a target SAL rate (70% is reasonable for most teams) and define that any definition change that drops SAL rate below target gets reversed. This creates a shared accountability mechanism even when teams disagree on inputs.
Pilot period: When teams disagree on threshold, run the proposed criteria as a parallel track for 60 days. Flag leads that would qualify under the new definition without routing them immediately. At day 60, review the pilot cohort: do they SAL and SQL at higher rates than the current MQL population? Use the data to end the debate.
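The day-60 review is a single comparison. In this sketch the 5-point minimum lift is an assumption, chosen so that a trivial difference doesn't flip the definition; pick a margin both teams agree on before the pilot starts.

```python
def adopt_pilot_definition(pilot_sals: int, pilot_leads: int,
                           current_sals: int, current_leads: int,
                           min_lift: float = 0.05) -> bool:
    """Adopt the proposed criteria only if the pilot cohort SALs at a
    meaningfully higher rate than the current MQL population."""
    lift = pilot_sals / pilot_leads - current_sals / current_leads
    return lift >= min_lift
```

With a pre-agreed margin, the day-60 meeting is a readout rather than a renegotiation: the pilot either cleared the bar or it didn't.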
Disagreement that can't be resolved is almost always a data problem. Someone doesn't trust the attribution model, or the CRM data isn't clean enough to analyze, or the teams are looking at different time windows. Fix the data issue before trying to resolve the definition argument. The Forrester B2B Revenue Waterfall offers one structural approach: shifting from lead-based to buying-group-based tracking, which sidesteps the individual MQL argument by measuring group-level engagement instead.
Sample MQL Definition Template
Here's a template your team can adapt. Fill in the bracketed fields with your specific criteria.
MQL Definition v1.0: [Company Name]
Last reviewed: [Date] | Owner: [RevOps/Marketing Ops name]
Firmographic criteria (must meet ALL):
- Company size: [minimum employee count] to [maximum employee count]
- Industry: [specific verticals]
- Geography: [target markets]
- Business model: [B2B SaaS / professional services / other]
Behavioral threshold (must meet ALL):
- Lead score of [score floor] or above
- At least one high-intent signal in the last [X days]: [list specific signals: pricing page, demo request, trial signup, etc.]
- No disqualifying signals present: [list any signals that override positive score: competitor domain, personal email, known wrong title, etc.]
Timing / recency:
- Engagement occurred within the last [X days] OR lead holds an active explicit request
Exclusion criteria:
- Company employee count below [minimum]
- Free email domain (gmail, yahoo, hotmail)
- Job title in exclusion list: [intern, student, competitor, etc.]
SAL rate target: [X%]. Review the definition if actual SAL rate drops below this threshold for two consecutive months.
For canonical definitions of MQL, SAL, SQL, and related terms, see the Marketing-Sales Alignment Glossary.
The Backtest as the Fastest Path to Agreement
In our experience with alignment sessions, the fastest way to break a definition impasse between marketing and sales is a backtesting exercise run before the meeting, not during it. Pull 6 months of closed-won deal data and 6 months of rejected MQL data. Apply the proposed definition to each cohort manually. If fewer than 70% of closed-won deals would have qualified under the new definition, the criteria are too tight. If the precision rate (closed-won as a percentage of all would-be MQLs) is below 15%, the definition is too loose. Presenting these two numbers at the start of the joint session converts a values debate ("we need more leads" vs. "we need better leads") into a data interpretation exercise. It's harder to defend a threshold that would have rejected 40% of last year's wins.
Frequently Asked Questions
What is an MQL and how is it different from a lead?
A Marketing Qualified Lead (MQL) is a contact that marketing has evaluated against agreed criteria, including firmographic fit, behavioral signal, and timing context, and determined is ready for sales follow-up. A raw lead is any contact who provided information or was identified as a potential buyer, with no qualification implied. The distinction matters because routing unqualified leads to sales wastes rep time and erodes trust in the marketing-generated queue.
What is the difference between an MQL and an SQL?
An MQL is a marketing judgment based on data signals that a contact is ready for sales outreach. An SQL (Sales Qualified Lead) is a determination made after a human qualification conversation that the contact meets your team's qualification criteria, typically budget, authority, need, and timeline. Every SQL starts as an MQL, but not every MQL becomes an SQL. The SAL-to-SQL conversion rate measures how many qualification conversations result in a confirmed sales opportunity.
How do you write an MQL definition that both marketing and sales will respect?
A jointly respected MQL definition requires joint authorship. The session should include marketing, sales leadership, RevOps or marketing ops, and at least one rep who works the queue daily. Start from rejected MQL data (what was missing?) and closed-won data (what did the wins have in common?). Document not just the criteria but the reasoning behind each one, so any future definition change has a burden of evidence rather than being decided by whoever is most senior in the room.
What is the right MQL rejection rate?
A rejection rate of 20-30% signals a healthy definition. Sales is reviewing leads carefully and rejecting the ones that don't qualify, but the majority are making it through. A rejection rate above 35-40% indicates the definition is too loose: marketing is generating leads that sales has learned not to trust. A rejection rate below 10% may indicate the definition is too tight or that reps are accepting leads to hit SAL targets without genuinely reviewing them. Track rejection rate monthly and treat sustained deviation from the target range as a trigger for a definition review.
What is the backtest method for validating an MQL definition?
A backtest applies the proposed MQL criteria retroactively to 6 months of historical data to check two ratios. The coverage test asks: what percentage of the last 6 months of closed-won deals would have qualified under the new definition? If the answer is below 70%, the definition is too restrictive. The precision test asks: of all leads that would have qualified as MQL, what percentage became closed-won (or at minimum a SAL)? If below 15%, the definition is too loose. Running both tests before finalizing the definition converts a debate about preference into a decision based on evidence.
How long should an MQL definition document be?
Short enough that any rep or marketer can recite the key criteria in under 30 seconds. The working document (with version history and reasoning) can be longer, but the operational definition, the criteria that get wired into the MAP scoring model, should fit on a single screen. A definition that requires a slide deck to explain is a definition that won't survive a leadership change.
When should a company change its MQL definition?
Trigger a review when: the MQL rejection rate climbs above 35% for two consecutive months; a new product line or market segment is added; a new CMO or CRO joins the team; or a quarterly pipeline review reveals that marketing-sourced pipeline is converting at a substantially lower rate than the previous period. Do not change the definition in response to short-term volume pressure (a bad quarter). Threshold changes driven by pipeline targets rather than quality data produce a definition that drifts progressively looser over time.