
Why Do My Sales and Marketing Teams Have Different Numbers?

When sales says “£4.2M ARR” and finance says “£3.8M,” the problem isn’t the people. It’s the systems.

Why Your Team Can’t Agree on the Numbers (And How to Fix It)

It’s the quarterly business review. Marketing presents: “We generated 340 MQLs and sourced £2.1M in pipeline.” Sales presents: “We received 180 qualified leads and have £1.4M in pipeline.”

The CEO looks at both slides. “Which one is right?”

Silence.

Both teams are looking at their own dashboards, built on their own definitions, pulling from their own tools. Neither is lying. But neither is telling the same story. And when leadership can’t trust the numbers, every decision becomes a debate.

Here’s what this costs you in practice: One team we worked with ran the same QBR with two sets of numbers for three consecutive quarters. Each review started with 30 minutes of “reconciliation” — arguing about whose pipeline figure was correct — before any actual strategy discussion happened. That’s 6 hours of senior leadership time per year spent debating what’s true instead of deciding what to do. Worse, the CEO lost confidence in both teams’ numbers and started making budget decisions based on gut feel instead.

This is the data trust problem — and it’s one of the most common and most destructive issues in B2B revenue organisations.

How the Same Data Tells Different Stories

The Definition Problem

The root cause is almost never bad data. It’s different definitions.

| Metric | Marketing’s Definition | Sales’ Definition | The Gap |
| --- | --- | --- | --- |
| Lead | Anyone who fills out a form or downloads content | Someone who’s been qualified and is worth contacting | Marketing counts 3x more “leads” |
| MQL | Lead scoring threshold based on engagement + fit | “A lead I’d actually call” | Marketing MQLs include contacts Sales would never touch |
| Pipeline | Any opportunity sourced from a marketing touchpoint | Deals with a confirmed next step and realistic close date | Marketing counts pipeline Sales hasn’t validated |
| Conversion rate | MQLs that become opportunities (any opportunity) | Qualified leads that result in meetings | Different numerators AND denominators |
| Revenue sourced | Revenue from deals where marketing was first touch | Revenue from deals where sales drove the close | Both claim credit for the same deals |

None of these definitions are wrong in isolation. They’re measuring different things. But when both teams present to the CEO using the same words for different concepts, the result is confusion and distrust.

The Tool Problem

It gets worse when each team lives in a different system:

```
MARKETING                            SALES
─────────                            ─────
HubSpot Marketing Hub                Salesforce CRM
          ↓                                  ↓
Tracks: form fills, page views,      Tracks: calls, meetings,
email opens, ad clicks,              deal stages, close dates,
content downloads                    contract values
          ↓                                  ↓
Reports: MQLs, campaign ROI,         Reports: pipeline value,
cost per lead, engagement            win rate, average deal size,
                                     quota attainment
```

These systems have different data models. Marketing tracks contacts and campaigns. Sales tracks accounts and opportunities. The join between them — which campaign influenced which deal — is often manual, incomplete, or built on assumptions.
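A toy sketch makes the gap concrete. Suppose the only bridge between the two systems is an exact email match (a common fallback; the contacts here are hypothetical) — most records end up existing on only one side:

```python
# Hypothetical contact emails exported from each system:
marketing_contacts = {"ana@acme.com", "bo@initech.com", "cy@hooli.com"}
crm_opportunity_contacts = {"ana@acme.com", "dee@umbrella.com"}

# The only join key is an exact email match:
matched = marketing_contacts & crm_opportunity_contacts
marketing_only = marketing_contacts - crm_opportunity_contacts  # "leads" sales never saw
crm_only = crm_opportunity_contacts - marketing_contacts        # deals with no tracked touch

print(len(matched), len(marketing_only), len(crm_only))  # 1 2 1
```

Every unmatched record is something one team counts and the other cannot see.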

The Timing Problem

Even when definitions align, timing creates discrepancies.

Marketing might count a lead as “generated” when the contact first fills out a form in January. Sales might not create an opportunity until late March, and if qualification slips a few weeks, the opportunity lands in Q2. The closed-won revenue might not arrive until Q3. When each team pulls its quarterly report, the same record shows up in a different period.

Same customer. Three different quarters. Three different reports.
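The effect is easy to demonstrate: group one customer journey by the timestamp each team actually uses (the dates below are illustrative, assuming the opportunity slips past quarter end and the deal closes later):

```python
from datetime import date

# One customer journey, three timestamps (illustrative dates):
events = {
    "form_fill": date(2024, 1, 15),   # marketing counts the lead here
    "opp_created": date(2024, 4, 2),  # qualification slipped: sales counts it here
    "closed_won": date(2024, 8, 20),  # finance counts the revenue here
}

def quarter(d: date) -> str:
    """Calendar quarter label for a date."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

quarters = {event: quarter(d) for event, d in events.items()}
print(quarters)
# {'form_fill': '2024-Q1', 'opp_created': '2024-Q2', 'closed_won': '2024-Q3'}
```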

The Real Damage of Misaligned Numbers

This isn’t just an annoying reporting problem. It has direct revenue consequences:

Budget Decisions Made on Bad Data

If marketing can’t prove pipeline contribution using numbers sales trusts, marketing budgets get cut — even if marketing is driving real value. Or the opposite: marketing keeps spending on channels that generate “leads” that never convert, because nobody has a shared view of lead-to-revenue conversion.

The Blame Cycle

When pipeline is short, the cycle is predictable: leadership asks why, sales blames lead quality, marketing blames follow-up, and both sides retreat to their own dashboards to prove their case.

This cycle doesn’t just waste time — it erodes the cross-functional collaboration that revenue growth depends on.

Forecasting Errors

If the pipeline number is different depending on who pulls it, the forecast is unreliable. And an unreliable forecast means hiring plans, spend commitments, and board expectations all rest on numbers nobody can defend.

How to Fix It

Step 1: The Definition Workshop

Get marketing, sales, CS, and finance leaders in a room. Spend two hours aligning on definitions for your top 15–20 metrics.

The non-negotiable list:

| Metric | Agree On |
| --- | --- |
| Lead | What constitutes a lead? Form fill? Demo request? Product sign-up? |
| MQL | Exact scoring criteria: what fit and engagement thresholds? |
| SQL | What does sales “acceptance” mean? A meeting held? A qualification call completed? |
| Opportunity | When is an opportunity created? What fields must be populated? |
| Pipeline | Is it all open opportunities? Only those with activity in the last 30 days? |
| Sourced vs. influenced | If marketing touches a deal that sales opened, who gets credit? |
| Won revenue | Contract signed? First payment received? First month of service? |
| Churn | When does a customer count as churned? Contract end? Non-renewal? Zero usage for X days? |

The rule: each metric gets one definition, documented in one place, used by all teams. No exceptions.

Step 2: One Source, Not One Tool

You don’t need to force everyone into the same tool. You need to ensure all tools feed into a shared reporting layer where the agreed definitions are applied consistently.

This can be a data warehouse with a metrics layer on top, a BI tool with centrally governed measures, or even a single well-maintained reporting model that every dashboard reads from.

The key: when someone pulls a number from the shared layer, it should match regardless of who pulls it. Marketing’s pipeline number and sales’ pipeline number should be the same number — because it’s calculated using the same definition from the same data.
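As a sketch of what the shared layer enforces, imagine one function that applies the agreed pipeline definition, and every report calls it. The field names, stage list, and 30-day threshold below are assumptions for illustration, not a prescription:

```python
from datetime import date, timedelta

# Hypothetical opportunity records pulled from the shared data layer:
opportunities = [
    {"stage": "Discovery", "value": 400_000, "last_activity": date(2024, 5, 28)},
    {"stage": "Proposal",  "value": 250_000, "last_activity": date(2024, 5, 30)},
    {"stage": "Prospect",  "value": 900_000, "last_activity": date(2024, 5, 29)},  # before Discovery: excluded
    {"stage": "Discovery", "value": 150_000, "last_activity": date(2024, 3, 1)},   # stale: excluded
]

QUALIFYING_STAGES = {"Discovery", "Proposal", "Negotiation"}

def pipeline_value(opps, as_of: date, max_idle_days: int = 30) -> int:
    """The one agreed definition: open opps at Discovery or later,
    with activity inside the idle window. Every team calls this."""
    cutoff = as_of - timedelta(days=max_idle_days)
    return sum(
        o["value"]
        for o in opps
        if o["stage"] in QUALIFYING_STAGES and o["last_activity"] >= cutoff
    )

print(pipeline_value(opportunities, as_of=date(2024, 6, 1)))  # 650000
```

Whoever pulls the number, marketing or sales, gets the same result because the filter logic lives in one place.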

Step 3: Expose the Logic

One of the biggest trust killers is hidden logic. A dashboard that shows a number without showing how it was calculated breeds suspicion.

A RevOps leader we spoke with described exactly this problem: their team didn’t trust the internal analytics tool because the logic was hidden. Users couldn’t verify if the definitions were correct or if the data source was accurate.

The fix: make every metric definition visible in the tool.

| Trust Killer | Trust Builder |
| --- | --- |
| Dashboard shows “142 MQLs” | Dashboard shows “142 MQLs (lead score > 50, created this month, excluding existing customers)” |
| Pipeline report shows “£3.2M” | Pipeline report shows “£3.2M (open opps, stage >= Discovery, last activity within 30 days)” |
| Churn report shows “4.2%” | Churn report shows “4.2% (accounts with ARR > £0 at period start that reached £0 ARR by period end)” |

When people can see the definition, they can verify it. When they can verify it, they trust it. When they trust it, they use it. This is the virtuous cycle that replaces the blame cycle.
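One lightweight way to make this concrete is to store the definition next to the value, so anything that renders the metric shows both. A minimal sketch (the `Metric` type and its fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    value: str
    definition: str  # always displayed next to the number, never hidden

    def render(self) -> str:
        """Render the value with its definition inline."""
        return f"{self.name}: {self.value} ({self.definition})"

mql = Metric(
    name="MQLs",
    value="142",
    definition="lead score > 50, created this month, excluding existing customers",
)
print(mql.render())
# MQLs: 142 (lead score > 50, created this month, excluding existing customers)
```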

Step 4: Version Control Your Definitions

Definitions drift. Someone tweaks a dashboard filter. A new product tier gets added and nobody updates the “active customer” definition. A marketing campaign uses a different form that doesn’t trigger the MQL score.

Treat definitions like code:

- Store them in one versioned place, not scattered across dashboard filters
- Require review and sign-off before a definition changes
- Keep a changelog so every team knows what changed, and when
- Re-validate downstream dashboards whenever a definition is updated

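A minimal sketch of what a drift check might look like, assuming definitions are versioned data in one repository (the structure and the metric below are hypothetical):

```python
# Canonical definitions live in one versioned file; each has a version number.
CANONICAL = {
    "mql": {
        "version": 3,
        "rule": "lead_score > 50 and created_this_month and not existing_customer",
    },
}

# What a given dashboard claims to be using (hypothetical):
dashboard_pins = {"mql": {"version": 2}}

def find_drift(canonical: dict, pinned: dict) -> list:
    """Return metrics where a dashboard pins an outdated definition version."""
    return [
        name for name, spec in pinned.items()
        if spec["version"] != canonical[name]["version"]
    ]

print(find_drift(CANONICAL, dashboard_pins))  # ['mql']: this dashboard is stale
```

Run a check like this in CI or on a schedule, and drift becomes a flagged incident instead of a silent divergence.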
Step 5: Build Shared Dashboards With Shared Ownership

Instead of marketing dashboards and sales dashboards, build revenue dashboards that both teams own:

The Revenue Funnel Dashboard

| Stage | Volume | Conversion to Next | Avg Time in Stage | Owner |
| --- | --- | --- | --- | --- |
| Website visitor | 12,400 | 3.2% to lead | | Marketing |
| Lead | 397 | 42% to MQL | 6 days | Marketing |
| MQL | 167 | 31% to SQL | 4 days | Marketing + Sales |
| SQL | 52 | 58% to Opportunity | 3 days | Sales |
| Opportunity | 30 | 27% to Closed Won | 34 days | Sales |
| Closed Won | 8 | | | Sales + CS |

When everyone looks at the same funnel with the same definitions, the conversation shifts from “your numbers are wrong” to “conversion from MQL to SQL dropped this month — what changed?”

That’s a productive conversation. That’s the conversation that drives revenue.
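A side benefit of a shared funnel: the arithmetic is checkable. Reproducing the table with its stated conversion rates yields the same stage volumes:

```python
# Conversion rates from the funnel table, applied stage by stage:
rates = {
    "Lead": 0.032,  # of website visitors
    "MQL": 0.42,
    "SQL": 0.31,
    "Opportunity": 0.58,
    "Closed Won": 0.27,
}

volume = 12_400  # website visitors
volumes = {}
for stage, rate in rates.items():
    volume = round(volume * rate)  # each stage converts from the previous one
    volumes[stage] = volume

print(volumes)
# {'Lead': 397, 'MQL': 167, 'SQL': 52, 'Opportunity': 30, 'Closed Won': 8}
```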

The Governance Model

Fixing this once isn’t enough. You need a lightweight governance model to keep it fixed:

| Activity | Frequency | Owner |
| --- | --- | --- |
| Definition review | Quarterly | RevOps lead |
| Dashboard audit | Monthly | RevOps + Data team |
| Cross-team metrics review | Bi-weekly | Marketing + Sales + CS leads |
| New metric approval | As needed | RevOps (gatekeeper) |
| Tool change impact assessment | Before any tool added/removed | RevOps + Data team |

RevOps is the natural owner here. They sit between marketing, sales, and CS. Their job is to ensure everyone is operating from the same playbook with the same numbers.

Signs It’s Working

You’ll know you’ve solved the data trust problem when:

- QBRs open with strategy, not 30 minutes of reconciliation
- Marketing and sales quote the same pipeline number without checking with each other first
- Leadership stops asking “whose number is right?” and starts asking “what do we do about it?”
- New dashboards reuse the shared definitions instead of inventing their own

The Bottom Line

Your teams don’t have different numbers because someone is bad at their job. They have different numbers because they’re using different definitions, different tools, and different timeframes to measure the same things.

The fix is straightforward but requires discipline: agree on definitions, expose the logic, build shared reporting, and govern it over time.

When everyone trusts the numbers, you stop debating what’s true and start deciding what to do about it. That’s where revenue growth comes from.

The team we mentioned at the start — the one that spent 30 minutes reconciling numbers every QBR — implemented a shared reporting layer with exposed definitions. Their first aligned QBR opened with: “MQL-to-SQL conversion dropped from 31% to 22% this month. What changed?” No debate about the number. Straight into problem-solving. That’s the shift.

More revenue. Fewer hires.

Eru gives your revenue team one place to get answers — with shared definitions, cross-source querying, and full transparency on how every metric is calculated. No more spreadsheet reconciliation. No more “whose number is right.” See how →

See how Eru reconciles revenue data across your teams automatically.

Book a demo →