Why retention metrics fail
Most retention dashboards gather dust. Here's why:
- Too many metrics — 40 charts on a dashboard means no one knows what matters
- Lagging only — showing what already happened, not what's about to happen
- No owner — metrics exist but no one is accountable for moving them
- Wrong granularity — company-level metrics that don't help individual CSMs or AEs take action
- Stale data — refreshed weekly or monthly when the business moves daily
The result: the data team builds something comprehensive, stakeholders glance at it once, and everyone goes back to gut feel.
The retention metrics hierarchy
Effective retention metrics operate at three layers, each serving a different audience with different needs.
Layer 1: Executive metrics
Audience: CEO, CFO, Board
Cadence: Monthly
- Net Revenue Retention (NRR)
- Gross Revenue Retention (GRR)
- Logo Retention Rate
- Churned ARR
- Contraction ARR
- Expansion ARR
Design principles: Keep to 6-8 metrics maximum. Show trends over time (not just current state). Include industry benchmarks for context. Make it scannable in under 60 seconds.
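The Layer 1 revenue metrics above reduce to a few simple ratios. A minimal sketch using common definitions (exact definitions of NRR/GRR vary by company, so confirm yours before reporting to the board):

```python
# Layer 1 retention formulas over a period, as fractions of starting ARR.
# These are the standard textbook definitions; adjust to your own conventions.

def revenue_retention(starting_arr, expansion_arr, contraction_arr, churned_arr):
    """Return (NRR, GRR) for a cohort of customers over a period."""
    nrr = (starting_arr + expansion_arr - contraction_arr - churned_arr) / starting_arr
    grr = (starting_arr - contraction_arr - churned_arr) / starting_arr
    return nrr, grr

def logo_retention(starting_logos, churned_logos):
    """Fraction of customers (logos) retained over the period."""
    return (starting_logos - churned_logos) / starting_logos

nrr, grr = revenue_retention(1_000_000, 150_000, 30_000, 70_000)
print(f"NRR {nrr:.0%}, GRR {grr:.0%}")  # NRR 105%, GRR 90%
print(f"Logo retention {logo_retention(200, 14):.0%}")  # Logo retention 93%
```

Note that GRR ignores expansion, which is why it can never exceed 100% while NRR can.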
Layer 2: Operational metrics
Audience: RevOps, CS Leadership
Cadence: Weekly/Monthly
- Renewal rate by segment
- Churn by root cause
- Time-to-value by cohort
- At-risk ARR (current pipeline)
- Save rate (at-risk accounts saved)
- CS coverage ratio
- Cohort retention curves
Design principles: Enable drill-down from summary to segment to account. Highlight changes and anomalies. Connect to action: what should we do differently?
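Of the Layer 2 metrics, cohort retention curves are the one teams most often compute by hand. A minimal sketch, assuming each customer record carries a signup cohort and months retained (field names are placeholders; adapt to your warehouse schema):

```python
# Cohort retention curve: for each signup cohort, the share of its
# customers still active N months after signup.
from collections import defaultdict

def cohort_curves(customers):
    """customers: list of dicts with 'cohort' (signup month) and
    'months_active' (months retained so far).
    Returns {cohort: [retention at month 0, month 1, ...]}."""
    by_cohort = defaultdict(list)
    for c in customers:
        by_cohort[c["cohort"]].append(c["months_active"])
    curves = {}
    for cohort, lifetimes in by_cohort.items():
        size = len(lifetimes)
        horizon = max(lifetimes) + 1
        curves[cohort] = [sum(1 for m in lifetimes if m >= month) / size
                          for month in range(horizon)]
    return curves

demo = [{"cohort": "2024-01", "months_active": m} for m in (0, 2, 5, 5)]
print(cohort_curves(demo))  # {'2024-01': [1.0, 0.75, 0.75, 0.5, 0.5, 0.5]}
```

Comparing curves across cohorts is what makes this actionable: a newer cohort sitting below an older one at the same month is an early warning that onboarding or product changes hurt retention.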
Layer 3: Account-level metrics
Audience: CSMs, AEs
Cadence: Daily
- Health score (composite)
- Usage trend (7-day, 30-day)
- Last CSM touch date
- Open support tickets
- Champion status
- Renewal date and likelihood
- Expansion signals
Design principles: Optimize for action, not analysis. Alert on meaningful changes, not noise. Show what to do, not just what's happening. Surface in the tools people already use (CRM, Slack).
Building the health score
Inputs that matter
| Category | Weight | Example Signals |
|---|---|---|
| Product usage | 35% | DAU/MAU ratio, feature breadth, usage trend, time in product |
| Engagement | 25% | QBR attendance, email response rate, support interactions, training completion |
| Support | 20% | Ticket volume trend, CSAT scores, escalation rate, resolution time |
| Commercial | 20% | Payment history, contract terms, expansion history, renewal timing |
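The table above translates directly into a weighted sum. A sketch, assuming each category sub-score has already been normalized to 0-100 (producing those sub-scores from raw signals is the harder part, covered below):

```python
# Composite health score using the category weights from the table.
WEIGHTS = {"product_usage": 0.35, "engagement": 0.25,
           "support": 0.20, "commercial": 0.20}

def health_score(sub_scores):
    """sub_scores: dict of category -> 0-100 score. Returns weighted 0-100 composite."""
    missing = WEIGHTS.keys() - sub_scores.keys()
    if missing:
        raise ValueError(f"missing categories: {missing}")
    return sum(WEIGHTS[cat] * sub_scores[cat] for cat in WEIGHTS)

score = health_score({"product_usage": 80, "engagement": 60,
                      "support": 70, "commercial": 90})
print(round(score, 1))  # 75.0
```

Keeping the weights in one place makes recalibration a one-line change rather than a model retrain.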
Scoring approach
Option 1: Rule-based scoring
Define thresholds for each signal and assign points. For example: DAU/MAU > 40% = 10 points, 20-40% = 6 points, < 20% = 2 points.
- Pros: Transparent, easy to explain, quick to implement, easy to adjust
- Cons: Requires manual calibration, may miss complex interactions between signals
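The DAU/MAU example above is all a rule-based scorer is: ordered threshold checks per signal, summed into the composite. A sketch with the article's illustrative thresholds (not recommendations):

```python
# Rule-based scoring for one signal, matching the thresholds in the text:
# DAU/MAU > 40% = 10 points, 20-40% = 6 points, < 20% = 2 points.

def score_dau_mau(ratio):
    if ratio > 0.40:
        return 10
    if ratio >= 0.20:
        return 6
    return 2

print(score_dau_mau(0.45), score_dau_mau(0.30), score_dau_mau(0.10))  # 10 6 2
```

Each signal gets its own small function like this; the transparency is the point, since a CSM can look at any score and see exactly which thresholds fired.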
Option 2: Model-based scoring
Train a model on historical churn/retention data to predict renewal probability.
- Pros: Captures complex patterns, improves over time, data-driven weights
- Cons: Requires sufficient historical data, harder to explain, needs ongoing maintenance
Recommendation: Start rule-based. Get buy-in and prove value. Then layer in model-based scoring once you have enough data and organizational trust in the system.
Calibration
Validate your health score against actual outcomes. If your "healthy" tier is churning at 8% and your "at-risk" tier is churning at 12%, the score isn't differentiating well enough.
Expected churn rates by health tier:
- Healthy: <3% annual churn
- Neutral: 5-10% annual churn
- At-risk: 20-40% annual churn
Track correlation monthly and adjust weights when the score drifts from reality.
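The monthly calibration check can be automated. A sketch using the expected bands above; tier assignments and churn outcomes would come from your own data:

```python
# Compare realized annual churn per health tier against expected bands.
EXPECTED = {"healthy": (0.00, 0.03), "neutral": (0.05, 0.10), "at_risk": (0.20, 0.40)}

def calibration_report(accounts):
    """accounts: list of (tier, churned_bool).
    Returns {tier: (actual_churn_rate, within_expected_band)}."""
    report = {}
    for tier, (lo, hi) in EXPECTED.items():
        outcomes = [churned for t, churned in accounts if t == tier]
        if not outcomes:
            continue
        rate = sum(outcomes) / len(outcomes)
        report[tier] = (rate, lo <= rate <= hi)
    return report

demo = ([("healthy", False)] * 49 + [("healthy", True)]
        + [("at_risk", True)] * 3 + [("at_risk", False)] * 7)
print(calibration_report(demo))
```

Any tier flagged outside its band is a signal to revisit the weights or thresholds before the score loses credibility with the CS team.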
Getting metrics used
1. Start with the decision, not the dashboard
| Decision | Metric Needed |
|---|---|
| "Which accounts need attention today?" | Health score changes, usage drops, missed check-ins |
| "Should we invest in onboarding redesign?" | Time-to-value by cohort, early churn rate |
| "Is our retention getting better or worse?" | Cohort curves, GRR trend, at-risk pipeline |
| "Where should we add CS headcount?" | Coverage ratio by segment, churn by ratio |
| "Which product gaps are causing churn?" | Churn by root cause, feature request correlation |
2. Embed in workflows
| Instead of... | Do this... |
|---|---|
| Dashboard that CSMs check daily | Slack alert when health score drops below threshold |
| Monthly retention report | Weekly email with top 5 at-risk accounts and recommended actions |
| Quarterly cohort analysis deck | Auto-generated cohort comparison surfaced in team standup |
| Annual churn review | Real-time churn reason tagging with monthly pattern alerts |
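The first workflow swap in the table, alerting on health-score drops instead of waiting for dashboard check-ins, is mostly a threshold-crossing check. A sketch that detects crossings and formats the message; the actual delivery call is omitted, and would wrap your Slack incoming-webhook URL:

```python
# Alert only when an account *crosses* below the threshold, not every time
# it is already below it, to keep the channel from becoming noise.
THRESHOLD = 60

def health_alerts(previous, current, threshold=THRESHOLD):
    """previous/current: dicts of account -> health score from consecutive runs.
    Returns alert strings for accounts that dropped below the threshold."""
    alerts = []
    for account, score in current.items():
        before = previous.get(account)
        if before is not None and before >= threshold and score < threshold:
            alerts.append(f":warning: {account}: health {before} -> {score} "
                          f"(below {threshold}). Check usage trend and last CSM touch.")
    return alerts

# Globex was already below 60, so only Acme's fresh drop fires an alert.
print(health_alerts({"Acme": 72, "Globex": 55}, {"Acme": 58, "Globex": 54}))
```

Including the recommended next step in the message itself is what turns the alert from monitoring into action.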
3. Assign owners
| Metric | Owner |
|---|---|
| NRR / GRR | VP Customer Success |
| At-risk ARR | CS Leadership |
| Health score accuracy | RevOps / Data Team |
| Time-to-value | Onboarding Lead |
| Save rate | CS Leadership |
| Account health (individual) | Assigned CSM |
4. Close the feedback loop
Track whether your predictions were right. When a health score said "at-risk" and the customer renewed anyway, understand why. When a "healthy" account churned, figure out what you missed.
This feedback loop is what makes metrics improve over time. Without it, you're flying blind and never learning.
The data team's checklist
Before shipping any retention metric or dashboard, validate against these seven checks:
- Decision-driven — Does this metric serve a specific decision someone needs to make?
- Owner assigned — Is someone accountable for this metric moving in the right direction?
- Right granularity — Is it at the right level (executive, operational, or account) for its audience?
- Fresh enough — Is the refresh cadence fast enough for the decisions it supports?
- Actionable — When this metric changes, is it clear what to do next?
- Calibrated — Does the metric correlate with actual outcomes? Have you validated?
- Embedded — Is it surfaced in the tools and workflows people already use?
The bottom line
The difference between metrics that gather dust and metrics that drive action comes down to three traits:
- They serve a specific decision — not general awareness, but a concrete choice someone faces regularly.
- They're embedded in workflows — surfaced where and when people need them, not locked in a dashboard.
- They have owners — someone is accountable for the metric and empowered to act on it.
Build for these three traits and your retention metrics will actually get used.
See how Eru delivers account-level health scores your team will actually use.
Book a call →