Why early warning beats late intervention
Discovering churn at renewal is too late. The customer has already made their decision. The budget has been reallocated. The replacement vendor is in procurement.
Early warning gives you 60-90 days of lead time. That's enough to:
- Address the root cause of dissatisfaction
- Re-engage a disengaged stakeholder
- Demonstrate value that's been overlooked
- Negotiate a save that protects margin
The difference between companies with 85% gross revenue retention (GRR) and 95% GRR isn't better products or better CSMs. It's earlier visibility into risk.
The four signal sources
1. Product usage
What to track:
- Login frequency: How often are users accessing the product?
- Feature adoption: Are they using the features that drive stickiness?
- Usage trends: Is usage growing, stable, or declining over 30/60/90 days?
- User breadth: How many users in the account are active vs. licensed?
Where it lives: Amplitude, Mixpanel, Pendo, Heap, or your own product analytics.
Key insight: Absolute usage level matters less than the trend. A customer who uses your product lightly but consistently is healthier than one whose heavy usage is declining.
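As a sketch, here's one way to operationalize that insight, assuming you've exported a per-day event count for the account from your analytics tool. The 20% threshold is an illustrative starting point, not a benchmark — tune it against your own churn history:

```python
from statistics import mean

def usage_trend(daily_events: list[int], window: int = 30) -> str:
    """Classify usage as growing, stable, or declining by comparing
    the most recent window against the window before it."""
    if len(daily_events) < 2 * window:
        return "insufficient_data"
    recent = mean(daily_events[-window:])
    prior = mean(daily_events[-2 * window:-window])
    if prior == 0:
        return "growing" if recent > 0 else "stable"
    change = (recent - prior) / prior
    if change <= -0.20:  # a 20%+ drop window-over-window flags as declining
        return "declining"
    if change >= 0.20:
        return "growing"
    return "stable"
```

Note that this deliberately ignores the absolute level: an account averaging 10 events a day that holds steady comes back `"stable"`, while a heavy account that halves its activity comes back `"declining"`.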
2. Support interactions
What to track:
- Ticket volume: Is the account submitting more tickets than usual?
- Resolution time: Are their issues being resolved quickly?
- Repeat issues: Are they hitting the same problem multiple times?
- Sentiment: Is the tone of communications shifting negative?
- Escalation: Are they asking for managers or threatening to leave?
Where it lives: Zendesk, Intercom, Freshdesk, or your support platform.
Key insight: A single frustrated ticket isn't a churn signal. A pattern of unresolved issues over 30 days is.
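A minimal sketch of that pattern check, assuming tickets exported from your support platform as dicts with a created date and a resolved flag. The field names and the three-ticket threshold are illustrative:

```python
from datetime import date, timedelta

def support_risk(tickets: list[dict], today: date, threshold: int = 3) -> bool:
    """Flag an account when it has `threshold` or more unresolved tickets
    opened in the last 30 days -- a pattern, not a single complaint."""
    cutoff = today - timedelta(days=30)
    unresolved = [
        t for t in tickets
        if t["created"] >= cutoff and not t.get("resolved", False)
    ]
    return len(unresolved) >= threshold
```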
3. Relationship health
What to track:
- CSM engagement: Are they responding to check-ins? Attending meetings?
- Email open/response rates: Are they engaging with communications?
- Champion status: Is your main contact still at the company? Still in the same role?
- NPS/CSAT scores: What's the most recent sentiment data?
Where it lives: Salesforce, HubSpot, Gainsight, Outreach, or your CRM.
Key insight: The most dangerous signal is silence. A customer who stops responding entirely is often further down the churn path than one who complains.
4. Commercial signals
What to track:
- Payment failures: Are invoices going unpaid or payments failing?
- Contract term changes: Did they switch from annual to monthly?
- Downgrade requests: Are they asking to reduce their plan or seats?
- Cancellation questions: Have they asked about cancellation terms or process?
Where it lives: Stripe, Chargebee, Recurly, or your billing system.
Key insight: Commercial signals are often late-stage indicators. By the time someone asks about cancellation terms, the decision may already be made. Use these in combination with earlier signals.
Building the system: three approaches
Approach 1: Manual scoring (spreadsheet)
Best for: Under 100 customers
How it works: CSMs review each account weekly and assign a Green/Yellow/Red status based on their knowledge of usage, support, and engagement. Track in a shared spreadsheet or CRM field.
Pros: Simple, fast to implement, leverages CSM intuition
Cons: Subjective, doesn't scale, depends on CSM attention, no leading indicators for accounts CSMs aren't watching closely
Approach 2: CRM-based scoring
Best for: 100-500 customers
How it works: Build integrations that pull key signals into your CRM (usage data via API, support metrics via integration, billing status via webhook). Create a scoring formula in your CRM that weights these signals and produces a health score per account.
Pros: More objective, scalable, integrates with existing workflows
Cons: Requires engineering effort to build integrations, CRM formulas are limited, data freshness depends on sync frequency
Approach 3: Customer data platform
Best for: 500+ customers
How it works: Use a dedicated platform that connects all signal sources, builds a unified health model, applies machine learning to weight signals based on historical outcomes, and surfaces risk scores with explanations.
Pros: Most accurate, fully automated, learns from outcomes, scales indefinitely
Cons: Cost, implementation time, requires historical data to train models
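To make "learns from outcomes" concrete, here's a toy sketch of how signal weights can be fit from historical churn data using a tiny logistic regression. Real platforms use far richer models and features; the feature layout here is hypothetical:

```python
import math

def train_signal_weights(X, y, lr=0.1, epochs=2000):
    """Fit a minimal logistic-regression model with stochastic gradient
    descent. Each learned weight reflects how strongly that signal
    predicted churn in the historical accounts.
    X: feature rows, e.g. [usage_decline, ticket_spike] per account.
    y: 1 if the account churned, 0 if it renewed."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for row, label in zip(X, y):
            z = b + sum(wi * xi for wi, xi in zip(w, row))
            p = 1 / (1 + math.exp(-z))  # predicted churn probability
            err = p - label
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, row)]
    return w, b

def churn_probability(row, w, b):
    z = b + sum(wi * xi for wi, xi in zip(w, row))
    return 1 / (1 + math.exp(-z))
```

On a toy history where only the first signal tracks churn, the model learns a large weight for it and near-zero weight for the noise signal — which is exactly the weighting behavior these platforms automate at scale.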
The scoring model
Regardless of approach, the scoring model should weight signals based on their predictive power:
| Signal Category | Weight | Key Metrics |
|---|---|---|
| Product Usage | 30% | Login trend, feature adoption, user breadth |
| Support Health | 20% | Ticket pattern, resolution, sentiment |
| CSM Engagement | 20% | Response rate, meeting attendance, NPS |
| Champion Status | 15% | Contact stability, role changes, departures |
| Billing Health | 15% | Payment status, term changes, downgrades |
Score each signal category from 0 to 100 based on its underlying metrics, multiply each category score by its weight, and sum the results for a composite score.
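A minimal sketch of that calculation, using the weights from the table above (the category keys are illustrative, and the per-category scores are assumed to be computed upstream from the underlying metrics):

```python
# Weights from the scoring table; each category score is 0-100.
WEIGHTS = {
    "product_usage": 0.30,
    "support_health": 0.20,
    "csm_engagement": 0.20,
    "champion_status": 0.15,
    "billing_health": 0.15,
}

def composite_score(category_scores: dict[str, float]) -> float:
    """Weighted sum of category scores -> composite health score (0-100)."""
    return sum(WEIGHTS[cat] * category_scores[cat] for cat in WEIGHTS)
```

For example, an account scoring 100 everywhere except a 50 on product usage lands at 85 — still Green by the tiers below, but the usage weight means further decline moves it fastest.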
Risk tiers and actions
| Score | Tier | Action |
|---|---|---|
| 80-100 | Green | Standard engagement, expansion opportunities |
| 60-79 | Yellow | Increased check-in frequency, value reinforcement |
| 40-59 | Orange | CSM escalation, executive sponsor engagement, recovery plan |
| 0-39 | Red | Immediate intervention, save playbook, leadership involvement |
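The tier cutoffs translate directly into a lookup — a minimal sketch using the thresholds from the table:

```python
def risk_tier(score: float) -> str:
    """Map a composite health score (0-100) to a risk tier."""
    if score >= 80:
        return "green"
    if score >= 60:
        return "yellow"
    if score >= 40:
        return "orange"
    return "red"
```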
Making it actionable
Playbooks by tier
Each risk tier should have a documented playbook — a specific set of actions that trigger when an account enters that tier. The playbook should include who owns the action, what they do, and when.
Alerts
Risk score changes should trigger alerts through channels your team already uses:
- Slack: Alert the account owner when a score drops below a threshold
- Email: Weekly digest of accounts that changed tier
- CRM tasks: Automatically create follow-up tasks for at-risk accounts
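As an illustration of the Slack case, here's a sketch using Slack's incoming-webhook API, which accepts a JSON `{"text": ...}` payload. The webhook URL comes from your own Slack app configuration, and the message format is just one option:

```python
import json
import urllib.request

def tier_change_message(account: str, old: str, new: str, score: float) -> str:
    """Build the alert text for a tier change."""
    return f":rotating_light: {account} moved {old} -> {new} (health score {score:.0f})"

def send_slack_alert(webhook_url: str, text: str) -> None:
    """POST the message to a Slack incoming webhook.
    webhook_url is a placeholder for your own Slack app's webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; add retries in production
```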
Track outcomes
Measure the effectiveness of your system by tracking:
- What percentage of churned accounts were flagged as at-risk?
- How much lead time did the system provide?
- What's the save rate for flagged accounts?
- Are false positives (flagged but didn't churn) at an acceptable level?
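A sketch of the coverage and save-rate measurements, assuming you record a flagged/churned outcome per account at each renewal. Note that "flagged but retained" mixes genuine saves with false positives unless you also track whether an intervention actually ran:

```python
def warning_system_metrics(accounts: list[dict]) -> dict:
    """Compute effectiveness metrics from renewal outcomes.
    Each account dict: {"flagged": bool, "churned": bool}."""
    churned = [a for a in accounts if a["churned"]]
    flagged = [a for a in accounts if a["flagged"]]
    caught = [a for a in churned if a["flagged"]]
    retained = [a for a in flagged if not a["churned"]]
    return {
        # share of churned accounts the system flagged in advance
        "churn_coverage": len(caught) / len(churned) if churned else None,
        # share of flagged accounts that renewed (saves + false positives)
        "flagged_retained": len(retained) / len(flagged) if flagged else None,
    }
```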
Common mistakes
- Too many signals: Start with 5-7 signals that are clearly predictive. Adding more noise doesn't improve accuracy — it dilutes it.
- Equal weighting: Not all signals are equally predictive. Usage decline is typically more predictive than NPS score. Weight based on historical correlation with churn.
- Infrequent scoring: Monthly scoring misses fast-moving risk. Score at least weekly. Daily is better.
- No ownership: A risk score without an owner is just a number. Every at-risk account needs a person responsible for the save.
- No feedback loop: If you don't track which signals actually predicted churn, you can't improve your model. Build in retrospective analysis quarterly.
What good looks like
A mature early warning system delivers:
- 60+ day lead time: You know about churn risk 2+ months before renewal
- 70%+ accuracy: At least 70% of churned accounts were flagged in advance
- Clear ownership: Every at-risk account has an owner and a plan
- Documented playbooks: Responses to risk are systematic, not ad hoc
- Measurable save rate: You know how many at-risk accounts you successfully retained
The build vs. buy decision
Building an early warning system is absolutely possible with your existing stack. The question is whether it's the best use of your team's time.
Building requires: engineering time for integrations, ongoing maintenance as tools change, data science expertise for scoring models, and operational discipline to keep it running.
The question to ask yourself: is churn prediction a core competency you want to build, or a capability you want to leverage so your team can focus on actually saving customers?
See your churn risk across every account, starting today.
Book a churn audit →