The forecast problem
Every quarter, the same ritual plays out. Finance asks RevOps for a retention forecast. RevOps pulls last quarter's churn and expansion rates, applies them to this quarter's renewal cohort, and delivers a number.
Then reality happens. Actual NRR comes in 6 points lower than forecast. A key account churned that nobody saw coming. Expansion deals slipped. Contraction happened in accounts that looked healthy.
The model wasn't wrong mathematically. It was wrong structurally. It treated every account the same, ignored leading indicators, and assumed the future would look like the past.
Why simple retention rates fail
Applying a flat retention rate to your renewal cohort has three fatal flaws:
- It ignores account-level variation. Your 95% gross retention rate is an average. Some accounts are at 99% probability of renewing. Others are at 60%. Treating them the same means you're wrong about both.
- It ignores leading indicators. Last quarter's churn rate tells you nothing about this quarter's risk. The accounts renewing this quarter have different usage patterns, different support histories, different champion stability than last quarter's cohort.
- It ignores expansion and contraction dynamics. NRR isn't just about who stays. It's about who expands, who contracts, and by how much. A flat rate misses the mechanics entirely.
What a real NRR forecast requires
1. Account-level risk scoring
Instead of applying a single retention rate, score each account individually based on observable signals:
- Usage trends: Is product usage increasing, stable, or declining?
- Support tickets: Are there unresolved issues, escalations, or sentiment shifts?
- Engagement: Is the customer responsive to CSM outreach? Attending QBRs?
- Champion stability: Is your main contact still there? Still engaged?
- Billing signals: Any failed payments, disputes, or downgrade requests?
- Contract signals: Did they shorten their term? Ask about cancellation?
Each signal contributes to a composite risk score that's specific to each account, updated regularly.
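The composite score described above can be sketched as a weighted average of normalized signals. The signal names, weights, and 0-1 normalization below are illustrative assumptions, not a tuned production model:

```python
# Sketch of a composite risk score as a weighted average of signals.
# Signal names, weights, and the 0-1 normalization are illustrative
# assumptions, not a tuned production model.

WEIGHTS = {
    "usage_trend": 0.30,        # 1.0 = growing usage, 0.0 = steep decline
    "support_health": 0.15,     # 1.0 = no unresolved issues or escalations
    "engagement": 0.20,         # 1.0 = responsive to outreach, attends QBRs
    "champion_stability": 0.15, # 1.0 = main contact still there and engaged
    "billing_health": 0.10,     # 1.0 = no failed payments or disputes
    "contract_health": 0.10,    # 1.0 = no term shortening or cancellation asks
}

def composite_score(signals: dict[str, float]) -> float:
    """Weighted average of 0-1 signals, scaled to a 0-100 score.

    Missing signals default to a neutral 0.5 rather than penalizing
    the account for a data gap.
    """
    return 100 * sum(w * signals.get(name, 0.5) for name, w in WEIGHTS.items())

# Example: strong usage, but a shaky champion and open support issues.
score = composite_score({
    "usage_trend": 0.9, "support_health": 0.4, "engagement": 0.7,
    "champion_stability": 0.3, "billing_health": 1.0, "contract_health": 0.8,
})  # roughly 69.5 on the 0-100 scale
```

The weighting here is the tuning surface: after each quarter, the weights are what you adjust when a signal proves more or less predictive than assumed.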
2. Expansion and contraction signals
NRR has three components: gross retention, contraction, and expansion. Your forecast needs to model all three:
- Gross retention: Which accounts are likely to churn entirely?
- Contraction: Which accounts are likely to downgrade or reduce seats?
- Expansion: Which accounts are likely to add seats, upgrade, or buy more?
Expansion signals include seat utilization approaching limits, feature adoption suggesting upsell readiness, and positive engagement patterns. Contraction signals include underutilization, team size reductions, and budget conversations.
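As one concrete example, the seat-utilization signal can be turned into an expansion or contraction flag with simple thresholds. The 90% and 50% cutoffs are assumptions, not benchmarks:

```python
# Turn seat utilization into an expansion/contraction flag.
# The 90% and 50% thresholds are illustrative assumptions.
def utilization_flag(seats_used: int, seats_licensed: int) -> str:
    utilization = seats_used / seats_licensed
    if utilization >= 0.9:
        return "expansion"    # approaching seat limits: upsell-ready
    if utilization <= 0.5:
        return "contraction"  # underutilized: downgrade risk
    return "neutral"
```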
3. Time-based weighting
Recent signals matter more than old ones. A usage decline last week is more predictive than a usage spike three months ago. Your scoring model should weight recent data more heavily and decay older signals.
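One simple way to implement this is exponential decay: a signal's weight halves every N days. The 30-day half-life below is an illustrative assumption to tune against your own forecast-vs-actuals data:

```python
# Exponential time-decay weighting: a signal observed d days ago gets
# weight 0.5 ** (d / half_life). The 30-day half-life is an assumption.

def decayed_weight(days_ago: float, half_life_days: float = 30.0) -> float:
    return 0.5 ** (days_ago / half_life_days)

def time_weighted_signal(observations: list[tuple[float, float]]) -> float:
    """observations: (value, days_ago) pairs; recent values dominate."""
    weights = [decayed_weight(days) for _, days in observations]
    values = [value for value, _ in observations]
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Last week's usage decline (0.2) outweighs a spike three months ago (0.9):
# the three-month-old observation carries only 12.5% of its original weight.
blended = time_weighted_signal([(0.2, 7), (0.9, 90)])
```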
The forecast framework
Step 1: Segment your renewal cohort by risk
Take every account renewing in the forecast period and assign a risk tier based on your scoring model:
| Risk Tier | Score Range | Description |
|---|---|---|
| Green | 80-100 | Healthy, engaged, expanding |
| Yellow | 60-79 | Stable but not growing, minor concerns |
| Orange | 40-59 | Declining engagement, multiple risk signals |
| Red | 0-39 | Active churn risk, intervention needed |
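The tier cutoffs above map directly to a small function (tier names and boundaries taken from the table):

```python
# Score-to-tier mapping, using the cutoffs from the table above.
def risk_tier(score: float) -> str:
    if score >= 80:
        return "Green"
    if score >= 60:
        return "Yellow"
    if score >= 40:
        return "Orange"
    return "Red"
```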
Step 2: Apply tier-specific retention assumptions
Instead of one retention rate, apply different assumptions per tier:
| Risk Tier | Expected Retention | Expected Expansion | Expected Contraction |
|---|---|---|---|
| Green | 98% | 15-25% | 2% |
| Yellow | 90% | 5-10% | 8% |
| Orange | 75% | 2% | 15% |
| Red | 50% | 0% | 25% |
Step 3: Calculate weighted NRR
For each tier, calculate the expected revenue outcome:
Tier expected revenue = Tier ARR × (Retention Rate + Expansion Rate − Contraction Rate)
Sum across tiers and divide by total beginning ARR to get your forecasted NRR.
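Steps 2 and 3 can be combined in a short calculation. The tier assumptions use the midpoint of each range from the table above, and the tier ARR figures are invented for the example:

```python
# Weighted NRR forecast. Tier assumptions are the midpoints of the
# ranges from the table above; tier ARR figures are invented examples.

# tier: (retention, expansion, contraction) as fractions of tier ARR
TIER_ASSUMPTIONS = {
    "Green":  (0.98, 0.20, 0.02),
    "Yellow": (0.90, 0.075, 0.08),
    "Orange": (0.75, 0.02, 0.15),
    "Red":    (0.50, 0.00, 0.25),
}

def forecast_nrr(tier_arr: dict[str, float]) -> float:
    """Expected revenue per tier, summed and divided by beginning ARR."""
    total = sum(tier_arr.values())
    expected = 0.0
    for tier, arr in tier_arr.items():
        retention, expansion, contraction = TIER_ASSUMPTIONS[tier]
        expected += arr * (retention + expansion - contraction)
    return expected / total

# Example cohort: $10M renewing, most of it healthy, a tail at risk.
nrr = forecast_nrr({"Green": 6_000_000, "Yellow": 2_500_000,
                    "Orange": 1_000_000, "Red": 500_000})
# roughly 0.994, i.e. a forecasted NRR near 99.4% for this cohort
```

Note how the mix drives the answer: shifting $1M of ARR from Green to Red moves the forecast by nearly a point, which is exactly the account-level sensitivity a flat rate hides.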
Step 4: Validate and iterate
After each quarter, compare your forecast to actuals by tier. Were your risk scores accurate? Were your tier assumptions right? Adjust both the scoring model and the tier assumptions based on outcomes.
The data connection problem
The framework above sounds straightforward. The hard part is getting the data.
Account-level risk scoring requires data from multiple systems:
- Product usage from Amplitude or Mixpanel
- CRM data from Salesforce or HubSpot
- Support data from Zendesk or Intercom
- Billing data from Stripe or Chargebee
- Engagement data from Outreach or Gainsight
Most companies face a choice: do it manually in spreadsheets (doesn't scale), build a data warehouse integration (expensive and slow), or ignore the problem (inaccurate forecasts).
What good looks like
A strong NRR forecasting system delivers:
- Accuracy: Forecast within 2-3 points of actual NRR consistently
- Visibility: Know exactly which accounts are driving the number — up or down
- Actionability: Risk-tiered accounts with clear next steps for each tier
- Accountability: Each at-risk account has an owner and a plan
The real question
NRR forecasts fail because they're built on averages, not signals. They treat every account the same and assume the future will match the past.
Better forecasting isn't about better math. It's about better inputs. Account-level signals, updated regularly, connected across your stack.
The question isn't whether your NRR forecast is right. It's whether you have the signal infrastructure to make it right.
Frequently Asked Questions
What is the best NRR forecasting software for SaaS?
The best NRR forecasting tools for SaaS include Eru (AI-powered revenue intelligence that connects billing, CRM, support, and product data for account-level NRR forecasting), Gainsight (enterprise customer success platform with retention scoring), ChurnZero (mid-market churn prediction), and Baremetrics (billing-only SaaS metrics). Eru differentiates by reconciling data across Stripe and Salesforce to produce forecasts based on leading indicators rather than lagging averages.
How do you forecast net revenue retention accurately?
Accurate NRR forecasting requires account-level risk scoring using signals from multiple systems (usage trends, support tickets, billing changes, champion stability), tier-specific retention assumptions instead of flat rates, and time-weighted signals where recent data is prioritised. Tools like Eru automate this by connecting your billing, CRM, and product systems to score each account individually.
Why do most NRR forecast models fail?
Most NRR forecast models fail because they apply a flat retention rate to all accounts, ignore leading indicators like usage declines and support spikes, and don’t model expansion and contraction dynamics separately. A better approach segments the renewal cohort by risk tier using account-level signals and applies different retention, expansion, and contraction assumptions per tier.
Eru scores every account daily based on signals across your stack — so your NRR forecast is built on leading indicators, not lagging assumptions.
Book a churn audit →