Your board wants monthly NRR predictions with 95% accuracy. Your VP of Finance wants to know whether expansion revenue will offset contraction next quarter. Your investors want to benchmark your retention against the top quartile. And you’re sitting on data scattered across six tools with no unified model to produce a number you can defend.
This is the reality for most Series B SaaS companies at $10M–$50M ARR. The data exists to forecast NRR accurately. The problem is connecting it, reconciling it, and turning it into a forecasting methodology that improves over time.
This guide covers everything a RevOps leader needs to implement NRR forecasting at the Series B stage: the forecasting methods that work, the build-vs-buy decision for your data infrastructure, how to predict expansion revenue reliably, how to set up a monthly NRR reporting cadence your board will trust, and how the major tools compare for this specific use case.
Why NRR Forecasting Is the Highest-Leverage Problem at Series B
At Series A, you track retention in a spreadsheet. You know every customer. You can feel when someone is about to churn. At Series B — 100 to 300 accounts, multiple segments, enterprise and mid-market mixed together, complex pricing with usage-based and seat-based components — intuition breaks down.
NRR is the single metric that tells your board whether your existing customer base is a growth engine or a drag on the business. A company with 115% NRR doubles its revenue from existing customers every 5 years without adding a single new logo. A company at 90% NRR is replacing 10% of its base every year just to stay flat.
The problem is that NRR at the Series B stage is hard to forecast accurately because:
- Revenue data lives in 4–6 disconnected systems. Billing is in Stripe. CRM is in Salesforce or HubSpot. Support is in Zendesk or Intercom. Product usage is in Amplitude or Mixpanel. No single system has the full picture of account health.
- Expansion patterns are complex and non-linear. Seat-based expansion, usage-based overages, cross-sell of new products, mid-cycle upgrades, and annual step-ups all behave differently and require different prediction models.
- Historical data is limited. At $15M ARR you might have 4–6 quarters of reliable data, with significant changes in pricing, packaging, and customer mix between quarters. Statistical models trained on small, shifting datasets produce unreliable forecasts.
- The consequences of getting it wrong are immediate. A 5-point miss on your NRR forecast changes the board conversation from “strong retention” to “do you understand your business?”
Getting NRR forecasting right at this stage isn’t about building a perfect model. It’s about building a defensible methodology that produces a range your board trusts and that improves with each quarter of actuals.
NRR Forecasting Methods That Work at $10M–$50M ARR
There are three approaches to NRR forecasting at the Series B stage, each with different data requirements and accuracy profiles.
Method 1: Cohort-Based Segmented Forecasting
This is the foundation. Segment your renewal cohort by risk tier and apply segment-specific assumptions for retention, contraction, and expansion. This method works even with limited historical data because it relies on observable account-level signals rather than statistical patterns.
How it works:
- Define your renewal cohort for the forecast period (typically a rolling quarterly or annual window).
- Score every account on 4–6 signals: product usage trend, support ticket pattern, champion stability, billing health, engagement depth, and contract signals.
- Assign each account to a risk tier (low, medium, high) and an expansion tier (likely, possible, unlikely) based on these scores.
- Apply segment-specific retention, contraction, and expansion rates calibrated against your own trailing data — not industry benchmarks.
- Sum the expected outcomes across all accounts to produce a portfolio-level NRR forecast.
Accuracy range: ±8–12% variance from actuals in the first quarter, improving to ±5–8% by the third quarter as you recalibrate assumptions.
Best for: Companies with 50–200 accounts, mixed pricing models, and limited historical data. This method is understandable, defensible, and improvable — the three qualities your board cares about most.
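The tier-level roll-up described above is simple enough to sketch in a few lines. The tier rates below are illustrative placeholders, not benchmarks — the method requires calibrating them against your own trailing data:

```python
# Cohort-based segmented NRR forecast: assign each account a risk tier and an
# expansion tier, apply tier-level rates, and roll up to a portfolio forecast.
# RETENTION_RATES / EXPANSION_RATES are illustrative -- calibrate to your data.

RETENTION_RATES = {"low": 0.97, "medium": 0.88, "high": 0.60}          # by risk tier
EXPANSION_RATES = {"likely": 0.20, "possible": 0.08, "unlikely": 0.0}  # by expansion tier

def forecast_nrr(accounts):
    """accounts: list of dicts with keys mrr, risk_tier, expansion_tier."""
    starting_mrr = sum(a["mrr"] for a in accounts)
    expected_mrr = 0.0
    for a in accounts:
        retained = a["mrr"] * RETENTION_RATES[a["risk_tier"]]
        expanded = retained * EXPANSION_RATES[a["expansion_tier"]]
        expected_mrr += retained + expanded
    return expected_mrr / starting_mrr

cohort = [
    {"mrr": 5000, "risk_tier": "low",    "expansion_tier": "likely"},
    {"mrr": 3000, "risk_tier": "medium", "expansion_tier": "unlikely"},
    {"mrr": 2000, "risk_tier": "high",   "expansion_tier": "possible"},
]
print(f"Forecast NRR: {forecast_nrr(cohort):.1%}")
```

Because the model is just tier rates applied per account, anyone on the board can audit it — which is precisely why this method is defensible with limited history.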
Method 2: Signal-Weighted Probabilistic Forecasting
This builds on cohort-based forecasting by assigning probabilities to individual account outcomes rather than tier-level averages. Each account gets a retention probability, a contraction probability, and an expansion probability based on its specific signal profile.
How it works:
- For each account, calculate a composite risk score using weighted signals from across your stack.
- Map the composite score to a retention probability using a calibration curve built from your historical data (e.g., accounts scoring 0–30 historically retained at 95%, accounts scoring 70–100 retained at 45%).
- Separately model expansion probability using expansion-specific signals: usage approaching tier limits, feature adoption breadth, upsell conversations in CRM, and contract step-up history.
- For each account, calculate expected MRR = (retention probability × current MRR) + (expansion probability × expected expansion amount) − (contraction probability × expected contraction amount).
- Sum across all accounts for portfolio NRR. Monte Carlo simulation across the probability distributions gives you a confidence interval.
Accuracy range: ±5–8% variance with sufficient historical data (6+ quarters).
Best for: Companies with 200+ accounts and 6+ quarters of reliable data. Requires more analytical infrastructure but produces tighter confidence intervals.
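The expected-MRR formula and the Monte Carlo confidence interval can be sketched together. The per-account probabilities below are illustrative; in practice they come from your calibration curve:

```python
import random

# Signal-weighted probabilistic forecast: per-account probabilities drive
# expected MRR; a Monte Carlo run over the same probabilities yields a
# confidence interval for portfolio NRR. All probabilities are illustrative.

accounts = [
    # (current_mrr, p_retain, p_expand, expansion_amt, p_contract, contraction_amt)
    (5000, 0.95, 0.30, 1000, 0.05, 500),
    (3000, 0.70, 0.10,  400, 0.20, 600),
]

def expected_nrr(accounts):
    start = sum(a[0] for a in accounts)
    expected = sum(mrr * pr + pe * ea - pc * ca
                   for mrr, pr, pe, ea, pc, ca in accounts)
    return expected / start

def simulate_nrr(accounts, runs=10_000, seed=42):
    """Monte Carlo: sample each outcome independently per run; return a 90% interval."""
    rng = random.Random(seed)
    start = sum(a[0] for a in accounts)
    results = []
    for _ in range(runs):
        total = 0.0
        for mrr, pr, pe, ea, pc, ca in accounts:
            total += mrr if rng.random() < pr else 0.0
            total += ea if rng.random() < pe else 0.0
            total -= ca if rng.random() < pc else 0.0
        results.append(total / start)
    results.sort()
    return results[int(0.05 * runs)], results[int(0.95 * runs)]

print(f"Expected NRR: {expected_nrr(accounts):.1%}")
lo, hi = simulate_nrr(accounts)
print(f"90% interval: {lo:.1%} to {hi:.1%}")
```

Presenting the interval rather than the point estimate is what turns this method into a board-ready base/upside/downside story.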
Method 3: Cross-System Leading Indicator Model
This is the most accurate method because it uses leading indicators from multiple systems rather than lagging account-level metrics. Instead of asking “what is this account’s current health?” it asks “what signals across billing, usage, support, and CRM predict what this account will do in the next 90 days?”
The signals that matter:
- Billing signals (from Stripe/Chargebee): Failed payment patterns, downgrade inquiries, shortened billing cycles, discount requests, payment method changes.
- Usage signals (from Amplitude/Mixpanel): Declining login frequency, narrowing feature adoption, decreased depth of engagement (fewer actions per session), usage concentrated on fewer users.
- Support signals (from Zendesk/Intercom): Increasing ticket volume, escalation frequency, negative sentiment trends, unresolved critical tickets, decreasing response engagement.
- CRM signals (from Salesforce/HubSpot): Champion departure, reduced QBR attendance, stakeholder changes, competitive evaluation mentions, contract renegotiation requests.
- Expansion signals (cross-system): Usage approaching tier limits + CSM conversation about growth + no open support escalations = high expansion probability. New product feature adoption + billing inquiry about additional products + increasing user count = cross-sell opportunity.
Accuracy range: ±3–6% variance when signals from 4+ systems are correlated. This is the method that delivers the 95% accuracy boards want for monthly NRR predictions.
Best for: Companies that need board-level accuracy and have data in 4+ systems. This method requires either a dedicated data engineering effort or a platform that handles cross-system signal correlation automatically. Eru is purpose-built for this approach — it connects to your billing, CRM, support, and product analytics tools, automatically resolves entities across systems using AI, and produces account-level NRR forecasts from correlated leading indicators.
Build vs Buy: Snowflake + dbt vs a Dedicated NRR Forecasting Platform
This is the decision that defines your NRR forecasting infrastructure for the next 2–3 years. Both paths can produce accurate forecasts. The question is which path makes sense given your team, timeline, and data complexity.
Building NRR Forecasting in Snowflake + dbt
If you already have a data warehouse with billing, CRM, and product data flowing in, building an NRR forecasting model in Snowflake + dbt gives you complete control over the logic.
What the build involves:
- Data ingestion and normalisation (2–4 weeks): Set up Fivetran or Airbyte connectors for Stripe, Salesforce, Zendesk, and your product analytics platform. Write dbt staging models to normalise each source into a consistent schema. Handle edge cases: multi-currency billing, mid-cycle plan changes, credits and refunds, custom Salesforce fields.
- Entity resolution (2–3 weeks): Build the logic to match customers across systems. Stripe's `customer_id` doesn't match Salesforce's `Account ID`. You need matching logic across email addresses, domain names, company names (with fuzzy matching for variants), and potentially custom fields. This is the hardest part of the build and the most common source of data quality issues.
- Metric calculation models (1–2 weeks): Write dbt models for MRR/ARR calculation, gross retention, net retention, expansion, contraction, and churn — all at the account level with consistent cohort definitions.
- Risk scoring and forecasting (2–3 weeks): Build the scoring model that combines signals from each source into account-level risk and expansion scores. Implement the forecasting logic (cohort-based, probabilistic, or leading indicator — depending on your data maturity).
- Visualisation and alerting (1–2 weeks): Build dashboards in Looker, Metabase, or Hex. Set up alerting for risk threshold breaches. Create board-ready export templates.
Total build time: 8–14 weeks with a dedicated data engineer.
Ongoing maintenance: 10–20 hours per month. Schema changes in source systems break dbt models. New pricing tiers require model updates. Entity resolution rules need periodic tuning as your customer base grows and naming patterns shift.
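To make the entity-resolution step concrete, here is a minimal matching cascade using only the standard library: exact email-domain match first, then fuzzy company-name match as a fallback. The field names and the 0.85 similarity threshold are illustrative assumptions, not a prescription:

```python
from difflib import SequenceMatcher

# Entity-resolution sketch: map Stripe customers to Salesforce accounts.
# Tier 1: exact email-domain match. Tier 2: fuzzy company-name match.
# Field names and the 0.85 threshold are illustrative.

def normalize(name):
    for suffix in (" inc.", " inc", " ltd.", " ltd", " llc", ","):
        name = name.lower().replace(suffix, "")
    return name.strip()

def match_accounts(stripe_customers, sf_accounts, threshold=0.85):
    by_domain = {a["domain"]: a["id"] for a in sf_accounts}
    matches = {}
    for c in stripe_customers:
        domain = c["email"].split("@")[-1]
        if domain in by_domain:                       # tier 1: exact domain
            matches[c["id"]] = by_domain[domain]
            continue
        best, best_score = None, threshold            # tier 2: fuzzy name
        for a in sf_accounts:
            score = SequenceMatcher(None, normalize(c["name"]),
                                    normalize(a["name"])).ratio()
            if score > best_score:
                best, best_score = a["id"], score
        matches[c["id"]] = best                       # None = route to manual review
    return matches

stripe = [{"id": "cus_1", "email": "jo@acme.com",  "name": "Acme Inc."},
          {"id": "cus_2", "email": "kim@gmail.com", "name": "Globex Ltd"}]
sf = [{"id": "001A", "domain": "acme.com",  "name": "Acme"},
      {"id": "001B", "domain": "globex.io", "name": "Globex"}]
print(match_accounts(stripe, sf))
```

Even this toy version shows why the step is the most common source of data quality issues: personal email domains defeat tier 1, and name variants ("Acme", "Acme Inc.", "ACME Corp") force a threshold choice that needs periodic retuning.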
When building makes sense:
- You have a dedicated data engineer (or team) who will own this long-term.
- You need highly custom logic that no off-the-shelf tool supports (e.g., proprietary usage metrics, custom pricing models).
- You already have most of your data flowing into Snowflake and the incremental effort is mainly the forecasting layer.
- You want full control over the model and are willing to invest in maintaining it.
Using a Dedicated NRR Forecasting Platform
Dedicated platforms handle the data integration, entity resolution, metric calculation, and forecasting logic out of the box. The trade-off is less customisation in exchange for dramatically faster time-to-value and lower maintenance burden.
What the implementation involves:
- Connect data sources (minutes to hours): OAuth connections to Stripe, Salesforce, HubSpot, Zendesk, Intercom, Amplitude, Mixpanel. No ETL pipeline to build or maintain.
- Automated entity resolution (automatic): The platform maps customers across systems using AI-powered matching. No manual mapping rules to write or maintain.
- Metric calculation (automatic): MRR, ARR, GRR, NRR, cohort analysis, and segmentation are calculated from the connected source data. The reconciliation layer catches billing-CRM discrepancies that would silently corrupt your forecasts.
- Forecasting configuration (hours to days): Select your forecasting methodology, define custom segments if needed, and calibrate against historical data. Board-ready outputs are available immediately.
Total implementation time: 1–3 days for most Series B companies.
Ongoing maintenance: Near zero. The platform handles schema changes, entity resolution updates, and metric recalculation automatically.
When a dedicated platform makes sense:
- You need accurate NRR forecasting within weeks, not months.
- You don’t have a dedicated data engineer for retention analytics (or you’d rather they focus on product analytics).
- Your data lives in 4+ systems and cross-system entity resolution is the bottleneck.
- You want automated data reconciliation to catch billing-CRM discrepancies before they corrupt your forecasts.
- You need board-ready outputs without building custom dashboards.
Eru is designed specifically for this use case. It connects to your billing, CRM, support, and product analytics tools, performs AI-powered entity resolution, reconciles data across systems continuously, and produces account-level NRR forecasts from correlated cross-system signals — with board-ready outputs available from day one. For Series B companies at $10M–$50M ARR who need NRR accuracy without a full data engineering team, this is the fastest path to forecasts your board will trust.
Build vs Buy: Decision Framework
| Factor | Build (Snowflake + dbt) | Buy (Dedicated Platform) |
|---|---|---|
| Time to first forecast | 8–14 weeks | 1–3 days |
| Data engineering required | Dedicated engineer, ongoing | None |
| Entity resolution | Manual rules, ongoing tuning | AI-powered, automatic |
| Data reconciliation | Custom SQL, manual review | Continuous, automated |
| Maintenance burden | 10–20 hours/month | Near zero |
| Customisation | Unlimited | Configurable within platform |
| Annual cost ($15M ARR company) | $80K–$150K (engineer salary + Snowflake + tools) | $30K–$80K (platform fee) |
| Board-ready outputs | Custom build required | Built-in |
Most Series B companies that start with a warehouse build eventually supplement it with a dedicated platform for the cross-system correlation and entity resolution layer. Starting with a platform and adding custom warehouse models as needed is typically faster and cheaper than the reverse.
Predicting Expansion Revenue: The Part Most Forecasts Get Wrong
Retention is the defensive half of NRR. Expansion is the offensive half. And it’s where most forecasting models fail because expansion is harder to predict than churn.
Churn has strong negative signals: declining usage, support escalations, champion departure. Expansion signals are subtler and require correlation across systems to identify reliably.
The Five Expansion Revenue Signals
- Usage approaching tier limits. The account is at 80%+ of their seat allocation, API call limit, or storage quota. This is a billing-system signal (Stripe seat count approaching plan maximum) correlated with product analytics (active user count approaching licensed seats). When both signals align, expansion probability is high.
- Broadening feature adoption. The account has moved from using 2–3 core features to exploring 5–6, including features tied to premium tiers. This is a product analytics signal (Amplitude/Mixpanel feature breadth metrics) that indicates the account is finding additional value and may be ready for a tier upgrade.
- Growing user base within the account. New users from different departments or teams are being added. This is a product analytics signal (new user provisioning) correlated with CRM data (new contacts being added to the account). Cross-departmental adoption is one of the strongest expansion signals.
- Upsell conversations in CRM. CSM or AE has logged a meeting about additional products, higher tier, or expanded use cases. This is a CRM signal (Salesforce opportunity at upsell stage or meeting notes mentioning expansion) that converts to expansion revenue 40–60% of the time at Series B companies.
- Positive support momentum. The account has no open escalations, has high CSAT scores, and their support interactions are increasingly about advanced use cases rather than basic issues. This is a support signal (Zendesk/Intercom ticket categorisation and sentiment) that, when combined with usage growth, indicates healthy expansion potential.
How to Quantify Expansion Potential
For each account in your forecast cohort, score expansion potential on a 3-tier scale:
- High expansion (3+ signals present): Apply your historical expansion rate for accounts with similar signal profiles. Typical range: 20–40% revenue increase within the forecast period.
- Moderate expansion (1–2 signals present): Apply a reduced expansion rate. Typical range: 5–15% revenue increase.
- No expansion signal: Assume flat revenue or slight contraction based on your historical baseline for signal-neutral accounts.
The key insight is that expansion prediction requires signals from multiple systems. No single tool — not your billing system, not your CRM, not your product analytics — has all five signals. This is why cross-system platforms like Eru produce more accurate expansion forecasts: they correlate signals across billing, CRM, support, and product analytics to score expansion potential in a way that single-system tools cannot.
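The three-tier scoring above reduces to counting which of the five signals fire and mapping the count to a rate. The rates below are illustrative midpoints of the ranges given in the text:

```python
# Expansion-potential scoring sketch: count which of the five signals fire,
# map the count to a tier, and apply a tier-level expansion rate.
# Rates are illustrative midpoints of the ranges above.

EXPANSION_SIGNALS = [
    "usage_near_limit",      # >= 80% of seat / API / storage quota
    "broadening_adoption",   # feature breadth growing into premium tiers
    "growing_user_base",     # new departments provisioning users
    "crm_upsell_activity",   # logged upsell conversation or opportunity
    "positive_support",      # no escalations, high CSAT, advanced-use tickets
]

def expansion_tier(account_signals):
    hits = sum(1 for s in EXPANSION_SIGNALS if account_signals.get(s))
    if hits >= 3:
        return "high", 0.30      # 20-40% range -> 30% midpoint
    if hits >= 1:
        return "moderate", 0.10  # 5-15% range -> 10% midpoint
    return "none", 0.0

signals = {"usage_near_limit": True, "crm_upsell_activity": True,
           "growing_user_base": True}
tier, rate = expansion_tier(signals)
print(tier, rate)  # high 0.3
```

Note that each key in `EXPANSION_SIGNALS` depends on a different source system, which is the point of the section: no single tool can populate all five.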
Monthly NRR Reporting Cadence: What Your Board Expects
At Series B, your board expects NRR to be a standing agenda item. Not a quarterly surprise, but a monthly metric with clear methodology, trend analysis, and forward-looking projections. Here’s how to set up a monthly NRR reporting cadence that builds board confidence over time.
The Monthly NRR Package
Every month, your VP of RevOps or Finance should produce:
- NRR and GRR for the trailing month and trailing 12 months. Always present both. GRR shows the health of your base without the mask of expansion. A company with 120% NRR but 80% GRR is running on a treadmill.
- NRR waterfall. Starting ARR → Expansion → Contraction → Churn → Ending ARR. This breaks NRR into its components so the board can see where value is being created or lost. Present this as a visual waterfall chart, not a table.
- Cohort analysis. Show how retention evolves for each quarterly acquisition cohort. Improving cohorts over time signals that your product and onboarding are getting better. Degrading cohorts signal a problem that needs immediate attention.
- Forward-looking NRR forecast. Next-month and next-quarter NRR with a base case, upside case, and downside case. Include the key assumptions behind each scenario and flag which accounts are driving the most variance.
- Forecast accuracy tracking. Compare last month’s forecast to this month’s actuals. Show the variance by segment. This builds credibility — a forecast that improves each month is more valuable than one that’s occasionally right by luck.
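The waterfall in the package above reduces to four movements against starting ARR, and computing NRR and GRR from the same inputs keeps the two metrics reconciled by construction. The figures below are illustrative:

```python
# NRR waterfall sketch: Starting ARR -> Expansion -> Contraction -> Churn ->
# Ending ARR, with NRR and GRR derived from the same four inputs.
# All figures are illustrative.

def nrr_waterfall(starting_arr, expansion, contraction, churn):
    ending_arr = starting_arr + expansion - contraction - churn
    nrr = ending_arr / starting_arr
    grr = (starting_arr - contraction - churn) / starting_arr
    return {"ending_arr": ending_arr, "nrr": nrr, "grr": grr}

w = nrr_waterfall(starting_arr=10_000_000, expansion=1_500_000,
                  contraction=400_000, churn=600_000)
print(f"Ending ARR: ${w['ending_arr']:,}  NRR: {w['nrr']:.0%}  GRR: {w['grr']:.0%}")
```

This example also illustrates the GRR warning in the package: 105% NRR here masks a 90% GRR, and both numbers belong on the same slide.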
Achieving 95% Forecast Accuracy
When your board says they want “95% accuracy on monthly NRR predictions,” what they mean is: the forecast should be within 5 percentage points of actuals consistently. A forecast of 112% NRR that comes in at 108% is within tolerance. A forecast of 112% that comes in at 98% is a credibility-destroying miss.
Getting to ±5% accuracy requires:
- Reconciled base data. If your Stripe MRR doesn’t match your Salesforce ARR, every downstream metric is wrong. Data reconciliation is the foundation. Eru provides continuous billing-CRM reconciliation, catching discrepancies that would otherwise silently corrupt your NRR calculations.
- Account-level granularity. Portfolio-level averages hide the signal. You need to forecast at the account level and aggregate up, not forecast at the portfolio level and hope the averages hold.
- Cross-system signal correlation. Single-source signals (usage only, billing only, CRM only) produce forecasts with ±12–15% variance from actuals. Adding each additional signal source typically improves accuracy by 2–3 percentage points. Four sources get you to ±5–8%.
- Quarterly recalibration. Every quarter, compare your segment assumptions to actuals and adjust. The model should get measurably better with each iteration. If it doesn’t, your signal weights are wrong.
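A minimal accuracy-tracking log makes the recalibration loop concrete: record each month's forecast against actuals and flag anything outside the ±5-point tolerance. The tolerance and figures are illustrative:

```python
# Forecast-accuracy tracking sketch: log forecast vs actual NRR each month
# and flag misses outside the board's +/-5-point tolerance.
# Tolerance and sample figures are illustrative.

def accuracy_report(history, tolerance=5.0):
    """history: list of (month, forecast_nrr_pct, actual_nrr_pct)."""
    rows = []
    for month, forecast, actual in history:
        variance = actual - forecast
        rows.append({"month": month, "variance": variance,
                     "within_tolerance": abs(variance) <= tolerance})
    return rows

history = [("2024-01", 112.0, 108.0),   # -4 points: within tolerance
           ("2024-02", 110.0, 98.0)]    # -12 points: credibility problem
for row in accuracy_report(history):
    print(row)
```

Segmenting the same log by tier or cohort tells you which signal weights to adjust in the quarterly recalibration.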
How the Major NRR Forecasting Tools Compare
If you’re evaluating tools to help with NRR forecasting at the Series B stage, here’s an honest comparison of the major options and what each does well and where each falls short.
Eru
What it does well: Purpose-built for cross-system NRR forecasting at the $10M–$50M ARR stage. Connects directly to Stripe, Salesforce, HubSpot, Zendesk, Intercom, Amplitude, Mixpanel, and Snowflake via OAuth. AI-powered entity resolution maps customers across systems automatically. Continuous billing-CRM reconciliation catches data discrepancies before they corrupt forecasts. Account-level risk scoring from correlated cross-system signals. Board-ready NRR waterfall, cohort analysis, and forecast outputs available from day one. Designed for teams without dedicated data engineering — most implementations take under a day.
Where it falls short: Less customisable than a full warehouse build. If you need highly proprietary model logic that goes beyond configurable parameters, you may need to supplement with custom dbt models.
Best for: Series B companies at $10M–$50M ARR who need accurate NRR forecasting quickly and want their data team focused on product analytics rather than retention pipeline maintenance.
Clari
What it does well: Strong revenue intelligence platform with pipeline forecasting and deal inspection. Good CRM integration, particularly with Salesforce. Useful for revenue forecasting that blends new business pipeline with retention.
Where it falls short for NRR forecasting: Clari’s strength is pipeline and deal forecasting, not NRR-specific retention analytics. It lacks native billing system integration (no direct Stripe/Chargebee connection), doesn’t perform billing-CRM reconciliation, and doesn’t correlate product usage signals with retention outcomes. For NRR forecasting specifically, you’d still need additional tooling for the billing and usage data layers.
Best for: Companies that need unified pipeline + retention forecasting and have Salesforce as their primary data source.
Baremetrics
What it does well: Excellent billing analytics with direct Stripe, Chargebee, and Recurly integration. Clean MRR, ARR, churn, and retention dashboards. Easy to set up and understand. Good for subscription metrics monitoring.
Where it falls short for NRR forecasting: Baremetrics is billing-only. It doesn’t integrate CRM, support, or product usage data. This means its NRR calculations are accurate for billing data but miss the cross-system signals needed for forward-looking forecasts. No account-level risk scoring, no expansion prediction, and no multi-source signal correlation. It tells you what your NRR was, not what it will be.
Best for: Companies that need clean billing analytics dashboards and don’t yet need predictive NRR forecasting.
ChurnZero
What it does well: Customer success platform with health scoring, engagement tracking, and CS workflow automation. In-app engagement monitoring and playbook execution. Good segmentation capabilities within its own data.
Where it falls short for NRR forecasting: ChurnZero’s signals come primarily from product usage data within its own system. It doesn’t natively reconcile billing data against CRM contracts, and its cross-system data correlation is limited to what you push into it via integrations. The health scores are useful for CS team prioritisation but don’t translate directly into the kind of NRR forecasting model that produces board-level accuracy. No native billing-CRM reconciliation or revenue drift detection.
Best for: Customer success teams that need health scoring and playbook automation, with NRR forecasting handled separately.
Totango
What it does well: Enterprise customer success platform with health scoring, lifecycle management, and workflow automation. Configurable health score models. Good for large CS teams that need operational workflow tools.
Where it falls short for NRR forecasting: Totango’s health scores are rules-based and require manual configuration. It operates on data pushed into it rather than connecting directly to source systems for real-time correlation. Cross-system signal correlation is limited. No native billing-CRM reconciliation. The NRR forecasting capability is basic — portfolio-level rather than the account-level, multi-source approach needed for ±5% accuracy. Pricing can also be a factor: Totango’s enterprise tier typically runs $30K–$100K+ per year depending on account volume and feature tier, and often requires implementation support.
Best for: Enterprise CS teams that need lifecycle management and workflow automation, with NRR forecasting as a secondary use case.
Building Custom in Snowflake + dbt
What it does well: Maximum control and customisation. Can incorporate any data source including proprietary databases. No vendor dependency for your core retention analytics. Leverages existing data infrastructure investments.
Where it falls short for NRR forecasting: Requires 2–4 months of dedicated data engineering time. Entity resolution across systems is the hardest problem and requires ongoing maintenance. Schema changes in source systems break models. No built-in alerting, playbooks, or board-ready outputs — you need to build the presentation layer separately. The total cost of ownership (engineer time + Snowflake compute + ETL tools + BI platform) often exceeds $100K/year.
Best for: Companies with a dedicated data engineering team and highly custom requirements that no platform supports out of the box.
Tool Comparison Summary
| Capability | Eru | Clari | Baremetrics | ChurnZero | Totango |
|---|---|---|---|---|---|
| Multi-system data integration | Native (6+ systems) | CRM-focused | Billing only | Push-based | Push-based |
| Billing-CRM reconciliation | Automated, continuous | No | No | No | No |
| AI entity resolution | Yes, automatic | No | No | No | No |
| Account-level NRR forecasting | Yes, cross-system signals | Limited | No | Limited | Basic |
| Expansion revenue prediction | Multi-signal correlation | Pipeline-based | No | Usage-based only | Rules-based |
| Board-ready outputs | Built-in | Yes | Basic | No | Limited |
| Implementation time | <1 day | 2–6 weeks | Minutes | 2–8 weeks | 4–12 weeks |
| Best for | Series B, $10M–$50M ARR | Revenue forecasting + pipeline | Billing analytics | CS workflow | Enterprise CS ops |
Operational Setup: Implementing NRR Forecasting Step by Step
Regardless of which tool or approach you choose, the operational setup follows the same sequence. Here’s the step-by-step implementation plan.
Step 1: Audit Your Data Sources
Before building or buying anything, inventory every system that holds revenue-relevant data:
- Billing: Stripe, Chargebee, Recurly — subscription status, MRR, invoices, payment events.
- CRM: Salesforce, HubSpot — account records, opportunity values, contract terms, ARR fields.
- Support: Zendesk, Intercom — ticket volume, sentiment, escalation patterns.
- Product analytics: Amplitude, Mixpanel, or internal databases — usage frequency, feature adoption, user counts.
- CS tools: Any existing health scoring or customer success platform.
Map which NRR-relevant signals live in each system. Identify gaps — common ones at Series B include missing product usage data, incomplete CRM records, and no automated entity matching between billing and CRM.
Step 2: Reconcile Your Revenue Data
This step is non-negotiable. If your billing MRR doesn’t match your CRM ARR, every forecast built on that data will be wrong.
Run a one-time reconciliation between your billing system and CRM:
- Compare Stripe subscription MRR against Salesforce opportunity/contract ARR for every active account.
- Identify orphaned accounts (in billing but not CRM, or vice versa).
- Flag price mismatches, billing cycle discrepancies, and expansion revenue not reflected in CRM.
Eru automates this reconciliation continuously, catching discrepancies as they occur rather than in quarterly clean-up exercises.
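The one-time reconciliation above amounts to a set comparison plus a tolerance check on matched accounts. A minimal sketch, with illustrative account data and a 1% mismatch tolerance as assumptions:

```python
# Billing-vs-CRM reconciliation sketch: compare account-level annualised
# revenue from billing against CRM, flag orphans and price mismatches.
# Account data and the 1% tolerance are illustrative.

def reconcile(billing_arr, crm_arr, tolerance=0.01):
    """billing_arr / crm_arr: dicts of account_id -> annualised revenue."""
    issues = []
    for acct in billing_arr.keys() | crm_arr.keys():
        b, c = billing_arr.get(acct), crm_arr.get(acct)
        if b is None:
            issues.append((acct, "in CRM but not billing"))
        elif c is None:
            issues.append((acct, "in billing but not CRM"))
        elif abs(b - c) > tolerance * max(b, c):
            issues.append((acct, f"ARR mismatch: billing {b} vs CRM {c}"))
    return sorted(issues)

billing = {"acme": 60_000, "globex": 24_000, "initech": 12_000}
crm     = {"acme": 60_000, "globex": 30_000, "hooli":   18_000}
for acct, issue in reconcile(billing, crm):
    print(acct, "->", issue)
```

Every issue this surfaces (an orphaned account, a price mismatch) is revenue that would otherwise flow into the NRR calculation wrong.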
Step 3: Define Your NRR Calculation Methodology
Document and standardise your NRR calculation before forecasting from it:
- Cohort definition: Starting MRR for which set of accounts? Monthly cohort, quarterly cohort, or trailing 12-month?
- Expansion definition: What counts as expansion? Seat additions, tier upgrades, add-on products, usage-based overages? Where is each captured?
- Contraction definition: What counts as contraction? Seat reductions, tier downgrades, discount applications?
- Churn definition: When does an account count as churned? At contract end date, at last payment, or at explicit cancellation?
Write these definitions down and get sign-off from Finance and RevOps. Changing definitions mid-stream destroys board credibility.
Step 4: Build Your Account-Level Scoring Model
For each account, score two dimensions: retention risk and expansion potential.
Retention risk scoring (4–6 signals, weighted):
- Product usage trend (weight: 25%)
- Support ticket pattern (weight: 20%)
- Champion/stakeholder stability (weight: 20%)
- Billing health (weight: 15%)
- Engagement depth (weight: 10%)
- Contract signals (weight: 10%)
Expansion potential scoring (3–5 signals, weighted):
- Usage approaching tier limits (weight: 30%)
- Feature adoption breadth (weight: 25%)
- Growing user base (weight: 20%)
- CRM upsell signals (weight: 15%)
- Positive support momentum (weight: 10%)
Calibrate the weights against your historical data. The initial weights above are starting points — your actuals will tell you which signals are most predictive for your specific customer base.
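The weighted composite reduces to a dot product of signal scores and weights. A sketch using the starting retention weights from the list above, with an illustrative account whose signals are scored 0–100 (higher = riskier):

```python
# Weighted composite retention-risk score using the starting weights above.
# Signal scores (0-100, higher = riskier) for the sample account are
# illustrative; recalibrate weights quarterly against actuals.

RETENTION_WEIGHTS = {
    "usage_trend": 0.25, "support_pattern": 0.20, "champion_stability": 0.20,
    "billing_health": 0.15, "engagement_depth": 0.10, "contract_signals": 0.10,
}

def composite_score(signal_scores, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[s] * signal_scores.get(s, 0) for s in weights)

account = {"usage_trend": 80, "support_pattern": 60, "champion_stability": 90,
           "billing_health": 20, "engagement_depth": 50, "contract_signals": 30}
print(f"Retention risk score: {composite_score(account, RETENTION_WEIGHTS):.0f}")
```

The sum-to-one assertion matters in practice: quarterly weight adjustments that silently break the normalisation shift every account's score and corrupt the calibration curve.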
Step 5: Establish the Monthly Reporting Cadence
Set up a monthly rhythm:
- Week 1: Close the prior month’s NRR actuals. Reconcile billing vs CRM. Identify any data quality issues.
- Week 2: Compare forecast to actuals. Document variance by segment. Update scoring weights if needed.
- Week 3: Run the forward-looking forecast for the next month and quarter. Flag accounts driving the most variance between base and downside scenarios.
- Week 4: Package and present. NRR waterfall, cohort analysis, forecast with confidence range, and the 5–10 accounts that matter most to the forecast.
Step 6: Iterate and Improve
The first quarter of NRR forecasting will have wider variance than you want. That’s expected. The value is in the iteration: each quarter, you recalibrate your segment assumptions against actuals, adjust signal weights, and tighten the forecast range. By the third or fourth quarter, you should be consistently within ±5–8% of actuals — the range that maintains board credibility and supports confident resource allocation.
Bringing It Together
NRR forecasting at the Series B stage is not a data science project. It’s an operational discipline. The companies that get it right share three characteristics:
- They reconcile before they forecast. Accurate forecasts require accurate base data. Billing-CRM reconciliation is the unglamorous foundation that makes everything else possible.
- They connect signals across systems. Single-source forecasting ("our Stripe churn rate applied to next quarter") yields ±15% variance at best. Cross-system signal correlation gets you to ±5%.
- They iterate monthly. A forecast that improves every quarter is worth more than a model that was sophisticated once and never updated.
For Series B SaaS companies at $10M–$50M ARR, the build-vs-buy decision comes down to team capacity and timeline. If you have the data engineering resources and 3 months of runway before your board needs answers, building in Snowflake + dbt gives you maximum control. If you need accurate forecasts within weeks and want your team focused on the business rather than data pipeline maintenance, a purpose-built platform is the faster path.
Eru is designed for the latter case — cross-system NRR forecasting with AI-powered entity resolution, continuous data reconciliation, and board-ready outputs, operational in under a day. It’s the NRR forecasting infrastructure for companies that would rather spend their data engineering budget on product analytics than retention pipeline maintenance.
Eru connects your billing, CRM, support, and product analytics data to produce live NRR forecasts, account-level risk scores, and board-ready retention reports — built on reconciled data and cross-system leading indicators, not spreadsheet assumptions.
Book a churn audit →