Connecting Eru to Your Snowflake Data Warehouse for Churn Prediction
Authentication, schema mapping, and dbt model compatibility — how to use Eru alongside your existing Snowflake + dbt stack for cross-system churn prediction.
Who this guide is for
This guide is for data leads and data engineers at B2B SaaS companies who have an existing Snowflake data warehouse — often with dbt for transformations — and want to add cross-system churn prediction without replacing their current infrastructure. If you’re evaluating whether to build churn models entirely in dbt or use a dedicated platform, see our build-vs-buy comparison first.
This guide assumes you have:
- A Snowflake account with data loaded from at least one source (Fivetran, Airbyte, Stitch, or custom ELT)
- Admin access to create Snowflake users and roles
- Optionally, dbt models that transform raw data into analytics-ready tables
What Eru handles vs what stays in your warehouse
The most common question from data leads considering Eru: “We’ve already invested in Snowflake and dbt. What exactly does Eru add, and what does it replace?”
The answer: Eru replaces nothing in your warehouse. It adds a set of cross-system capabilities that are expensive to build and maintain in dbt.
| Capability | Stays in your Snowflake + dbt stack | Eru handles |
|---|---|---|
| Raw data ingestion (ELT) | Fivetran, Airbyte, Stitch, or custom connectors load data into Snowflake | — |
| Data cleaning and staging | dbt staging models clean, deduplicate, and standardise raw data | — |
| Custom feature engineering | dbt models calculate rolling usage averages, cohort metrics, proprietary features | — |
| MRR / ARR calculations | Your dbt models define how MRR is calculated from billing data | — |
| Historical analytics and reporting | Looker, Metabase, Hex, or custom dashboards query Snowflake directly | — |
| Cross-system entity resolution | — | ✓ AI-powered matching of customer identities across Snowflake, CRM, billing, and support |
| Billing–CRM reconciliation | — | ✓ Continuous reconciliation of Stripe/Chargebee against Salesforce/HubSpot |
| Cross-system churn signal correlation | — | ✓ Correlating usage decline + billing discrepancy + support escalation for the same account |
| Real-time event monitoring | — | ✓ Webhook-based monitoring of billing events, CRM changes, and support escalations |
| Data freshness and drift detection | — | ✓ Alerts when warehouse data lags behind live SaaS systems or when transformations produce unexpected results |
In short: your Snowflake + dbt stack handles batch analytics, custom features, and historical reporting. Eru handles the cross-system connectivity, entity resolution, and real-time signal correlation that are most expensive to build and maintain as custom dbt models.
Step 1: Create a read-only Snowflake role and user
Eru connects to Snowflake with a read-only user. It never writes data, modifies schemas, or executes DDL statements. Create a dedicated role and user with minimal permissions:
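A minimal sketch of that setup, assuming the `eru_reader` role and `eru_user` user names used elsewhere in this guide, a database named `analytics`, and an existing `compute_wh` warehouse (adapt the names to your conventions):

```sql
-- Dedicated read-only role for Eru (names are illustrative)
CREATE ROLE IF NOT EXISTS eru_reader;

-- Read-only access to the analytics database and everything in it
GRANT USAGE ON DATABASE analytics TO ROLE eru_reader;
GRANT USAGE ON ALL SCHEMAS IN DATABASE analytics TO ROLE eru_reader;
GRANT SELECT ON ALL TABLES IN DATABASE analytics TO ROLE eru_reader;
GRANT SELECT ON ALL VIEWS IN DATABASE analytics TO ROLE eru_reader;

-- Future grants so new dbt models are readable without manual grants
GRANT SELECT ON FUTURE TABLES IN DATABASE analytics TO ROLE eru_reader;
GRANT SELECT ON FUTURE VIEWS IN DATABASE analytics TO ROLE eru_reader;

-- Dedicated user for Eru; replace the placeholder password,
-- or use key pair authentication (see Step 2) instead
CREATE USER IF NOT EXISTS eru_user
  PASSWORD = '<replace-me>'
  DEFAULT_ROLE = eru_reader
  DEFAULT_WAREHOUSE = compute_wh;
GRANT ROLE eru_reader TO USER eru_user;

-- Allow the role to run queries on your chosen warehouse
GRANT USAGE ON WAREHOUSE compute_wh TO ROLE eru_reader;
```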
If your dbt models span multiple databases (e.g., a separate raw database for Fivetran data and an analytics database for transformed models), grant USAGE and SELECT on each database.
Optional: dedicated warehouse for Eru
For cost isolation, create a dedicated X-Small warehouse for Eru’s read queries. This makes it easy to monitor and cap Eru’s Snowflake compute costs separately from your other workloads.
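A sketch of such a warehouse (the `eru_wh` name and `AUTO_SUSPEND = 60` setting match the cost figures quoted in the FAQ below; values are illustrative):

```sql
-- Illustrative: an X-Small warehouse dedicated to Eru's read queries
CREATE WAREHOUSE IF NOT EXISTS eru_wh
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 60          -- suspend after 60 seconds idle to minimise cost
  AUTO_RESUME = TRUE
  INITIALLY_SUSPENDED = TRUE;

GRANT USAGE ON WAREHOUSE eru_wh TO ROLE eru_reader;
```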
Step 2: Choose an authentication method
Eru supports three authentication methods for Snowflake. Choose the one that fits your security requirements.
Option A: Username and password
The simplest option. Enter the username and password in Eru’s connection form. Credentials are stored encrypted in AWS Secrets Manager and never sent to LLMs or logged.
Best for: initial setup, development environments, teams without strict key management policies.
Option B: Key pair authentication (recommended)
Generate an RSA key pair and assign the public key to the Snowflake user. Upload the private key to Eru. This eliminates password rotation overhead and provides stronger authentication.
Best for: production environments, teams with key management infrastructure.
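One way to generate the pair, following Snowflake's standard key-pair flow (shown here with an unencrypted PKCS#8 private key for simplicity; use an encrypted key if your policy requires it):

```shell
# Generate a 2048-bit RSA private key in PKCS#8 format (unencrypted here;
# swap -nocrypt for -v2 aes-256-cbc to produce an encrypted key)
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt

# Derive the public key to register with the Snowflake user
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
```

In Snowflake, assign the public key to the user with `ALTER USER eru_user SET RSA_PUBLIC_KEY = '<key>'` (the base64 body of `rsa_key.pub`, without the PEM header and footer lines), then upload `rsa_key.p8` to Eru.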
Option C: OAuth (external OAuth)
Use your identity provider (Okta, Azure AD, or similar) for token-based access via Snowflake’s external OAuth integration. Eru refreshes tokens automatically.
Best for: enterprise environments with centralised identity management and SSO requirements.
Step 3: Connect in Eru
In Eru, navigate to Integrations → Snowflake and enter your connection details:
- Account identifier — your Snowflake account locator (e.g., `xy12345.eu-west-1` or `myorg-myaccount`)
- Warehouse — the warehouse Eru should use for queries (e.g., `compute_wh` or `eru_wh`)
- Database — the primary database containing your dbt models (e.g., `analytics`)
- Authentication — credentials for the method you chose in Step 2
Eru validates the connection by running a test query (SELECT 1) and verifying it can read schema metadata from the specified database. If validation fails, check that the eru_reader role has USAGE on the warehouse and database.
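To confirm the grants from the Snowflake side, you can inspect what the role and user were actually given (assuming the `eru_reader` / `eru_user` names from Step 1):

```sql
SHOW GRANTS TO ROLE eru_reader;  -- expect USAGE on warehouse and database, SELECT on tables/views
SHOW GRANTS TO USER eru_user;    -- expect ROLE eru_reader
```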
Step 4: Schema mapping
Once connected, Eru scans your Snowflake schema and discovers all accessible tables, views, and materialised dbt models. It builds a model of your warehouse structure and proposes entity mappings.
What Eru discovers
- Tables and views — all accessible tables with column names, data types, and sample data (anonymised, PII-redacted)
- dbt model layers — staging (`stg_`), intermediate (`int_`), and mart (`dim_`, `fct_`) models with lineage relationships preserved
- Relationships — foreign keys, join patterns, and entity relationships inferred from naming conventions, column types, and data sampling
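To make the convention-based part of this concrete, here is a simplified sketch of prefix-based classification of dbt model names. This is illustrative only, not Eru's actual implementation, which also uses column types and data sampling:

```python
# Simplified sketch: classify dbt models by their naming-convention prefix.
# Eru's real discovery combines this with column types and sampled data.
DBT_LAYER_PREFIXES = {
    "stg_": "staging",
    "int_": "intermediate",
    "dim_": "mart (dimension)",
    "fct_": "mart (fact)",
}

def classify_dbt_model(name: str) -> str:
    """Return the dbt layer a model name suggests, or 'unknown'."""
    for prefix, layer in DBT_LAYER_PREFIXES.items():
        if name.startswith(prefix):
            return layer
    return "unknown"

print(classify_dbt_model("stg_stripe__customers"))  # staging
print(classify_dbt_model("fct_mrr"))                # mart (fact)
print(classify_dbt_model("customers_raw"))          # unknown
```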
Entity mapping examples
| Your Snowflake table / dbt model | Eru entity |
|---|---|
| `dim_customers` or `stg_stripe__customers` | Customer |
| `fct_subscriptions` or `stg_stripe__subscriptions` | Subscription |
| `fct_invoices` or `stg_stripe__invoices` | Invoice |
| `fct_mrr` or `int_mrr_monthly` | MRR Metric |
| `fct_product_usage` or `stg_mixpanel__events` | Usage Event |
Eru proposes mappings automatically. You review and confirm them, adjusting any that don’t match your schema conventions. Once confirmed, these mappings feed Eru’s cross-system entity resolution — linking your Snowflake customer records to their counterparts in Salesforce, Stripe, and Intercom.
Step 5: Using Eru alongside dbt
This is the critical section for data teams with an existing dbt setup. Eru does not replace your dbt models — it reads from them.
How existing dbt churn models feed Eru’s scoring engine
If you already have dbt models that calculate churn-predictive features, Eru uses them as input signals. Common examples:
Usage aggregations
dbt models that calculate rolling averages of logins, feature usage, session depth, or API calls. Eru reads these as usage signals and correlates them with billing and CRM data.
Payment and billing features
dbt models that track payment failure rates, discount percentages, contract values, or MRR changes. Eru cross-references these with live Stripe data to catch discrepancies.
Engagement scores
Custom health scores or engagement metrics calculated in dbt. Eru treats these as one signal among many, enriching them with CRM and support signals your warehouse doesn’t see.
Cohort and segmentation models
Customer segments, cohort assignments, or lifecycle stages defined in dbt. Eru uses these to contextualise churn signals — a usage drop means different things for a new customer vs a mature account.
What Eru adds on top of your dbt models
Your dbt models are excellent at batch analytics within your warehouse. Eru adds the capabilities that are most expensive to replicate in dbt:
- Cross-system entity resolution — Matching a Snowflake `customer_id` to a Salesforce `Account ID`, a Stripe `cus_` ID, and an Intercom `user_id`. This is the problem that takes 4–8 weeks to solve in dbt and requires ongoing maintenance as identifiers change.
- Real-time signal correlation — dbt models run on batch-loaded data (every 1–6 hours via Fivetran or Airbyte). Eru monitors live events via webhooks (Stripe payment failures, Salesforce opportunity changes, Intercom escalations) and correlates them with your warehouse-based features in near real time.
- Billing–CRM reconciliation — Continuous verification that Stripe billing data matches CRM records. Discrepancies between systems — different MRR values, mismatched contract dates, orphaned subscriptions — are surfaced as churn risk signals.
- Transformation verification — Eru can verify that your dbt transformations produce expected results by comparing model outputs against source system data. If your `fct_mrr` model diverges from live Stripe MRR, Eru flags it.
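As a sketch of what transformation verification means in practice, consider comparing a warehouse-computed MRR figure against a live source value with a tolerance threshold. This is illustrative only; Eru's own checks are richer than a single percentage comparison:

```python
def mrr_diverges(warehouse_mrr: float, source_mrr: float,
                 tolerance_pct: float = 1.0) -> bool:
    """Flag a divergence when the warehouse figure (e.g. a dbt fct_mrr
    output) differs from the live source (e.g. Stripe) by more than
    tolerance_pct percent of the source value."""
    if source_mrr == 0:
        return warehouse_mrr != 0
    drift_pct = abs(warehouse_mrr - source_mrr) / abs(source_mrr) * 100
    return drift_pct > tolerance_pct

# A 0.2% difference passes; a 5% difference is flagged as drift
print(mrr_diverges(49_900, 50_000))  # False
print(mrr_diverges(47_500, 50_000))  # True
```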
Data flow
Data flows through the combined Snowflake + dbt + Eru stack in three stages: your ELT tools (Fivetran, Airbyte, Stitch, or custom connectors) load raw data into Snowflake; dbt transforms it into staging, intermediate, and mart models; and Eru reads those models, correlates them with live webhook events from your billing, CRM, and support systems, and surfaces cross-system churn signals.
Eru vs ChurnZero, Totango, Catalyst, and ClientSuccess for Snowflake users
If you’re evaluating Eru against dedicated customer success platforms, here’s how they compare specifically for teams with an existing Snowflake + dbt stack:
| Capability | ChurnZero | Totango | Catalyst | ClientSuccess | Eru |
|---|---|---|---|---|---|
| Reads from your Snowflake dbt models | — | Import only | Limited | — | ✓ |
| Uses your dbt models as source of truth | — | — | — | — | ✓ |
| Creates a separate data model / silo | Yes | Yes | Yes | Yes | No |
| Cross-system entity resolution | CRM-based | Built-in (limited) | CRM-based | CRM-based | ✓ AI-powered |
| Billing–CRM reconciliation | — | — | — | — | ✓ |
| Real-time event monitoring (webhooks) | Product events only | Product events only | Limited | — | ✓ Billing + CRM + support |
| Transformation verification (dbt output vs source) | — | — | — | — | ✓ |
| Data stays in your warehouse | — | — | — | — | ✓ Reads, never copies |
The key difference: ChurnZero, Totango, Catalyst, and ClientSuccess import data from your systems and maintain their own data model. For data teams that have spent months centralising data in Snowflake, this creates a “which number is right?” problem when metrics diverge. Eru reads from your warehouse and adds cross-system intelligence without creating a competing version of your data.
Security and data handling
What Eru reads from Snowflake
- Schema metadata (table names, column names, data types)
- Anonymised data samples (5–10 rows, PII redacted) for entity mapping
- Aggregated metrics from your dbt models (counts, averages, sums)
- Optionally, query history for understanding table usage patterns
What Eru never does
- Writes data to your Snowflake warehouse
- Modifies schemas, tables, or views
- Sends raw PII or credentials to LLMs
- Stores full table dumps — only metadata and aggregates
- Shares your data with other tenants
Credential storage
Snowflake credentials are stored in AWS Secrets Manager with AES-256 encryption. They are only accessed by Eru’s runtime layer (which executes queries) and never sent to the AI agent layer (which plans and reasons). See our security documentation for the full LLM data boundary design.
Frequently asked questions
Does Eru replace our dbt models?
No. Eru reads from your dbt models and treats them as input signals. Your staging models, intermediate transformations, and mart tables remain unchanged. Eru adds cross-system capabilities (entity resolution, billing reconciliation, real-time signal correlation) on top of your existing warehouse analytics.
How much Snowflake compute does Eru use?
Eru’s read queries are lightweight — primarily schema metadata queries, aggregations, and small sample queries. On a dedicated X-Small warehouse with AUTO_SUSPEND = 60, most teams see $50–$200/month in additional Snowflake compute. Creating a dedicated eru_wh warehouse makes these costs easy to track and cap.
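If you want a hard cap rather than just visibility, a Snowflake resource monitor on the dedicated warehouse can suspend it at a credit budget. A sketch with illustrative values:

```sql
-- Illustrative: cap the Eru warehouse at 10 credits/month,
-- notify at 80% and suspend it when the quota is exhausted
CREATE OR REPLACE RESOURCE MONITOR eru_monitor
  WITH CREDIT_QUOTA = 10
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

ALTER WAREHOUSE eru_wh SET RESOURCE_MONITOR = eru_monitor;
```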
Can Eru work without dbt?
Yes. Eru discovers and maps raw tables and views regardless of whether they were created by dbt. dbt models simply provide better-organised inputs because they follow consistent naming conventions and represent well-defined business entities.
What if our dbt models change?
Eru re-scans your schema periodically and detects new or modified models. If a mapped model is renamed or dropped, Eru flags the broken mapping and prompts you to update it. The GRANT SELECT ON FUTURE TABLES permission ensures Eru can access new models without manual grants.
Does Eru work with Snowflake on any cloud provider?
Yes. Eru supports Snowflake on AWS, Azure, and GCP. Enter your account identifier as provided by Snowflake (e.g., xy12345.us-east-1.aws, xy12345.west-us-2.azure, or xy12345.us-central1.gcp).
Related resources
Connect your Snowflake warehouse to Eru
Layer cross-system churn intelligence on your existing dbt models — first signals within days.
Book a Demo