Opinion · 7 min read

Leading vs lagging indicators in SaaS: what to track when revenue is too slow to teach you anything

TL;DR

Revenue is a lagging indicator: it reflects decisions you made 3–6 months ago. Early-stage SaaS founders who treat MRR as their primary feedback signal are flying blind. The fix is to identify two or three leading indicators (specific behaviours that predict retention) and track those weekly instead.

Key takeaways

  • Revenue lags by 3–6 months in most B2B SaaS: by the time churn appears in your MRR, the product decisions that caused it are long in the past and hard to untangle from everything else you changed.
  • Find your leading indicators by looking at what retained users did in their first 14 days that churned users didn’t — this is a cohort analysis you can run in Mixpanel, Amplitude, or even a Postgres query without a data team.
  • Activation rate (the percentage of new signups who complete a defined ‘aha moment’ action within 7 days) is one of the highest-signal leading indicators available to early-stage products — a 10-point improvement in activation typically translates to meaningfully better 90-day retention.
  • Treat leading indicators as hypotheses, not facts: if weekly active usage is up but 90-day retention doesn’t follow within two cohorts, your leading indicator is wrong and needs recalibration.
  • The practical workflow: track 2–3 leading indicators weekly, review lagging indicators monthly, and never make product priority decisions based on a lagging metric that hasn’t moved yet.

Your MRR chart is a history book, not a dashboard

Most early-stage SaaS founders check their MRR every week. Some check it every day. They watch the number tick up, celebrate each new customer, and treat a flat or declining month as a signal to act.

The problem is that MRR tells you what happened three to six months ago. Every subscription you’re collecting revenue from today represents a buying decision made weeks or months back, based on a product experience that may have already changed. Every cancellation showing up in your churn rate this month is a retention failure that was baked in during onboarding — when the user didn’t hit a meaningful outcome in their first two weeks and quietly stopped logging in.

By the time churn appears in your lagging metrics, the causal window has closed. You’ve shipped several releases since then. You’ve changed your onboarding flow, your pricing page, maybe your positioning. Attributing today’s churn to a specific product decision is archaeology, not analysis.

This is not an argument against tracking revenue. Revenue is the score. The argument is against using it as your primary feedback loop when you’re moving fast enough that the feedback needs to arrive in days, not months.

What lagging indicators are actually measuring

A lagging indicator is a metric that confirms a trend after it has happened. Revenue, churn rate, NPS, and annual contract value are all lagging. They’re accurate, they’re meaningful, and they’re always looking backwards.

The academic framing here comes from the balanced scorecard literature (Kaplan and Norton, “The Balanced Scorecard — Measures that Drive Performance”, Harvard Business Review, January–February 1992), which was originally built for manufacturing but maps directly onto SaaS. Their core insight was that financial results are outcomes of decisions made in prior periods — and that optimising for financial metrics alone produces short-term behaviour that destroys long-term performance.

In SaaS, this plays out as: founders over-invest in acquisition because it produces immediate MRR signal, while under-investing in activation and retention because the payoff is invisible for 60–120 days.

A common pattern in early-stage SaaS: significant effort goes into driving signups — writing, outreach, SEO — while MRR grows slowly enough to be difficult to interpret. It’s not until segmenting by activation behaviour (users who completed a full workflow in the first session vs. those who didn’t) that the retention split becomes clear. Users who activate in session one typically retain at roughly three times the rate of those who don’t. That signal is invisible in the MRR chart. It’s obvious in the cohort breakdown.

Causal chain to an MRR drop:

Product decision (Day 0) → Activation failure (Day 3) → Silent churn (Day 30) → MRR drop visible (Day 60–90)

How to find your actual leading indicators

A leading indicator is a behaviour that predicts a future outcome with enough lead time to act on it. The definition that matters in practice: a metric that, if you improve it this week, will show up in retention two to four weeks from now.

The standard approach for finding them is a retained-vs-churned cohort comparison. Take your 90-day retained users and your 90-day churned users. Find the behaviours in the first 14 days that differ most between the two groups. Those are your candidate leading indicators.

In Amplitude, this is a retention analysis broken down by first-session event completion. In Mixpanel, it’s a funnel report filtered to cohorts. In Postgres, it’s a join between your events table and a retention flag computed from last-seen dates. None of this requires a data team — it requires clean event tracking and a few hours.
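The same retained-vs-churned comparison can be sketched in plain Python over an event log, without any analytics tool. The schema here (tuples of user, event name, timestamp; a signup-date map) and the 90-day/14-day windows are illustrative assumptions, not a prescribed setup:

```python
from collections import Counter
from datetime import timedelta

def first_14_day_behaviour_diff(events, signup_dates):
    """Rank events by how much more often 90-day-retained users did them
    in their first 14 days, versus churned users.

    events: list of (user_id, event_name, timestamp)
    signup_dates: {user_id: signup_timestamp}
    """
    # Retention flag computed from last-seen dates, as described above.
    last_seen = {}
    for user, _, ts in events:
        last_seen[user] = max(last_seen.get(user, ts), ts)

    retained, churned = set(), set()
    for user, signup in signup_dates.items():
        if last_seen.get(user, signup) >= signup + timedelta(days=90):
            retained.add(user)
        else:
            churned.add(user)

    def per_user_rates(group):
        # Average count of each event per user, restricted to the first 14 days.
        counts = Counter()
        for user, name, ts in events:
            if user in group and ts <= signup_dates[user] + timedelta(days=14):
                counts[name] += 1
        return {name: n / max(len(group), 1) for name, n in counts.items()}

    r, c = per_user_rates(retained), per_user_rates(churned)
    diffs = {name: r.get(name, 0) - c.get(name, 0) for name in set(r) | set(c)}
    # Largest positive difference first: your candidate leading indicators.
    return sorted(diffs.items(), key=lambda kv: kv[1], reverse=True)
```

The events at the top of the ranking are the candidates worth turning into tracked metrics; the ones near zero are noise.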

Common leading indicators that actually predict retention across B2B SaaS:

Activation rate — the percentage of new signups who complete a specific ‘aha moment’ action within 7 days. The aha moment varies by product. For Slack, it was sending 2,000 messages as a team. For Dropbox, it was adding a file to a shared folder. For a B2B analytics tool, it might be creating and saving a first dashboard. The key is that it’s a specific, observable action — not “engagement” or “logged in twice.”
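As a concrete definition, the activation rate above reduces to a few lines. The event name and the 7-day window here are placeholders; substitute your own aha-moment event:

```python
from datetime import timedelta

AHA_EVENT = "dashboard_saved"  # placeholder: your product's specific 'aha' action

def activation_rate(signups, events, window_days=7):
    """Share of new signups that fire the aha event within the window.

    signups: {user_id: signup_timestamp}
    events: list of (user_id, event_name, timestamp)
    """
    activated = {
        user
        for user, name, ts in events
        if name == AHA_EVENT
        and user in signups
        and ts <= signups[user] + timedelta(days=window_days)
    }
    return len(activated) / len(signups) if signups else 0.0
```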

Feature adoption depth — how many distinct feature areas a user touches in their first 30 days. Single-feature users churn faster across almost every SaaS product I’ve seen benchmarked. Breadth of adoption signals integration into workflow.

Time-to-value — how long it takes a new user to get their first meaningful output from the product. Shorter is almost always better, but the useful leading indicator is the percentage of users who hit value within a defined window (e.g., within 48 hours of signup).

Weekly active usage in the first month — not just whether a user logs in, but whether they return in week 2 and week 3. Users who are active in three of their first four weeks retain at dramatically higher rates than those who use the product once and come back a month later.
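The "active in three of the first four weeks" test above is easy to compute per user. A minimal sketch, assuming you have each user's signup time and a list of their event timestamps:

```python
from datetime import timedelta

def weeks_active_in_first_month(signup, event_times):
    """Count the distinct weeks (0-3 after signup) in which the user did anything."""
    weeks = {
        (ts - signup).days // 7
        for ts in event_times
        if timedelta(0) <= ts - signup < timedelta(days=28)
    }
    return len(weeks)

def is_habitual(signup, event_times, threshold=3):
    # Active in at least three of the first four weeks, per the text above.
    return weeks_active_in_first_month(signup, event_times) >= threshold
```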

Leading vs lagging indicators:

Criteria             Revenue      Churn Rate   Activation Rate   Weekly Active Use
Feedback delay       3–6 months   1–3 months   Days              Days
Actionability        Weak         Weak         Strong            Strong
Predicts retention   No           No           Yes               Yes

The honest problem with leading indicators: they’re hypotheses

The strongest objection to building your decision-making around leading indicators is that they’re correlations, not causation, and they can mislead you badly if you treat them as facts.

Here’s a real failure mode: you identify that users who create three or more projects in their first week retain better at 90 days. You redesign onboarding to push users toward creating multiple projects immediately. You see your “activation” metric go up. Then 90-day retention for those cohorts comes in flat or worse.

What happened? The leading indicator was measuring a symptom of engaged users, not causing retention. Users who naturally created three projects were more motivated to begin with. Forcing project creation during onboarding didn’t replicate the motivation — it just added friction for users who weren’t ready.

This is Goodhart’s Law in product form: when a measure becomes a target, it ceases to be a good measure.

The way to avoid this is to treat every leading indicator as a hypothesis with a falsification condition. “If I improve activation rate by 10 points for the next two cohorts, I expect 90-day retention to improve by at least 5 points.” If it doesn’t, the indicator is wrong and needs replacement. You’re not looking for metrics that feel good to track. You’re looking for metrics that predict the thing you care about and remain predictive after you start optimising for them.
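That falsification condition can be written down as a check rather than left as a vibe. The 10-point and 5-point thresholds below are the illustrative numbers from the hypothesis above, not universal constants:

```python
def leading_indicator_holds(baseline, treated,
                            min_activation_lift=10.0,
                            min_retention_lift=5.0):
    """Evaluate the hypothesis: 'an activation lift of >= 10 points should
    bring a 90-day retention lift of >= 5 points.'

    baseline, treated: (activation_pct, retention_pct) for each cohort.
    Returns True/False, or None if activation didn't move enough to test it.
    """
    activation_lift = treated[0] - baseline[0]
    retention_lift = treated[1] - baseline[1]
    if activation_lift < min_activation_lift:
        return None  # inconclusive: the intervention didn't shift activation
    return retention_lift >= min_retention_lift
```

A `False` result after two cohorts is the signal to replace the indicator, exactly as the text prescribes; a `None` means you haven't actually run the test yet.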

The practical workflow: weekly leading, monthly lagging

The system that actually works for early-stage SaaS is simple but requires discipline to maintain.

Pick two or three leading indicators based on your retained-vs-churned cohort analysis. Track them weekly. Make product priority decisions based on them — if activation rate is dropping, that’s a higher-priority signal than a flat MRR week.

Review lagging indicators monthly. Use them to validate or invalidate your leading indicator hypotheses. If you’ve been improving activation rate for three months and 90-day retention hasn’t moved, you’re either measuring activation wrong or activation isn’t the bottleneck.

Never make a product decision based solely on a lagging metric that hasn’t moved yet. “MRR is flat so we need to do something” is not a diagnosis — it’s panic expressed as urgency. The useful question is always: which leading indicator explains the lagging result, and what’s the earliest point in the funnel where we can see the problem?

Revenue is the outcome you’re building toward. But in the early stage, when you’re making product decisions weekly and shipping fast enough that the causal chain gets tangled, MRR tells you almost nothing about whether what you’re doing right now is working. The leading indicators tell you that. And you need to know it before the lagging metrics confirm it, because by then it’s already history.

Product, measurement, and decision quality