
Designing reports that force decisions: how to structure outputs so "next action" is unavoidable

TL;DR

A report earns its existence only when the reader cannot finish it without knowing the one thing they must do next — structure everything else around surfacing that action.

Key takeaways

  • Put the recommended action in the first 3 sentences, not the conclusion — if the reader needs to scroll to find it, the structure has failed.
  • Every metric in the report should be tied to a threshold: "conversion is 2.1%" means nothing; "2.1% against a 3.0% target with a 6-week trend downward" forces a decision.
  • Kill the executive summary that restates the methodology — replace it with a single sentence that names the decision the data demands.
  • Limit each report to one primary recommended action; reports with three equally-weighted recommendations produce zero decisions.
  • If your report ends with 'we'll monitor this closely', the report has not done its job — reframe the data until a concrete action becomes unavoidable.

Most reports exist to justify the meeting, not to change anything

The average product analytics report ends the same way regardless of what the data shows: a restatement of findings, a note that "we'll continue to monitor", and a slide deck that gets archived within a week.

This is not a data quality problem. The underlying numbers are often perfectly accurate. It's a structural problem — the report was designed to demonstrate that measurement happened, not to produce a decision.

My thesis is blunt: a report that doesn't make "what do we do next?" obvious has failed at its only job. Accuracy, visualisation quality, and analytical rigour are all table stakes. The question is whether the person reading finishes with their hand forced — whether the structure makes inaction uncomfortable.

Most don't. And the reason is almost always that the report was structured to present information rather than to compel action.

The pattern shows up consistently: good data, no decision

I've worked through this cycle across enough products to recognise the shape of it. A team runs a careful A/B test for 3 weeks. The result is statistically significant — p < 0.01, 15% lift in activation. The report is thorough, well-visualised, covers segments and edge cases. It concludes with "results are promising; recommend considering a full rollout."

That phrase — "recommend considering" — is the tell. It's decision-avoidance dressed as analytical caution.

The reader should finish that report with one forced binary: ship it or don't. Every sentence in the report exists to build the case for one of those two outcomes. If the wording allows a third option — the "let's discuss further" option — the structure has given people an escape hatch.

The same pattern appears in monthly business reviews. A deck with 22 slides, eight of which show metrics moving in the wrong direction, and a recommendation slide that says "optimise the onboarding funnel." That's not a recommendation. That's a category. The decision buried inside it — specifically, remove steps 3 and 4 of the onboarding sequence because 67% of users abandon at step 3 — is somewhere in slide 11, if anyone reads that far.

[Diagram: Data collected → Analysis → Report structure, branching to "Action-forcing" (reader acts) or "Information-only" (reader defers)]

Three structural changes that make inaction harder

1. Front-load the recommended action, not the methodology.

The instinct to open with context — "in this period we measured X using Y methodology across Z cohorts" — optimises for credibility, not decisions. A reader who needs to make a call shouldn't have to vet the methodology before seeing the conclusion. Put the recommended action in the first paragraph. One sentence. Then spend the rest of the report justifying it.

Tools like Metabase and Looker let you pin text blocks to the top of dashboards. Use that space for "Based on this week's data, we should [specific action] because [specific threshold was crossed]." Not a summary. Not context. The action.
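
To make the shape of that pinned block concrete, here is a minimal sketch in Python. Nothing in it comes from Metabase's or Looker's actual APIs; action_headline and its parameters are illustrative, and the point is only that the headline is assembled from the data, so it cannot exist without naming an action and a threshold.

```python
# Hypothetical sketch, not any BI tool's API: the pinned headline is
# built from the data, so it can't be written without an action and
# the threshold that was crossed.

def action_headline(metric: str, value: float, target: float, action: str) -> str:
    """Build the one-sentence action statement pinned to the top of a dashboard."""
    direction = "below" if value < target else "above"
    return (
        f"Based on this week's data, we should {action} "
        f"because {metric} is {value:.1%}, {direction} the {target:.1%} target."
    )

print(action_headline("conversion", 0.021, 0.030, "pause the paid acquisition push"))
# Based on this week's data, we should pause the paid acquisition push
# because conversion is 2.1%, below the 3.0% target.
```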

2. Attach a threshold to every metric.

A metric without a threshold is a data point. It becomes a decision input only when it has a target, a historical baseline, or a stated acceptable range. "Churn is 4.2%" asks nothing of the reader. "Churn is 4.2% against a 3.5% target — this is the third consecutive month above threshold" makes inaction require a deliberate choice to tolerate the miss.

Instrument your reports with explicit threshold lines on every chart. Label them. When a metric crosses a threshold, the report should say so in plain language in the title of the section, not in a footnote.
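
As a sketch of what "no metric without a threshold" looks like when report text is generated programmatically (the Metric class is my own illustration, not any BI tool's schema, and the comparison assumes a lower-is-better metric like churn):

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str             # "Churn"
    value: float          # current value, e.g. 0.042
    target: float         # the stated acceptable level, e.g. 0.035
    months_breached: int  # consecutive periods past the target

    def section_title(self) -> str:
        # Lower-is-better comparison; flip the inequality for growth metrics.
        if self.value <= self.target:
            return f"{self.name}: {self.value:.1%}, within the {self.target:.1%} target"
        return (
            f"{self.name}: {self.value:.1%} against a {self.target:.1%} target, "
            f"month {self.months_breached} above threshold"
        )

print(Metric("Churn", 0.042, 0.035, months_breached=3).section_title())
# Churn: 4.2% against a 3.5% target, month 3 above threshold
```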

3. Limit the report to one primary recommended action.

Reports that surface three equally-weighted recommendations produce, on average, zero decisions. The reader either escalates to a committee or defers. One recommendation forces engagement: agree with it, or make an explicit case against it.

If the data genuinely supports multiple actions, rank them. "Primary recommendation: pause campaign B immediately. Secondary: review targeting parameters in campaign A after the pause." The reader can argue with the ranking. They cannot ignore it.
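
If your reports are assembled in code before publishing, the one-primary rule can even be enforced structurally rather than by convention. A hypothetical sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    action: str    # "pause campaign B"
    deadline: str  # "immediately", "end of quarter"
    primary: bool = False

@dataclass
class Report:
    title: str
    recommendations: list[Recommendation] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Refuse to build a report with zero or several primary actions:
        # the ranking argument happens at authoring time, not in the meeting.
        primaries = sum(r.primary for r in self.recommendations)
        if primaries != 1:
            raise ValueError(f"Exactly one primary recommendation required, got {primaries}")

Report("Week 42 campaign review", [
    Recommendation("pause campaign B", "immediately", primary=True),
    Recommendation("review targeting parameters in campaign A", "after the pause"),
])
```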

Criteria               | Info-presentation report | Action-forcing report
Opening paragraph      | Context + methodology    | Recommended action
Metric presentation    | Raw numbers              | Metric vs threshold
Recommendation format  | "Consider exploring"     | "Do X by date Y"
Reader exit state      | No decision              | Forced decision

The objection about nuance — and why it's usually wrong

The standard pushback is that forcing a single action oversimplifies genuinely complex situations. Some decisions require context. Some data is ambiguous. Demanding a single recommended action when the evidence is mixed is, the argument goes, intellectually dishonest.

This is a real concern in about 20% of cases. When the data is genuinely ambiguous — when confidence intervals overlap, when different segments tell contradictory stories — it's honest to say so. But "state the ambiguity clearly and recommend the next step for resolving it" is still an action-forcing structure. "We need 2 more weeks of data on segment B before we can confidently recommend rollout" forces a decision: approve the 2-week extension, or proceed without the data. That's better than "results are mixed; further investigation warranted."
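
Even the ambiguous case compiles down to a forced choice. A deliberately crude sketch; the interval logic here is illustrative, not statistically rigorous:

```python
def ambiguity_decision(ci_control: tuple[float, float],
                       ci_variant: tuple[float, float],
                       extra_weeks: int) -> str:
    # Overlapping confidence intervals: the honest output is still a
    # decision, just a different one (extend the test, or ship without
    # the extra certainty). The overlap check is deliberately naive.
    overlap = max(ci_control[0], ci_variant[0]) <= min(ci_control[1], ci_variant[1])
    if overlap:
        return (f"Decide: approve {extra_weeks} more weeks of data collection, "
                f"or proceed to rollout without it.")
    return "Intervals are separated: recommend rollout or kill outright."

print(ambiguity_decision((0.02, 0.05), (0.04, 0.08), extra_weeks=2))
# Decide: approve 2 more weeks of data collection, or proceed to rollout without it.
```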

The remaining 80% of the time, the "it's complex" objection is a proxy for something else: the report author is uncertain about organisational reception, or they're not confident enough in their interpretation to commit. Those are legitimate feelings. They don't belong in the report structure — they belong in a direct conversation before the report is published.

Reports that hedge because the author is anxious train readers to ignore recommendations. Every "consider exploring" and "may warrant further investigation" degrades the signal quality of the reports that follow.

The one-sentence test for every report you publish

Before you send the report, apply this test: finish the sentence "After reading this, the reader must ___."

If you can't complete that sentence with a specific, bounded action — not a category of activity, not a vague directive, but a concrete next step that a person can either take or explicitly refuse — the report isn't ready.
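
Part of this test can even be automated. A sketch of a hedge-phrase linter; the phrase list is a starter set of tells from this post, not any standard, so extend it with your organisation's own:

```python
HEDGES = (
    "consider exploring",
    "recommend considering",
    "we'll monitor",
    "continue to monitor",
    "may warrant further investigation",
    "strategic review may be appropriate",
)

def fails_one_sentence_test(recommendation: str) -> bool:
    """True if the recommendation hedges instead of naming a bounded action."""
    text = recommendation.lower()
    return any(phrase in text for phrase in HEDGES)

assert fails_one_sentence_test("Results are promising; recommend considering a full rollout")
assert not fails_one_sentence_test("Remove steps 3 and 4 of the onboarding sequence by end of sprint")
```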

This sounds simple. It isn't. It requires the report author to commit, on the record, to a position. That commitment is the point. Reports that don't commit don't change behaviour. Reports that do commit create accountability — for the recommendation and for the decision that follows from it.

The discomfort of writing "we should shut down this feature by end of quarter" instead of "results suggest a strategic review may be appropriate" is exactly the discomfort that produces useful reporting. Optimise for that discomfort. Make the next action unavoidable.
