FEATURES / PLAN VS REALITY

You planned 50 story points. You delivered 34. Where did 16 go?

Sprint planning happens on Monday. By Wednesday, unplanned work appears, priorities shift, people get pulled into incidents. By Friday, the plan bears little resemblance to reality. Most tools tell you the delivery percentage. Gitrevio tells you exactly where the drift happened and why.

The drift breakdown

# Sprint 42 — Plan vs Reality

Planned:   50 SP across 13 stories
Delivered: 34 SP across 9 stories
Delivery:  68%

WHERE THE 16 SP WENT:

  Unplanned work pulled in: +18 SP
    Auth hotfix (P0 incident)        8 SP
    CEO demo prep                    6 SP
    Security patch                   4 SP

  Planned work dropped: -16 SP
    Payment refactor (blocked)       8 SP → dependency on API team
    Search upgrade (deprioritized)   5 SP → PM decision mid-sprint
    Test migration (no capacity)     3 SP → Sarah pulled to auth fix

ESTIMATION ACCURACY:

  Delivered stories avg error: +21% (estimated 3.4 SP, actual 4.1 SP)
  Worst: "Update user settings" — planned 2 SP, actual 8 SP

PATTERN (last 6 sprints):

  Sprint 37: 72%   Sprint 38: 68%   Sprint 39: 81%
  Sprint 40: 77%   Sprint 41: 85%   Sprint 42: 68% ← regression

  Root cause: 3 unplanned P0/P1 items (vs avg 0.8)

This isn't blame. This is understanding. Every sprint, Gitrevio reconstructs what actually happened compared to what was planned — automatically, without anyone filling out a form or updating a status.

You see exactly where capacity went. Unplanned work that got pulled in. Planned work that got dropped or blocked. Estimation errors that compounded. The full picture, not just a number.

The goal is to learn from every sprint and improve the next one's plan. When you can see that three P0 incidents consumed 18 story points, you stop blaming the team for missing the target and start fixing the incident rate.
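The capacity accounting behind a drift breakdown is simple set arithmetic over planned and actual story lists. A minimal sketch, using an illustrative subset of the Sprint 42 example above (story names and point values are taken from the mockup, not from Gitrevio's actual data model):

```python
# Planned stories that drive the Sprint 42 drift (illustrative subset).
planned = {"payment refactor": 8, "search upgrade": 5,
           "test migration": 3, "user settings": 2}

# Planned work that shipped (user settings: planned 2 SP, took 8).
delivered = {"user settings": 8}

# Work pulled in mid-sprint that was never planned.
unplanned = {"auth hotfix": 8, "demo prep": 6, "security patch": 4}

# Dropped = planned stories with no delivered counterpart.
dropped = {story: sp for story, sp in planned.items() if story not in delivered}

print(f"dropped:   -{sum(dropped.values())} SP")   # the 16 SP that went missing
print(f"unplanned: +{sum(unplanned.values())} SP") # the 18 SP that displaced them
```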

What it tracks

Eight dimensions of sprint drift, measured continuously. Each one tells a different part of the story about why plans diverge from reality.

Scope creep
Stories added mid-sprint — who added them, why, and how much capacity they consumed
Blocked work
Stories that couldn't proceed — what blocked them, how long they waited, cascading effects
Estimation drift
Per-story estimation accuracy and systematic over/under patterns across the team
Unplanned work ratio
How much of actual work was unplanned, trending over time with alerting on spikes
Priority shifts
Stories deprioritized mid-sprint — by whom, why, and what replaced them
Carry-over patterns
Stories that roll from sprint to sprint repeatedly, signaling chronic planning issues
Individual calibration
Who estimates accurately, who consistently under- or overestimates, by how much
Cross-team dependencies
Work blocked by other teams — average wait time, worst offenders, recurring patterns
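One of these dimensions, the unplanned work ratio with spike alerting, can be sketched in a few lines. The threshold and figures below are illustrative assumptions, not Gitrevio's actual heuristics; the Sprint 42 numbers assume 52 SP actually worked (34 delivered + 18 unplanned):

```python
from statistics import fmean

def unplanned_ratio(unplanned_sp: float, total_sp: float) -> float:
    """Share of actually-worked SP that was never in the sprint plan."""
    return unplanned_sp / total_sp

def spike_alert(history: list[float], current: float, threshold: float = 1.5) -> bool:
    """Alert when the current ratio exceeds the historical mean by 50%."""
    return current > threshold * fmean(history)

# Sprint 42: 18 unplanned SP out of 52 SP actually worked.
current = unplanned_ratio(18, 52)  # ≈ 0.35

print(spike_alert([0.10, 0.15, 0.12, 0.18], current))  # → True
```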

Lognormal estimation calibration

Your team consistently underestimates by 22%. That's not a moral failing — humans are systematically bad at estimation. We anchor on the best case, ignore the tail risks, and forget that software tasks follow a lognormal distribution, not a normal one.

Gitrevio fits a lognormal distribution to your historical data and tells you what the realistic timeline is. Not a gut-feel buffer. Not "multiply by two." A statistically grounded correction based on your team's actual delivery patterns.

This is why your sprints slip. Not because your team is slow — because estimation is fundamentally asymmetric. A task can take 2x longer than expected but can never take negative time. Gitrevio corrects for the skew.

# Story estimation calibration — Backend team

Your estimates follow a lognormal distribution:

  μ = 1.2 (you estimate well for the median case)
  σ = 0.8 (but your variance is high)

When you say "3 story points":

  p50: 3.3 SP (actual median)
  p75: 5.7 SP (1 in 4 chance of exceeding)
  p90: 9.3 SP (1 in 10 chance of exceeding)

Recommendation: multiply estimates by 1.4x for planning
Or better: use Gitrevio's calibrated estimates automatically
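The fit itself is straightforward: if the actual/estimate ratio is lognormal, its logarithm is normal, so μ and σ are just the mean and standard deviation of the log-ratios. A minimal sketch with made-up history (the fitted parameters here will differ from the Backend-team figures in the report above):

```python
import math
from statistics import NormalDist, fmean, stdev

# Hypothetical per-story history: (estimated SP, actual SP).
history = [(3, 3.5), (2, 2.0), (5, 9.0), (3, 4.5), (1, 1.5),
           (8, 12.0), (2, 5.0), (3, 2.5), (5, 6.0), (2, 3.0)]

# Fit the lognormal: mu/sigma of the ratio are the mean and stdev
# of the log-ratios.
log_ratios = [math.log(actual / est) for est, actual in history]
mu, sigma = fmean(log_ratios), stdev(log_ratios)

def calibrated(estimate_sp: float, quantile: float) -> float:
    """SP value this estimate stays under with probability `quantile`."""
    z = NormalDist().inv_cdf(quantile)
    return estimate_sp * math.exp(mu + sigma * z)

for q in (0.50, 0.75, 0.90):
    print(f"p{int(q * 100)}: {calibrated(3, q):.1f} SP")

# Planning multiplier = mean of the lognormal ratio, exp(mu + sigma^2 / 2).
print(f"multiply estimates by {math.exp(mu + sigma ** 2 / 2):.1f}x")
```

The asymmetry the section describes falls out of the math: the lognormal mean exceeds its median by exp(σ²/2), which is exactly why a symmetric "median" estimate systematically undershoots total capacity.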

Automated sprint autopsy

At the end of every sprint, Gitrevio generates a complete retrospective with root cause analysis. What drifted, why, what patterns are emerging, and what to change — all computed from actual data, not memory and opinion.

Delivered to Slack or email before your retro meeting starts. Your team walks in already aligned on what happened. The retro becomes about deciding what to do, not reconstructing what went wrong.

Trends over time

The real value isn't any single sprint — it's the trend. Are you getting more predictable? Is unplanned work decreasing? Are estimates improving? Are cross-team dependencies getting resolved faster?

Gitrevio tracks all of this and alerts you when patterns change. A single bad sprint is noise. Three sprints of declining predictability is a signal. Gitrevio knows the difference and tells you when it matters.
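The noise-versus-signal distinction can be sketched as a simple rule; the three-sprint window below is an illustrative threshold, not Gitrevio's actual detector:

```python
def predictability_signal(delivery_pcts: list[float], window: int = 3) -> bool:
    """Flag a sustained decline: every sprint-over-sprint change in the
    last `window` sprints is negative. One bad sprint alone won't trigger."""
    if len(delivery_pcts) < window + 1:
        return False
    recent = delivery_pcts[-(window + 1):]
    return all(b < a for a, b in zip(recent, recent[1:]))

# Sprint 42's dip after a strong Sprint 41 reads as noise...
print(predictability_signal([72, 68, 81, 77, 85, 68]))  # → False
# ...while three straight declines read as a signal.
print(predictability_signal([85, 80, 74, 69]))          # → True
```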

Stop guessing why sprints miss. Start measuring.

Get started free