FEATURES / ALERTS

Alerts that detect patterns, not just thresholds

Threshold-based alerts tell you a number crossed a line. Gitrevio alerts tell you a pattern is forming — a bottleneck emerging, an attrition risk shifting, a sprint going off-track — before the damage is done.

Built-in alert types

Attrition Risk Change
An engineer's attrition risk score shifted significantly. Multiple signals combined — not a single metric spike. You have time to act before it's a resignation letter.
Sprint at Risk
Delivery trajectory suggests the current sprint will miss its target. Detected mid-sprint, not at the retro. Includes the specific blockers causing the deviation.
Review Bottleneck
Review queue depth or wait time is forming a bottleneck that will impact delivery. Identifies the reviewer, the affected PRs, and suggests redistribution.
Knowledge Silo
A critical area of the codebase has concentrated ownership — bus factor approaching 1. Flagged before it becomes a crisis, with suggested knowledge transfer actions.
Unusual Activity Pattern
A contributor's work patterns deviated significantly from their baseline. Could signal burnout, context-switching overload, or a shift in responsibilities.
Quality Regression
Change-failure rate, test coverage, or code complexity trending in the wrong direction for a specific team or repo. Caught early, not after the incident.
Onboarding Stall
A new hire's ramp-up has plateaued or fallen behind your org's baseline curve. Early intervention can get them back on track.
Causal Shift Detected
Shapley attribution detects when the underlying causes of a metric shift, even if the metric itself holds steady. Example: review time is the same, but the cause shifted from queue depth to reviewer expertise mismatch — a different problem requiring a different response. A simplified sketch of the idea follows this list.
Custom Alert
Define your own alert using natural language. 'Notify me if any team's cycle time exceeds 2x their trailing average.' Gitrevio figures out the rest.
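
The causal-shift idea is easiest to see in miniature. Below is a minimal sketch of Shapley attribution over a toy additive model of review time; the factor names, model, and numbers are illustrative assumptions, not Gitrevio's production pipeline.

# Minimal Shapley-attribution sketch. The metric model and the factors
# ("queue_depth", "expertise_mismatch") are hypothetical.
from itertools import combinations
from math import factorial

def review_time(queue_depth, expertise_mismatch):
    # Toy additive model: review wait time in hours.
    return 2.0 * queue_depth + 10.0 * expertise_mismatch

baseline = {"queue_depth": 8, "expertise_mismatch": 0.2}  # last month
current = {"queue_depth": 4, "expertise_mismatch": 1.0}   # this month

def shapley_attribution(metric, baseline, current):
    # Split metric(current) - metric(baseline) across factors by
    # averaging each factor's marginal contribution over all subsets.
    names = list(baseline)
    n = len(names)
    shares = {}
    for name in names:
        others = [f for f in names if f != name]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_name = {f: current[f] if (f in subset or f == name)
                             else baseline[f] for f in names}
                without = {f: current[f] if f in subset else baseline[f]
                           for f in names}
                total += weight * (metric(**with_name) - metric(**without))
        shares[name] = total
    return shares

print(shapley_attribution(review_time, baseline, current))
# {'queue_depth': -8.0, 'expertise_mismatch': 8.0}
# The metric is unchanged (18h before and after), but the attribution
# shows the driver flipped: queue depth improved while expertise
# mismatch worsened. That flip is what a causal-shift alert surfaces.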

What alerts look like in practice

Each alert includes context, cause, and a suggested action. Not a number — a story.

SPRINT AT RISK
Backend team — Sprint 44
Triggered: Wednesday 2:14pm
At current velocity, this sprint will deliver ~60%
of committed scope (30/50 SP).
Contributing factors:
- 3 PRs blocked in review for 48h+ (reviewer: Marcus)
- 2 tickets re-opened after QA — original estimates low
- Unplanned incident response consumed ~16h this week
Suggested: Redistribute Marcus's review queue to
Sarah and David. Descope BACK-347 to next sprint.

ATTRITION RISK CHANGE
Alex Chen — Platform team
Triggered: Thursday 9:30am
Risk score moved from Low (0.2) to Medium (0.5)
over the past 3 weeks.
Signals detected:
- Commit frequency down 40% from 90-day average
- Context switching increased — touching 3x more repos
- Review participation dropped from 8/week to 2/week
- No longer participating in architecture discussions
Suggested: Schedule a 1:1 focused on engagement
and career growth. Review workload distribution.

Pattern detection, not threshold math

Traditional alerts fire when a number crosses a line: "cycle time > 5 days." The problem? That threshold is arbitrary, context-free, and either too noisy (fires constantly) or too late (fires after the damage).

Gitrevio's AI learns your team's baselines, seasonal patterns, and normal variance. It alerts on meaningful deviations — the kind that experienced engineering leaders would notice, not the kind that a simple rule catches.

It also correlates across signals. An attrition risk alert isn't triggered by a single metric — it's the combination of declining commit frequency, reduced review participation, and increased context switching that forms the pattern.
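
As a simplified illustration of baseline-relative detection: the real models also account for seasonality and correlated signals, and the numbers below are made up.

# Simplified sketch: alert on deviation from a team's own learned
# baseline rather than a fixed, one-size-fits-all threshold.
from statistics import mean, stdev

def deviates(history, current, sigmas=3.0):
    # Flag `current` if it sits more than `sigmas` standard
    # deviations from the team's trailing baseline.
    mu, sd = mean(history), stdev(history)
    return abs(current - mu) > sigmas * sd

# Two teams, same 5-day cycle time today, very different baselines.
steady = [3.0, 3.1, 2.9, 3.0, 3.2, 3.0, 2.9, 3.1]  # days
spiky = [2.0, 6.0, 3.5, 5.0, 2.5, 6.5, 3.0, 5.5]

print(deviates(steady, 5.0))  # True: a genuine anomaly for this team
print(deviates(spiky, 5.0))   # False: normal variance for this team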

Deliver anywhere, create anywhere

Alerts land where you'll see them: email, Slack (channel or DM), in-app notifications, or via MCP for agentic workflows.

Create alerts through the web UI, in AI chat, via the REST API, or through MCP. Each method is equally capable.

# Create an alert via AI chat
> Alert me in Slack if any team's
> review wait time exceeds 2x their
> 30-day average for more than 48h.
Done. I've created a review bottleneck
alert for all teams, delivered to your
Slack DM. Checking every 6 hours.
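
The same alert could be created via the REST API. Here is a hypothetical sketch in Python; the endpoint, field names, and payload schema are illustrative assumptions, and the documented API schema is authoritative.

# Hypothetical REST sketch: endpoint and field names are illustrative,
# not the documented Gitrevio API schema.
import requests

resp = requests.post(
    "https://api.gitrevio.example/v1/alerts",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    json={
        "type": "review_bottleneck",
        "scope": {"teams": "all"},
        "condition": {
            "metric": "review_wait_time",
            "exceeds": "2x_trailing_average",
            "window_days": 30,
            "sustained_hours": 48,
        },
        "delivery": {"channel": "slack_dm"},
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())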

Manage alerts, not noise

We built Gitrevio alerts with alert fatigue in mind. Every design decision aims to reduce noise and increase signal.

Smart grouping
Related alerts are grouped into a single notification. If three teams have review bottlenecks for the same reason, you get one alert with three entries — not three interruptions.
Cooldown periods
Once an alert fires, it stays quiet while the condition persists; it fires again only if the condition resolves and then recurs. No repeated notifications for the same ongoing issue.
Severity levels
Critical alerts go to Slack DM and email immediately. Medium alerts go to a channel. Low alerts batch into a daily summary. You control the routing.
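
Conceptually, that routing is just a severity-to-channel mapping. The keys and channel names below are hypothetical, not a documented configuration format.

# Hypothetical severity-to-channel routing; names are illustrative.
routing = {
    "critical": ["slack_dm", "email"],        # interrupts immediately
    "medium": ["slack_channel:#eng-alerts"],  # visible, not interrupting
    "low": ["daily_digest"],                  # batched once a day
}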

Get notified about what matters. Ignore the rest.

Get started free