SRE Incident Environment

SRE Command Console

Select a difficulty tier, pick a scenario family, and inspect alerts, rollout state, and trace clues; then compare your response against the seeded benchmark.

Tasks 0
Session Idle
Runtime ● Ready

Mode Select

Choose Difficulty

0 scenarios

Scenario List

Easy

Selected Scenario

Waiting for tasks

Choose a scenario from the selected tier to begin.

Investigation now matters: versions and dependencies start hidden, metrics must be inspected directly, and hard mode requires an explicit recovery ping before final diagnosis.

Tier -
Max Steps -
Task ID -

Live State

Service Topology

Start a session to see active alerts.
Incident ticket and operator notes appear here.
Lifecycle stage appears here.
Business impact appears here.
Traffic control status appears here.
Queue depth appears here.
Feature flag state appears here.
Regional status appears here.
Telemetry warnings appear here.
Service ownership contacts appear here.
Relevant runbook hints appear here.
Deploy and change history appears here.
Config findings appear here.
Validation status appears here after investigation or remediation.
Recent change and deployment events appear here.
Rollout and canary state appears here.
Trace clues appear here after investigation.
No services loaded.

Operator Controls

Action Composer

Suggested flow: inspect metrics or logs, discover dependencies, apply a fix only after gathering evidence, then validate recovery before submitting a diagnosis.
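The suggested flow can be sketched as an ordered action queue. This is only an illustration: the action names and parameters below are hypothetical, not the console's actual command set.

```python
# Hypothetical action queue following the suggested flow:
# gather evidence first, remediate second, validate last.
queue = [
    ("inspect_metrics", {"service": "checkout"}),        # evidence
    ("inspect_logs", {"service": "checkout"}),           # evidence
    ("discover_dependencies", {"service": "checkout"}),  # evidence
    ("apply_fix", {"action": "rollback"}),               # only after evidence
    ("validate_recovery", {}),                           # confirm before diagnosing
    ("submit_diagnosis", {"root_cause": "bad deploy"}),  # final step
]

for name, args in queue:
    print(name, args)
```

The ordering matters: remediation actions appear only after inspection actions, and validation precedes the final diagnosis, mirroring the flow described above.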

Queue

Queued Actions

No actions queued.

AI Runtime

Provider Control Center

Configure provider credentials and model settings in the browser, run AI baselines or full benchmarks, and compare the configured agent against the current human session without changing the parent console flow.

Config

Provider Settings

Selected Task none
Tier -
Session Seed 0
Current Human Session none
Saved runtime settings will appear here.

Actions

Run AI Workflows

The AI runtime page sends provider configuration with each request, so the current console and existing scripted/human flows remain unchanged.
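As an illustration of per-request provider configuration, a request payload might be bundled as follows. The field names (`task_id`, `session_seed`, `provider`) and the helper itself are assumptions for the sketch, not the console's actual API.

```python
import json

def build_runtime_request(task_id, provider, model, api_key, seed=0):
    """Bundle provider settings into a single run request (hypothetical shape).

    Because credentials and model settings travel with each request,
    the server keeps no per-agent state, and scripted or human
    sessions are unaffected by AI runtime configuration.
    """
    return {
        "task_id": task_id,
        "session_seed": seed,
        "provider": {"name": provider, "model": model, "api_key": api_key},
    }

payload = build_runtime_request("easy-demo-task", "example-provider", "example-model", "KEY")
print(json.dumps(payload, indent=2))
```

Sending configuration per request (rather than persisting it server-side) is what lets the AI runtime page coexist with the parent console flow without changing it.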

Provider validation messages appear here.

Results

AI Baseline Output

Run an AI baseline on the selected task to see results here.

Results

AI Benchmark Output

Run a benchmark to compare the configured AI across all scenarios.

Comparison

Human vs AI

Load a human session and compare it against the configured AI runtime.

Activity

Runtime Notes

Saved runtime actions and recent AI runs appear here.