Signal Stream

Signals are atomic observations captured from anywhere work happens. Error logs, team conversations, user complaints, build failures, design reviews — anything that makes you say "we should look at that" is a signal. They flow through a lifecycle, cluster into patterns, and promote into intent.

Intent Signals: 43 · Active: 43 · Promoted: 0 · Clusters: 8

Where Signals Come From

Human Sources: 💬 Conversation · 🧠 Team thought · 📝 Design review · 🎯 Retro insight · 📧 User complaint · 🗳️ Submitted issue
System Sources: 🔴 Error log · 🏗️ Build failure · 📊 Metric anomaly · ⚠️ Alert fired · 🔀 PR review · 🤖 Agent trace
Capture Surfaces: ⌨️ CLI · 🔌 MCP server · 💬 Slack · 🐙 GitHub · 🔗 AI plugin
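Whatever the surface, a captured signal can be represented as a small record. This is a hypothetical sketch; the field names are illustrative and not an actual API of any surface listed above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Signal:
    """Hypothetical capture record for one atomic observation."""
    source: str   # e.g. "slack", "error-log", "cli" (illustrative labels)
    body: str     # the raw observation text
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Capture a signal from a team conversation.
sig = Signal(source="slack", body="Signals die in context switches")
```

Keeping the record this small is the point: capture should be cheap enough to happen anywhere work happens.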
Signal lifecycle: Capture → Clusters form → Frequency grows → Patterns emerge → Intent clarifies

How Patterns Emerge

Think of signals like error codes. One error in a month is noise. The same error 12,000 times in a day is an emergency. Signals work the same way — frequency, co-occurrence, and cross-referencing reveal what matters.
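The error-code analogy can be made concrete with a simple frequency triage. The daily threshold below is illustrative, not a value from the system:

```python
from collections import Counter

# Illustrative cutoff: above this many occurrences per day, a signal
# kind stops being noise. Not a threshold defined by the system.
DAILY_EMERGENCY_THRESHOLD = 1_000

def triage(observations: list[str]) -> dict[str, str]:
    """Label each signal kind by its daily frequency."""
    counts = Counter(observations)
    return {
        kind: "emergency" if n >= DAILY_EMERGENCY_THRESHOLD else "noise"
        for kind, n in counts.items()
    }

# One rare error is noise; 12,000 of the same error in a day is not.
events = ["ERR_TIMEOUT"] * 12_000 + ["ERR_RARE"]
labels = triage(events)  # ERR_TIMEOUT -> "emergency", ERR_RARE -> "noise"
```

Frequency alone is the crudest filter; the sections below add co-occurrence and cross-referencing on top of it.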

1. Scatter

Signals arrive from everywhere. A developer mentions friction in Slack. An error log spikes. A user files the same bug for the third time. A design review surfaces a gap nobody planned for.

SIG-008: Signals die in context switches
SIG-011: Multi-surface capture needed
SIG-014: Agent context drift
2. Cluster

Unrelated signals start referencing each other. Three signals from different sources all point at the same underlying friction. The system groups them — or a human notices the pattern. Either way, the cluster becomes visible.

signal-capture-surfaces (4 signals)
work-ontology-design (3 signals)
autonomous-infrastructure (2 signals)
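Grouping signals that reference each other is essentially finding connected components. A minimal union-find sketch, with hypothetical signal IDs and references:

```python
def cluster(references: dict[str, set[str]]) -> list[set[str]]:
    """Group signals into clusters via their cross-references."""
    parent = {s: s for s in references}

    def find(x: str) -> str:
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for sig, refs in references.items():
        for ref in refs:
            if ref in parent:                  # ignore dangling references
                parent[find(sig)] = find(ref)  # union the two components

    groups: dict[str, set[str]] = {}
    for s in references:
        groups.setdefault(find(s), set()).add(s)
    return list(groups.values())

# Three signals chained by references form one cluster; SIG-002 stays alone.
refs = {
    "SIG-008": {"SIG-011"},
    "SIG-011": {"SIG-014"},
    "SIG-014": set(),
    "SIG-002": set(),
}
clusters = cluster(refs)
```

The same structure works whether the edges come from automatic cross-referencing or from a human tagging two signals as related.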
3. Emerge

A cluster with enough weight becomes a candidate intent — a real problem worth solving. The signal amplification score factors in frequency, recency, and cross-referencing to surface what's actually urgent, not just what's loud.

Cluster → Intent → Spec → Execution ("like PageRank for work priorities")
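One plausible shape for an amplification score that factors in frequency, recency, and cross-referencing; the log weighting, half-life, and multipliers here are assumptions, not the system's actual formula:

```python
import math

SECONDS_PER_DAY = 86_400

def amplification(frequency: int, last_seen_ts: float, cross_refs: int,
                  now: float, half_life_days: float = 7.0) -> float:
    """Hypothetical amplification score for a signal cluster.

    - frequency is dampened logarithmically (loud != urgent),
    - recency decays exponentially with an assumed 7-day half-life,
    - each cross-reference multiplies the score upward.
    """
    age_days = (now - last_seen_ts) / SECONDS_PER_DAY
    recency = 0.5 ** (age_days / half_life_days)
    return math.log1p(frequency) * recency * (1 + cross_refs)

now = 1_700_000_000.0
# A fresh, cross-referenced cluster outranks an equally frequent stale one.
fresh = amplification(frequency=40, last_seen_ts=now, cross_refs=3, now=now)
stale = amplification(frequency=40, last_seen_ts=now - 30 * SECONDS_PER_DAY,
                      cross_refs=0, now=now)
```

The log on frequency is what keeps "loud" from automatically meaning "urgent": doubling the event count adds less than doubling the score.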
Lifecycle states: Captured → Active → Clustered → Promoted
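The four lifecycle states suggest a forward-only state machine. A minimal sketch, with the transition rules assumed from the ordering above rather than specified by the source:

```python
from enum import Enum, auto

class SignalState(Enum):
    CAPTURED = auto()
    ACTIVE = auto()
    CLUSTERED = auto()
    PROMOTED = auto()

# Assumed forward-only transitions; the source does not define
# whether states can be skipped or reversed.
TRANSITIONS = {
    SignalState.CAPTURED:  {SignalState.ACTIVE},
    SignalState.ACTIVE:    {SignalState.CLUSTERED},
    SignalState.CLUSTERED: {SignalState.PROMOTED},
    SignalState.PROMOTED:  set(),  # terminal state
}

def advance(state: SignalState, to: SignalState) -> SignalState:
    """Move a signal to the next state, rejecting illegal jumps."""
    if to not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {to.name}")
    return to
```

Making promotion a terminal state matches the model above: once a cluster becomes intent, further work happens on the intent, not the signal.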

Autonomy Levels (Trust-Based)

L0: Human Drives — trust < 0.2, no autonomous action
L1: Agent Assists — trust 0.2–0.4, suggests actions
L2: Agent Decides, Human Approves — trust 0.4–0.6, proposes and awaits approval
L3: Agent Executes, Human Monitors — trust 0.6–0.85, acts with oversight
L4: Full Autonomy — trust ≥ 0.85, no human intervention
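The trust thresholds above map mechanically to a level. A small helper; the level names come from the table, while the function itself is hypothetical:

```python
def autonomy_level(trust: float) -> str:
    """Map a trust score in [0, 1] to an autonomy level.

    Thresholds are taken from the table above.
    """
    if not 0.0 <= trust <= 1.0:
        raise ValueError(f"trust must be in [0, 1], got {trust}")
    if trust >= 0.85:
        return "L4: Full Autonomy"
    if trust >= 0.6:
        return "L3: Agent Executes, Human Monitors"
    if trust >= 0.4:
        return "L2: Agent Decides, Human Approves"
    if trust >= 0.2:
        return "L1: Agent Assists"
    return "L0: Human Drives"
```

Checking thresholds from highest to lowest keeps the boundary cases unambiguous: a score of exactly 0.85 lands at L4, exactly 0.2 at L1.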