Each agent maps to a phase of the Intent loop. They connect to MCP servers through scoped tool access, use model routing for cost efficiency, and never do each other's work. Knowledge operations shape every decision.
Compiles raw sources into structured knowledge artifacts. Reads immutable raw/ files, creates or updates personas, journeys, decisions, themes, domain models, and design rationale in knowledge/. Every ingest touches 10–15 artifacts. Maintains the master index, traceability matrix, and activity log.
Queries the compiled knowledge base and runs lint checks. Synthesizes answers with citations to knowledge artifacts and raw sources. Lint detects contradictions, orphans, stale claims, missing cross-refs, and coverage gaps. Each finding becomes a suggested signal for Notice.
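The lint-to-signal handoff can be sketched as a small conversion function. The finding and signal field names below are illustrative assumptions, not the system's actual schema:

```python
# Hypothetical shape of a lint finding and its suggested-signal conversion.
# Field names are assumptions; the lint kinds come from the text above.
LINT_KINDS = ("contradiction", "orphan", "stale_claim", "missing_crossref", "coverage_gap")

def to_suggested_signal(finding: dict) -> dict:
    """Turn one lint finding into a suggested signal for Notice."""
    assert finding["kind"] in LINT_KINDS, f"unknown lint kind: {finding['kind']}"
    return {
        "type": "observation",
        "content": f"Lint {finding['kind']} in {finding['artifact']}: {finding['detail']}",
        "source": "knowledge-querier lint",
    }

suggested = to_suggested_signal({
    "kind": "orphan",
    "artifact": "knowledge/themes/onboarding.md",
    "detail": "no inbound cross-references",
})
print(suggested["content"])
```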
Captures raw signals and computes trust scores. Use when noticing something worth tracking — decisions, risks, requirements, patterns, observations from conversations, code, or agent traces.
Every signal needs clear, specific content (not vague summaries), a source attribution, honest trust factor scoring, and a confidence assessment. A single meeting might produce 3–8 signals of different types.
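A captured signal could be represented as a small record like the one below. The field names, trust factors, and the simple averaging formula are illustrative assumptions; the system's actual trust scoring is not specified here:

```python
from dataclasses import dataclass, field

# Hypothetical trust factors -- the real scoring formula may weight them differently.
TRUST_FACTORS = ("source_reliability", "specificity", "corroboration", "recency")

@dataclass
class Signal:
    id: str                       # e.g. "SIG-042"
    kind: str                     # decision | risk | requirement | pattern | observation
    content: str                  # clear, specific statement -- not a vague summary
    source: str                   # attribution: meeting, file, code, agent trace
    factors: dict = field(default_factory=dict)   # factor name -> score in 0.0..1.0
    confidence: str = "medium"

    def trust_score(self) -> float:
        """Average the scored trust factors (illustrative formula)."""
        if not self.factors:
            return 0.0
        return sum(self.factors.values()) / len(self.factors)

sig = Signal(
    id="SIG-001",
    kind="risk",
    content="Checkout latency exceeds 2s at peak; users abandon carts.",
    source="2024-06-10 ops review meeting",
    factors={"source_reliability": 0.9, "specificity": 0.8, "corroboration": 0.5},
)
print(round(sig.trust_score(), 2))  # 0.73
```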
Enriches signals: rescores trust, clusters related signals, manages amplification, and promotes clusters to intents. Use when signals need analysis, grouping, or elevation to problems worth solving.
Creates specifications and contracts from intents. Specs are contracts, not stories — precise enough that an AI agent can execute against them autonomously.
Every spec needs a problem statement grounded in signal evidence (cite SIG-NNN IDs), solution description, contracts (4 types), testable acceptance criteria, explicit out-of-scope boundaries, and test scenarios.
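The required spec sections listed above can be sketched as a data shape with a completeness check. Field names are assumptions, and the four contract types are deliberately left unnamed since the text does not enumerate them:

```python
from dataclasses import dataclass, field

@dataclass
class Contract:
    kind: str          # one of the spec's four contract types (not named here)
    statement: str     # the observable guarantee
    verify_cmd: str    # command the contract-verifier runs

@dataclass
class Spec:
    problem: str                  # grounded in signal evidence
    evidence: list                # cited signal IDs, e.g. ["SIG-007", "SIG-012"]
    solution: str
    contracts: list = field(default_factory=list)
    acceptance_criteria: list = field(default_factory=list)
    out_of_scope: list = field(default_factory=list)
    test_scenarios: list = field(default_factory=list)

    def is_complete(self) -> bool:
        """A spec is agent-ready only when every required section is filled."""
        return bool(self.problem and self.evidence and self.solution
                    and self.contracts and self.acceptance_criteria
                    and self.out_of_scope and self.test_scenarios)

draft = Spec(problem="Checkout is slow at peak", evidence=["SIG-001"], solution="Cache pricing lookups")
print(draft.is_complete())  # False: contracts, criteria, scope, and scenarios are missing
```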
Verifies contracts against implementation. Runs verification commands, inspects outputs, records results. The quality gate between execution and completion.
Critical failures block completion. Major failures flag for review. Minor failures are noted but don't block.
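The severity policy above amounts to a small gate function. This is a minimal sketch of the stated policy, assuming failures carry a `severity` field:

```python
def gate(failures: list) -> str:
    """Map verification failures to a gate outcome, worst severity wins."""
    severities = {f["severity"] for f in failures}
    if "critical" in severities:
        return "blocked"   # critical failures block completion
    if "major" in severities:
        return "review"    # major failures flag for review
    return "pass"          # minor failures are noted but don't block

print(gate([{"severity": "minor"}]))                         # pass
print(gate([{"severity": "minor"}, {"severity": "major"}]))  # review
print(gate([{"severity": "critical"}]))                      # blocked
```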
Intents become agent-ready specs through four-persona interrogation. Each persona queries the knowledge base, checks existing decisions, and generates structured questions and assertions. Brien reviews specs, not execution.
Queries DDRs, domain models, rationale. Outputs boundaries, approach, key decisions.
Queries personas, journeys, themes. Outputs why it matters, behavioral change.
Queries DDR validation criteria, existing contracts. Outputs observable outcomes, verification commands.
Queries the spec itself + trust formula. Outputs trust score, ambiguity flags, recommended autonomy level.
Monitors the system, detects deltas, and closes the loop by suggesting new signals from event patterns. The critical feedback mechanism.
Watch for: repeated contract failures, unclustered signal backlogs, specs with no contract verifications, and trust boundary crossings.
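The watch conditions can be expressed as predicates over a pipeline-state snapshot. The snapshot fields and thresholds below are hypothetical, chosen only to illustrate the checks:

```python
# Hypothetical pipeline-state snapshot; field names and thresholds are assumptions.
state = {
    "contract_failures_by_spec": {"SPEC-004": 3},
    "unclustered_signals": 27,
    "specs_without_verification": ["SPEC-009"],
    "trust_boundary_events": [],
}

def deltas(state: dict, fail_threshold: int = 2, backlog_threshold: int = 20) -> list:
    """Return one delta per triggered watch condition; each becomes a suggested signal."""
    found = []
    for spec, n in state["contract_failures_by_spec"].items():
        if n >= fail_threshold:
            found.append(f"repeated contract failures on {spec}")
    if state["unclustered_signals"] > backlog_threshold:
        found.append("unclustered signal backlog")
    for spec in state["specs_without_verification"]:
        found.append(f"no contract verification for {spec}")
    for event in state["trust_boundary_events"]:
        found.append(f"trust boundary crossing: {event}")
    return found

for d in deltas(state):
    print(d)
```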
The coordinator plans, delegates, and synthesizes. It never does the work itself. It routes to the right agent, in the right order, with the right model. Start every session by asking the observer for system_health to understand the current pipeline state.
Cost efficiency through intelligent model selection. Fast, cheap models for simple capture and queries. Reasoning models for synthesis, judgment, and compilation decisions.
| Agent | Model | Rationale |
|---|---|---|
| knowledge-compiler | Sonnet | Needs reasoning for cross-reference synthesis and compilation decisions |
| knowledge-querier | Haiku | Simple lookup and pattern matching for queries and lint |
| signal-capture | Haiku | Cheap, fast — simple capture and trust scoring |
| signal-enricher | Sonnet | Needs reasoning for clustering and promotion decisions |
| spec-writer | Sonnet | Needs precision for contract definition and completeness |
| contract-verifier | Sonnet | Needs judgment for verification result interpretation |
| observer | Sonnet | Needs pattern detection for delta and drift analysis |
| coordinator | Opus | Orchestration requires highest reasoning capability |
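The routing table above can live as plain data the coordinator consults. The agent names come from the table; the lowercase model identifiers are illustrative tiers, not exact API model names:

```python
# Routing table as data; model strings are illustrative tiers, not exact API names.
MODEL_ROUTING = {
    "knowledge-compiler": "sonnet",
    "knowledge-querier": "haiku",
    "signal-capture": "haiku",
    "signal-enricher": "sonnet",
    "spec-writer": "sonnet",
    "contract-verifier": "sonnet",
    "observer": "sonnet",
    "coordinator": "opus",
}

def model_for(agent: str) -> str:
    """Fail loudly on unknown agents instead of silently defaulting to a costly model."""
    try:
        return MODEL_ROUTING[agent]
    except KeyError:
        raise ValueError(f"no routing entry for agent {agent!r}")

print(model_for("signal-capture"))  # haiku
```

Keeping the mapping strict means a misnamed agent surfaces as an error rather than an unexpected bill.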