Here's what happens when a single signal triggers the entire loop — from the moment it's noticed to the new signals it produces.
This walkthrough traces a real signal — SIG-010, "Engineer's team rewired around AI: tickets became bot specs, refinement became design sessions" — through the complete Intent loop. Every ID, timestamp, and event line below comes from Intent's own .intent/ directory. This is the methodology dogfooding itself.
The signal was captured during a conversation with an engineer named Ari, whose team had organically evolved their workflow when AI agents started handling implementation. What they discovered independently is the same pattern Intent formalizes: when implementation time collapses, the bottleneck moves upstream to specification clarity.
In the conversation, Ari described two concrete shifts. Tickets stopped being task descriptions for humans and became specifications written for bots. Refinement meetings stopped being estimation sessions and became collaborative design sessions where the team shaped what to build next.
This observation was captured as a signal with high confidence (0.9) because it came from a direct first-person account — not a blog post or a theory, but someone describing what actually happened on their team.
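Capture itself is mechanically simple. The sketch below is illustrative only, not the real `bin/intent-signal` implementation; the frontmatter fields (`confidence`, `source`, `captured`) mirror what this walkthrough describes, but the exact schema is an assumption.

```python
from datetime import datetime, timezone
from pathlib import Path

def capture_signal(title: str, slug: str, confidence: float, source: str,
                   signals_dir: Path = Path(".intent/signals")) -> Path:
    """Write a signal as a markdown file with YAML frontmatter (sketch)."""
    signals_dir.mkdir(parents=True, exist_ok=True)
    today = datetime.now(timezone.utc).date().isoformat()
    path = signals_dir / f"{today}-{slug}.md"
    path.write_text(
        "---\n"
        f"title: {title}\n"
        f"confidence: {confidence}\n"
        f"source: {source}\n"
        f"captured: {today}\n"
        "---\n\n"
        f"# {title}\n"
    )
    return path

# A direct first-person account starts at high confidence;
# secondhand reports or theories would start lower.
capture_signal("Tickets became bot specs", "ari-pattern-tickets-as-bot-specs",
               confidence=0.9, source="conversation")
```

The point is the asymmetry: capturing is cheap enough to do mid-conversation, while scoring and clustering happen later.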
Four signals converged into a single cluster. The pattern: teams that adopt AI agents don't just get faster — they undergo a structural shift where the atomic unit of work changes from a labor task to a specification artifact. Tickets become bot specs. Estimation becomes design. Standups become signal reviews.
The cluster carries more weight than any individual signal because it represents convergent evidence from multiple observations. Where SIG-010 was one engineer's story, the cluster is a structural pattern visible across different vantage points.
A cluster is an observation about observations — a meta-signal that says "these things are connected." But it still does not say what to do. The promotion step is where the system (or the human operating it) makes a decision: this pattern is important enough to act on.
Promotion criteria are deliberately simple in Intent's current phase. The question is not "is this statistically significant?" but rather "does this pattern, if true, change what we should build next?" SIG-010's cluster cleared that bar easily. If teams are organically rewiring around specifications, then Intent's spec product needs to be real tooling, not just methodology documentation.
The trust score at this stage is still low (0.15) because the evidence is qualitative — conversations, not metrics. The autonomy level stays at L0: a human must make the promotion decision. In a mature deployment with quantitative signal sources (OTel traces, PR analytics, deployment metrics), this could reach L2 or L3 and the system could auto-promote with human approval.
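The interaction between trust score and autonomy level can be sketched as a simple gate. The thresholds below are illustrative assumptions, not Intent's actual policy; the only constraint taken from the text is that L0 always routes the decision to a human, and that higher levels with higher trust allow auto-promotion subject to human approval.

```python
def promotion_gate(trust: float, autonomy_level: int) -> str:
    """Decide who makes the promotion call for a cluster (illustrative).

    Thresholds are assumptions for the sketch; a real deployment
    would tune them against quantitative signal sources.
    """
    if autonomy_level == 0 or trust < 0.3:
        return "human-decides"          # qualitative evidence, low trust
    if autonomy_level >= 2 and trust >= 0.6:
        return "auto-promote-with-human-approval"
    return "system-proposes"            # human confirms or rejects

# SIG-010's cluster: trust 0.15 at L0 means a human must decide.
assert promotion_gate(trust=0.15, autonomy_level=0) == "human-decides"
```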
The cluster crossed the promotion threshold. The system created a formal intent — a named, trackable unit of strategic direction. INT-003 declares that Intent's spec product must evolve from methodology documentation into active tooling: templates, validators, and agent-readable formats that make specification the natural workflow, not an overhead.
An intent is not a ticket. It does not have story points, an assignee column, or a sprint deadline. It has signals (evidence), a direction (what needs to change), and a trace ID that links every downstream action back to the original observations.
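The shape of an intent can be made concrete as data. This is a hypothetical sketch of the structure described above, not the actual `.intent/` schema; the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """A named, trackable unit of strategic direction (not a ticket)."""
    id: str            # e.g. "INT-003"
    direction: str     # what needs to change
    signals: list      # evidence: the signal IDs that motivated it
    trace_id: str      # links every downstream event back to the origin
    # Deliberately absent: story points, assignee, sprint deadline.

int_003 = Intent(
    id="INT-003",
    direction="Evolve the spec product from methodology docs into tooling",
    signals=["SIG-007", "SIG-008", "SIG-009", "SIG-010"],
    trace_id="INT-003",
)
```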
The intent captures the "what needs to change." The spec captures the "what to build, how to verify it." This is the most important transition in the loop — the moment where strategic direction becomes executable specification.
In traditional agile, this is where a product manager writes user stories and an architect writes technical specs. In Intent, the spec is a single artifact that serves both humans and agents. It contains a narrative (why this matters, what it connects to), acceptance criteria (verifiable conditions), and contracts (binary pass/fail assertions an agent can run).
For INT-003, the spec needed to define what "spec tooling" actually means in concrete terms. Not "build a tool" but "create a CLI command intent-spec that generates a markdown file with YAML frontmatter, links to the parent intent, includes acceptance criteria from a template, and emits a spec.created event to the event log." Every clause is verifiable. An agent can run each assertion and return pass or fail.
The spec was authored against the intent's direction. It defines a CLI tool (intent-spec) that creates spec files in .intent/specs/, links them to parent intents via frontmatter, generates acceptance criteria from configurable templates, and emits structured events to events.jsonl.
The spec includes three contracts: (1) running intent-spec "Test" produces a valid markdown file with required frontmatter fields, (2) the generated file references its parent intent ID, and (3) a spec.created event appears in the event log within one second of creation. Each contract is a binary assertion — pass or fail, no ambiguity.
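Because each contract is binary, all three can be written directly as assertions. The sketch below is a simplified stand-in for `intent-spec` plus its contract checks, with illustrative paths and frontmatter fields; it is not the actual implementation the agent produced.

```python
import json
import time
from datetime import datetime, timezone
from pathlib import Path

SPECS = Path(".intent/specs")
EVENTS = Path(".intent/events/events.jsonl")

def create_spec(title: str, parent_intent: str) -> Path:
    """Simplified stand-in for `intent-spec`: write the file, emit the event."""
    SPECS.mkdir(parents=True, exist_ok=True)
    EVENTS.parent.mkdir(parents=True, exist_ok=True)
    path = SPECS / f"{title.lower().replace(' ', '-')}.md"
    path.write_text(
        f"---\ntitle: {title}\nintent: {parent_intent}\n---\n\n# {title}\n"
    )
    event = {"version": "0.1.0", "event": "spec.created",
             "timestamp": datetime.now(timezone.utc).isoformat(),
             "data": {"file": str(path), "intent": parent_intent}}
    with EVENTS.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return path

# Contract 1: the tool produces a file with required frontmatter.
start = time.monotonic()
spec = create_spec("Test", parent_intent="INT-003")
text = spec.read_text()
assert text.startswith("---") and "title: Test" in text

# Contract 2: the generated file references its parent intent ID.
assert "intent: INT-003" in text

# Contract 3: a spec.created event appears in the log within one second.
last = json.loads(EVENTS.read_text().splitlines()[-1])
assert last["event"] == "spec.created"
assert time.monotonic() - start < 1.0
```

An agent running these assertions gets pass or fail with no room for interpretation, which is exactly what makes the spec executable.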
This is where the loop crosses from human territory into agent territory. Everything before this point was authored by a person: noticing the signal, judging the cluster, promoting the intent, shaping the spec. From here forward, an AI agent takes over.
The spec is the contract between humans and agents. Humans define what "done" looks like. Agents figure out how to get there. This division is deliberate — it puts humans in charge of judgment (what matters, what's correct, what's safe) and agents in charge of labor (creating files, running tests, emitting events).
In Intent's current dogfood deployment, the agent is Claude Code running in a terminal session. It receives the spec, reads the contracts, and begins execution. The agent does not need to understand why this work matters — it needs to understand what the contracts require and produce outputs that pass them. The "why" lives in the spec's narrative section, which humans read during review.
Claude Code received the spec and began executing against its contracts. The agent created the bin/intent-spec CLI script (following the established pattern from bin/intent-signal and bin/intent-intent), added the spec template to .intent/templates/, and updated the MCP server to include a corresponding intent_create_spec tool.
Execution took 3 minutes and 44 seconds. The agent made six tool invocations: three file creates (CLI script, template, MCP tool definition), two file edits (server.py to register the tool, CLAUDE.md to document the new command), and one git commit bundling the changes.
All three contracts passed on first execution. The CLI produces valid frontmatter, references the parent intent, and emits the event. No human intervention was needed during execution — the spec was clear enough for the agent to work autonomously.
The contracts passed. The work is "done" in the traditional sense — the feature exists, the tests pass, the code is committed. But Intent does not stop here. The Observe phase asks: what did we learn from doing this work?
Observation is not a retrospective. It is not a ceremony scheduled for the end of a sprint. It is a continuous layer that runs alongside and after execution, capturing insights that become input to the next Notice cycle. What worked? What was surprising? What new questions emerged?
In this case, the observation layer noticed two things. First, the agent completed execution faster than expected because the CLI pattern was already established — the existing intent-signal and intent-intent tools served as architectural precedent. This is a signal about the value of consistent patterns: each new tool in the same family is cheaper than the last. Second, the spec's narrative section was never read by the agent — it only consumed the contracts. This raises a question: is the narrative section purely for human reviewers, or should agents use it for disambiguation when contracts are ambiguous?
The observation layer captured two insights from the INT-003 execution cycle. These are not conclusions — they are new signals that enter the Notice layer and begin their own journey through the loop.
Observation 1: Architectural precedent accelerates agent execution. When the CLI pattern (find root, generate ID, write frontmatter, emit event) was already established by two prior tools, the agent completed the third tool in under 4 minutes with zero contract failures. Consistent architecture is not just good engineering — it's an agent force multiplier.
Observation 2: Spec narrative is human-only context. The agent consumed only the contracts section of the spec. The narrative (connecting the work to Ari's pattern and the ceremony wall signal) was ignored during execution. This suggests the spec has two audiences with different needs: humans who need the "why" and agents who need the "what."
The two observations from INT-003 now become input to the next Notice cycle. Observation 1 (architectural precedent as force multiplier) feeds into how the team thinks about structuring future specs — should there be a "pattern library" that agents reference? Observation 2 (spec narrative as human-only context) raises a design question about the spec schema itself — should the narrative section be explicitly marked as audience: human?
Neither observation needs to be acted on immediately. They enter the signal pool, get scored for confidence and trust, and wait to be clustered with other signals that point in the same direction. The loop does not demand urgency — it demands attention. Signals that matter will accumulate evidence. Signals that don't will fade naturally.
This is the fundamental difference between Intent and ceremony-driven methodologies. There is no sprint boundary forcing a decision. There is no backlog grooming session where someone must prioritize these observations against unrelated work. The observations exist in the signal stream, carrying their own context, ready to participate in whatever pattern emerges next.
The two observations from the INT-003 execution cycle are now live signals in the .intent/signals/ directory. They carry full provenance: which intent produced them, which spec was executed, which contracts passed, and what the observation layer noticed. The loop is closed.
But "closed" does not mean "finished." The observations from this cycle become the raw material for the next cluster, the next intent, the next spec. Ari's team rewiring around AI led to spec tooling, which led to observations about agent patterns and spec architecture, which will lead to whatever comes next. The loop is continuous. There is no "done" — only "done for now, and here's what we learned."
This is the actual event stream recorded in .intent/events/events.jsonl for the signals involved in this walkthrough:
{"version":"0.1.0","event":"signal.created","timestamp":"2026-03-29T18:01:38Z","trace_id":null,"span_id":"SIG-010","parent_id":null,"source":"github-action","data":{"title":"Engineer's team rewired around AI: tickets became bot specs, refinement became design sessions","file":".intent/signals/2026-03-29-ari-pattern-tickets-as-bot-specs.md"}}
{"version":"0.1.0","event":"signal.created","timestamp":"2026-03-29T18:01:38Z","trace_id":null,"span_id":"SIG-007","parent_id":null,"source":"github-action","data":{"title":"Teams using AI agents hit a ceremony wall around sprint 3","file":".intent/signals/2026-03-29-ceremony-wall-sprint-3.md"}}
{"version":"0.1.0","event":"signal.created","timestamp":"2026-03-29T18:01:38Z","trace_id":null,"span_id":"SIG-009","parent_id":null,"source":"github-action","data":{"title":"Intent is four products (Notice, Spec, Execute, Observe), not one — each needs its own roadmap","file":".intent/signals/2026-03-29-four-products-not-one.md"}}
{"version":"0.1.0","event":"signal.created","timestamp":"2026-03-29T18:01:38Z","trace_id":null,"span_id":"SIG-008","parent_id":null,"source":"github-action","data":{"title":"Signals die in the gap between where they're noticed and where the system can see them","file":".intent/signals/2026-03-29-signals-die-in-context-switch.md"}}
{"version":"0.1.0","event":"signal.created","timestamp":"2026-03-29T22:08:48Z","trace_id":null,"span_id":"SIG-010","parent_id":null,"source":"github-action","data":{"title":"Engineer's team rewired around AI: tickets became bot specs, refinement became design sessions","file":".intent/signals/2026-03-29-ari-pattern-tickets-as-bot-specs.md"}}
{"version":"0.1.0","event":"signal.created","timestamp":"2026-03-29T22:08:48Z","trace_id":null,"span_id":"SIG-012","parent_id":null,"source":"github-action","data":{"title":"Autonomous signal processing with trust-based execution levels","file":".intent/signals/2026-03-29-autonomous-signal-processing-trust-levels.md"}}
{"version":"0.1.0","event":"signal.created","timestamp":"2026-03-29T22:19:05Z","trace_id":null,"span_id":"SIG-014","parent_id":null,"source":"github-action","data":{"title":"Agent context limits cause content drift during multi-file pushes","file":".intent/signals/2026-03-29-agent-context-limits-cause-content-drift.md"}}
Events are stored as newline-delimited JSON, one per line. Each event is a trace span: timestamp, span_id (SIG-xxx, INT-xxx), event type, and structured data. The trace_id field links events to their parent intent once promoted — these signals are pre-promotion, so trace_id is null.
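Reading the stream back is straightforward. A sketch of filtering events.jsonl by span, using the field names visible in the lines above; note that SIG-010 would match twice in the stream shown, since its creation event was emitted in two capture runs.

```python
import json
from pathlib import Path

def events_for_span(log: Path, span_id: str) -> list:
    """Return all events in a newline-delimited JSON log for one span."""
    matches = []
    for line in log.read_text().splitlines():
        if not line.strip():
            continue  # tolerate blank lines in the log
        event = json.loads(line)
        if event.get("span_id") == span_id:
            matches.append(event)
    return matches

# Demo with two lines shaped like the walkthrough's stream (data truncated):
log = Path("events_demo.jsonl")
log.write_text(
    '{"event":"signal.created","span_id":"SIG-010","trace_id":null}\n'
    '{"event":"signal.created","span_id":"SIG-007","trace_id":null}\n'
)
assert len(events_for_span(log, "SIG-010")) == 1
```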
Want to understand this loop better?
From Signal to Cluster
Once Ari's observation was captured, the Notice layer had a live signal to work with. But a single signal is not enough to act on — it's too noisy, too specific to one team's experience. The value of a signal depends on whether it participates in a pattern. This is where clustering happens.
The system looked at other signals captured in the same time window and asked: is there a pattern here? Three other signals had been captured that day — SIG-007 about teams hitting a "ceremony wall" around sprint 3, SIG-008 about signals dying in context switches, and SIG-009 about Intent being four products rather than one. All of them pointed toward the same underlying dynamic: teams reorganizing themselves around AI, and existing processes failing to keep up.
Clustering does not require all signals to say the same thing. It requires them to rhyme — to orbit the same underlying tension. Ari's team rewiring tickets (SIG-010) and teams hitting a ceremony wall (SIG-007) are different observations, but they share a root cause: when AI collapses implementation time, the ceremonies designed to manage human labor become bottlenecks.
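One cheap way to let signals rhyme without requiring identical wording is to group them on overlapping tags rather than exact titles. This is a toy heuristic under that assumption — the tags below are invented for illustration, and Intent's actual clustering mechanism is not specified in this walkthrough.

```python
def cluster_by_tags(signals: dict, min_overlap: int = 2) -> list:
    """Group signals sharing at least `min_overlap` tags (toy heuristic)."""
    clusters = []
    for sig, tags in signals.items():
        for cluster in clusters:
            if len(tags & cluster["tags"]) >= min_overlap:
                cluster["members"].append(sig)
                cluster["tags"] |= tags  # cluster vocabulary grows as it absorbs
                break
        else:
            clusters.append({"members": [sig], "tags": set(tags)})
    return clusters

# Hypothetical tags for the four signals in this walkthrough:
signals = {
    "SIG-010": {"ai-agents", "specs", "workflow-shift"},
    "SIG-007": {"ai-agents", "ceremonies", "workflow-shift"},
    "SIG-008": {"ai-agents", "context-switch", "workflow-shift"},
    "SIG-009": {"ai-agents", "workflow-shift", "product-structure"},
}
clusters = cluster_by_tags(signals)
# All four orbit "ai-agents" + "workflow-shift", so they converge into one cluster.
```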
See Methodology: Signal Clustering →