OTel-native distributed tracing for the Intent loop. Every signal, spec, contract, and event — queryable, visualizable, alertable.
Telemetry flows from four source types through a unified OTel Collector into purpose-built storage backends, all visualized in a single Grafana instance.
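A minimal collector configuration for this topology might look like the sketch below. The exporter names and backend endpoints are placeholders for illustration (they depend on the otelcol-contrib version and deployment), not part of the Intent distribution:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # Claude Code and the file-tail adapter both ship here

processors:
  batch: {}

exporters:
  otlp/tempo:                    # traces -> Tempo
    endpoint: tempo:4317
    tls:
      insecure: true
  prometheusremotewrite:         # metrics -> Mimir
    endpoint: http://mimir:9009/api/v1/push
  otlphttp/loki:                 # logs -> Loki (endpoint is illustrative)
    endpoint: http://loki:3100/otlp

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheusremotewrite]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/loki]
```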
The File Tail Adapter is a lightweight Python process that watches events.jsonl and converts each line into an OTLP span. Claude Code exports directly via the OTLP gRPC endpoint, bypassing the file layer entirely for lower latency.
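The adapter's core loop can be sketched with the standard library alone. The event field names here (event, trace_id, artifact_id, ts) are illustrative; the real adapter maps whatever schema events.jsonl actually uses, and hands the result to an OTLP exporter rather than returning a dict:

```python
import json
import time
from pathlib import Path

def event_to_span(line: str) -> dict:
    """Convert one events.jsonl line into an OTLP-shaped span dict."""
    event = json.loads(line)
    return {
        "name": event.get("event", "intent.event"),
        "traceId": event.get("trace_id"),  # null for orphan signals
        "attributes": {"intent.artifact_id": event.get("artifact_id")},
        "startTimeUnixNano": int(event.get("ts", time.time()) * 1e9),
    }

def tail(path: Path):
    """Yield lines appended to the file after startup, like `tail -f`."""
    with path.open() as f:
        f.seek(0, 2)  # start at end of file; only new events become spans
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(0.2)
```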
The Intent lifecycle maps directly to the OpenTelemetry trace model. Signals begin as orphan spans with no trace. When clustered and promoted to an Intent, the Intent's UUID becomes the trace_id, and all child artifacts inherit it.
Each artifact type acquires its trace identity at a specific lifecycle moment. Backfill ensures retroactive coherence.
| Moment | trace_id Behavior |
|---|---|
| Signal created | null — orphan observation, no trace yet |
| Signals clustered | Provisional cluster-{uuid} assigned as temporary trace_id |
| Cluster → Intent | Intent UUID becomes trace_id. All prior signals are backfilled with the new trace_id. |
| Spec under Intent | Inherits trace_id from parent Intent. parent_id = Intent span ID. |
| Contract under Spec | Inherits trace_id from parent Intent. parent_id = Spec span ID. |
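The promotion and backfill steps in the table can be sketched as follows. The dict shapes are illustrative, not the actual Intent schema:

```python
import uuid

def promote_cluster(signals: list[dict]) -> dict:
    """Promote a clustered group of signals to an Intent.

    Per the lifecycle table: the new Intent's UUID becomes the trace_id,
    and every prior signal is backfilled with it, replacing the
    provisional cluster-{uuid} value.
    """
    intent_uuid = str(uuid.uuid4())
    intent = {
        "uuid": intent_uuid,
        "trace_id": intent_uuid,           # Intent UUID becomes trace_id
        "span_id": uuid.uuid4().hex[:16],  # 16-hex-char OTel-style span id
    }
    for signal in signals:
        signal["trace_id"] = intent_uuid   # retroactive backfill
    return intent

def create_spec(intent: dict) -> dict:
    """A spec under an Intent inherits its trace_id; parent_id is the
    Intent's span ID."""
    return {"trace_id": intent["trace_id"], "parent_id": intent["span_id"]}
```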
The observability stack scales in three phases, from zero-cost local development to full multi-team infrastructure.
Phase 1 (local): Start immediately. The otelcol-contrib binary runs locally alongside a Python file-tail adapter (file_tail_adapter.py). Telemetry ships to the Grafana Cloud free tier: 50 GB traces, 10k metric series, 50 GB logs.

Phase 2 (single node): Full self-hosted stack defined in a single docker-compose.yml. Unlimited retention, no vendor lock-in. All four backends (Tempo, Mimir, Loki, Grafana) run as containers with persistent volumes.

Phase 3 (multi-team): Kubernetes-native deployment (k3s + Helm charts) for teams running multiple Intent instances. Kafka fan-out enables cross-repo tracing and multi-tenant dashboards without collector bottlenecks.

Every metric follows the intent.* namespace convention. Counters track cumulative totals, gauges reflect current state, and histograms capture distributions over time.
| Metric | Description | Type |
|---|---|---|
| intent.signals.total | Cumulative count of signals created across all sources | Counter |
| intent.specs.total | Cumulative count of specs created from promoted intents | Counter |
| intent.contracts.total | Cumulative count of contract evaluations (pass + fail) | Counter |
| intent.events.total | Cumulative count of all events emitted to events.jsonl | Counter |
| intent.signals.active | Current number of open signals not yet clustered or archived | Gauge |
| intent.signals.trust_avg | Rolling average trust score across active signals (0.0 – 1.0) | Gauge |
| intent.pipeline.depth | Number of artifacts currently in-flight (signals + specs + contracts) | Gauge |
| intent.cycle_time.signal_to_intent | Time from signal creation to intent promotion | Histogram |
| intent.cycle_time.intent_to_spec | Time from intent promotion to first spec authored | Histogram |
| intent.cycle_time.spec_to_complete | Time from spec creation to all contracts passing | Histogram |
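The three gauges can be derived from current pipeline state on each collection cycle. A minimal sketch, assuming illustrative status and trust fields on signals, and that the spec and contract lists are already filtered to in-flight artifacts:

```python
def gauge_snapshot(signals, open_specs, open_contracts):
    """Compute the three intent.* gauges from current pipeline state."""
    # Active = open signals not yet clustered or archived.
    active = [s for s in signals if s["status"] not in ("clustered", "archived")]
    trust_avg = sum(s["trust"] for s in active) / len(active) if active else 0.0
    return {
        "intent.signals.active": len(active),
        "intent.signals.trust_avg": trust_avg,  # rolling average, 0.0 - 1.0
        "intent.pipeline.depth": len(active) + len(open_specs) + len(open_contracts),
    }
```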
The default dashboard ships with every Intent installation. Three rows: stat counters, cycle-time and trust distribution panels, and a live event stream.
```
┌──────────┬──────────┬──────────┬────────────┐
│ Signals  │ Intents  │ Specs    │ Contracts  │
│ 24       │ 5        │ 3        │ 12✓ 1✗     │
├──────────┴──────────┴──────────┴────────────┤
│ CYCLE TIME        │ TRUST DISTRIBUTION      │
│ Sig→Int:   2.1d   │ L0 ██░░░░ 3             │
│ Int→Spec:  1.4d   │ L2 █████░ 8             │
│ Spec→Done: 0.8d   │ L4 ██░░░░ 2             │
├───────────────────┴─────────────────────────┤
│ EVENT STREAM (live)                         │
│ 10:42 signal.created  SIG-025 source=mcp    │
│ 10:38 contract.passed CON-014 spec=SPEC-003 │
└─────────────────────────────────────────────┘
```
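The cycle-time panel can be backed by histogram queries against the metrics above. Assuming metrics land in Mimir with the usual OTel dot-to-underscore name translation, a median signal-to-intent query might look like:

```promql
# p50 signal -> intent cycle time over the last 7 days
histogram_quantile(0.5,
  sum by (le) (rate(intent_cycle_time_signal_to_intent_bucket[7d]))
)
```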