OBSERVE INFRASTRUCTURE

Observability Stack

OTel-native distributed tracing for the Intent loop. Every signal, spec, contract, and event — queryable, visualizable, alertable.

Architecture

End-to-End Pipeline

Telemetry flows from four source types through a unified OTel Collector into purpose-built storage backends, all visualized in a single Grafana instance.

graph TB
    subgraph Sources["TELEMETRY SOURCES"]
        CLI["CLI Tools"]
        MCP["MCP Servers"]
        GHA["GitHub Actions"]
        CC["Claude Code"]
        JSONL["events.jsonl"]
    end
    subgraph Collect["OTEL COLLECTOR"]
        OTLP["OTLP Receiver<br/>gRPC :4317"]
        PROC["Processors<br/>batch · attributes"]
    end
    subgraph Store["STORAGE BACKENDS"]
        TEMPO["Tempo<br/>Traces"]
        MIMIR["Mimir<br/>Metrics"]
        LOKI["Loki<br/>Logs"]
    end
    subgraph Dash["VISUALIZATION"]
        GRAF["Grafana<br/>Intent Observe"]
    end
    CLI --> JSONL
    MCP --> JSONL
    GHA --> JSONL
    JSONL -->|"File Tail Adapter"| OTLP
    CC -->|"Direct Export"| OTLP
    OTLP --> PROC
    PROC --> TEMPO
    PROC --> MIMIR
    PROC --> LOKI
    TEMPO --> GRAF
    MIMIR --> GRAF
    LOKI --> GRAF
    style Sources fill:#1a1a2e,stroke:#f59e0b,stroke-width:2px
    style Collect fill:#1a1a2e,stroke:#3b82f6,stroke-width:2px
    style Store fill:#1a1a2e,stroke:#10b981,stroke-width:2px
    style Dash fill:#1a1a2e,stroke:#8b5cf6,stroke-width:2px

The File Tail Adapter is a lightweight Python process that watches events.jsonl and converts each line into an OTLP span. Claude Code exports directly via the OTLP gRPC endpoint, bypassing the file layer entirely for lower latency.
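The adapter's core transformation can be sketched with the standard library alone. The field names here (event, artifact_id, source, ts) are assumptions about the events.jsonl schema, and a real adapter would hand the result to an OTLP exporter rather than return a dict:

```python
import json
import time
import uuid

def event_to_span(line: str) -> dict:
    """Convert one events.jsonl line into an OTLP-style span dict.

    Field names here are assumed; the shipped adapter maps whatever
    schema events.jsonl actually uses.
    """
    event = json.loads(line)
    return {
        # Orphan signals carry no trace yet; fall back to an all-zero trace_id.
        "trace_id": event.get("trace_id") or "0" * 32,
        "span_id": uuid.uuid4().hex[:16],  # 64-bit span id, hex-encoded
        "name": event["event"],            # e.g. "signal.created"
        "start_time_unix_nano": int(event.get("ts", time.time()) * 1e9),
        "attributes": {
            "intent.artifact_id": event.get("artifact_id"),
            "intent.source": event.get("source"),
        },
    }

span = event_to_span(
    '{"event": "signal.created", "artifact_id": "SIG-025", "source": "mcp", "ts": 1700000000}'
)
```

Tailing the file is then a loop that seeks to the last read offset, reads any new lines, and exports each converted span over gRPC to :4317.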

Trace Identity Model

An Intent is a Trace. Everything under it is a Span.

The Intent lifecycle maps directly to the OpenTelemetry trace model. Signals begin as orphan spans with no trace. When clustered and promoted to an Intent, the Intent's UUID becomes the trace_id, and all child artifacts inherit it.
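Because an Intent's UUID and an OpenTelemetry trace_id are both 128 bits wide, promotion can map one onto the other with no extra bookkeeping. A minimal sketch (the exact encoding Intent uses is an assumption):

```python
import uuid

def intent_trace_id(intent_uuid: str) -> str:
    # A UUID is 128 bits -- exactly the width of an OTel trace_id --
    # so its hex digits (dashes stripped) serve directly as the trace_id.
    return uuid.UUID(intent_uuid).hex

# Hypothetical Intent UUID, for illustration only.
tid = intent_trace_id("9b2f4c6e-1a3d-4e5f-8a7b-0c1d2e3f4a5b")
```

Every Spec and Contract span created under that Intent then carries the same 32-character trace_id, differing only in span_id and parent_id.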

graph LR
    SIG1["SIG-006<br/>Signal"] --> CLUSTER["Cluster"]
    SIG2["SIG-008<br/>Signal"] --> CLUSTER
    CLUSTER -->|promote| INT["INT-003<br/>Intent"]
    INT --> SPEC["SPEC-004<br/>Spec"]
    SPEC --> CON1["CON-012 ✓"]
    SPEC --> CON2["CON-013 ✗"]
    style SIG1 fill:#1e293b,stroke:#f59e0b
    style SIG2 fill:#1e293b,stroke:#f59e0b
    style INT fill:#1e293b,stroke:#3b82f6
    style SPEC fill:#1e293b,stroke:#3b82f6
    style CON1 fill:#1e293b,stroke:#10b981
    style CON2 fill:#1e293b,stroke:#ef4444

Trace Assignment Rules

Each artifact type acquires its trace identity at a specific lifecycle moment. Backfill ensures retroactive coherence.

| Moment | trace_id Behavior |
|---|---|
| Signal created | null — orphan observation, no trace yet |
| Signals clustered | Provisional cluster-{uuid} assigned as temporary trace_id |
| Cluster → Intent | Intent UUID becomes trace_id. All prior signals are backfilled with the new trace_id. |
| Spec under Intent | Inherits trace_id from parent Intent. parent_id = Intent span ID. |
| Contract under Spec | Inherits trace_id from parent Intent. parent_id = Spec span ID. |

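The Cluster → Intent backfill rule can be sketched in memory. The field names and the in-memory list are assumptions; the real implementation would rewrite the affected records in events.jsonl and re-export them:

```python
import uuid

def backfill_trace_ids(signals: list[dict], cluster_id: str, intent_uuid: str) -> None:
    """On Cluster -> Intent promotion, replace each clustered signal's
    provisional trace_id with the Intent's UUID."""
    new_trace_id = uuid.UUID(intent_uuid).hex
    for signal in signals:
        if signal["trace_id"] == f"cluster-{cluster_id}":
            signal["trace_id"] = new_trace_id

# Hypothetical signal records and IDs, for illustration only.
signals = [
    {"id": "SIG-006", "trace_id": "cluster-abc"},
    {"id": "SIG-008", "trace_id": "cluster-abc"},
    {"id": "SIG-009", "trace_id": None},  # unclustered orphan: left untouched
]
backfill_trace_ids(signals, "abc", "9b2f4c6e-1a3d-4e5f-8a7b-0c1d2e3f4a5b")
```
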
Deployment Phases

Progressive Rollout

The observability stack scales in three phases, from zero-cost local development to full multi-team infrastructure.

Phase 1

Grafana Cloud

$0

Start immediately. OTel Collector binary runs locally alongside a Python file-tail adapter. Telemetry ships to Grafana Cloud free tier — 50GB traces, 10k metrics series, 50GB logs.

otelcol-contrib binary
file_tail_adapter.py
Grafana Cloud free tier

Phase 2

Docker Compose

~$5/month VPS

Full self-hosted stack on a single node. Unlimited retention, no vendor lock-in. All four backends (Tempo, Mimir, Loki, Grafana) run as containers with persistent volumes.

docker-compose.yml
Tempo + Mimir + Loki + Grafana
Persistent volumes for retention

Phase 3

k3s

Multi-team

Kubernetes-native deployment for teams running multiple Intent instances. Kafka fan-out enables cross-repo tracing and multi-tenant dashboards without collector bottlenecks.

k3s + Helm charts
Kafka fan-out for multi-repo
Cross-repo trace correlation

Metrics Model

Instrumentation Schema

Every metric follows the intent.* namespace convention. Counters track cumulative totals, gauges reflect current state, and histograms capture how durations are distributed.

| Metric | Description | Type |
|---|---|---|
| intent.signals.total | Cumulative count of signals created across all sources | Counter |
| intent.specs.total | Cumulative count of specs created from promoted intents | Counter |
| intent.contracts.total | Cumulative count of contract evaluations (pass + fail) | Counter |
| intent.events.total | Cumulative count of all events emitted to events.jsonl | Counter |
| intent.signals.active | Current number of open signals not yet clustered or archived | Gauge |
| intent.signals.trust_avg | Rolling average trust score across active signals (0.0 – 1.0) | Gauge |
| intent.pipeline.depth | Number of artifacts currently in-flight (signals + specs + contracts) | Gauge |
| intent.cycle_time.signal_to_intent | Time from signal creation to intent promotion | Histogram |
| intent.cycle_time.intent_to_spec | Time from intent promotion to first spec authored | Histogram |
| intent.cycle_time.spec_to_complete | Time from spec creation to all contracts passing | Histogram |

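Several of these metrics fall straight out of the event stream. A stdlib sketch of the derivation (event names other than signal.created and contract.passed are assumptions about the schema):

```python
from collections import Counter

def compute_metrics(events: list[dict]) -> dict:
    """Derive a few intent.* metrics from parsed events.jsonl records."""
    counts = Counter(e["event"] for e in events)
    created = counts["signal.created"]
    # "signal.clustered" / "signal.archived" event names are assumed here.
    closed = counts["signal.clustered"] + counts["signal.archived"]
    return {
        "intent.events.total": len(events),     # Counter
        "intent.signals.total": created,        # Counter
        "intent.contracts.total": counts["contract.passed"] + counts["contract.failed"],
        "intent.signals.active": created - closed,  # Gauge
    }

metrics = compute_metrics([
    {"event": "signal.created"},
    {"event": "signal.created"},
    {"event": "signal.clustered"},
    {"event": "contract.passed"},
])
```

In production these values would be emitted through the OTel metrics API and stored in Mimir; the point of the sketch is that every value in the table above is a pure function of events.jsonl.
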
Dashboard Preview

Intent Observe — Grafana Dashboard

The default dashboard ships with every Intent installation. Three rows: stat counters, cycle-time and trust distribution panels, and a live event stream.

Intent Observe — Live
┌──────────┬──────────┬──────────┬────────────┐
│ Signals  │ Intents  │ Specs    │ Contracts  │
│   24     │    5     │    3     │  12✓  1✗   │
├──────────┴──────────┴──────────┴────────────┤
│ CYCLE TIME        │ TRUST DISTRIBUTION      │
│ Sig→Int:  2.1d    │ L0 ██░░░░ 3             │
│ Int→Spec: 1.4d    │ L2 █████░ 8             │
│ Spec→Done: 0.8d   │ L4 ██░░░░ 2             │
├───────────────────┴─────────────────────────┤
│ EVENT STREAM (live)                         │
│ 10:42 signal.created  SIG-025 source=mcp    │
│ 10:38 contract.passed CON-014 spec=SPEC-003 │
└─────────────────────────────────────────────┘