Metrics provides agent observability for Cloud Agents and automated Workflows. Agent observability answers a simple question: what are my AI agents doing, and is that work succeeding? Use Metrics to monitor agent activity, understand human intervention, measure success rates, and evaluate the cost and impact of AI-driven work across your repositories.

What Metrics Show About Your Cloud Agents

Continue’s Metrics give you operational observability for AI agents, similar to how traditional observability tools provide visibility into services, jobs, and pipelines. Instead of logs and latency, agent observability focuses on:
  • Runs and execution frequency
  • Success vs. human intervention
  • Pull request outcomes
  • Cost per run and per workflow
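
Conceptually, each agent run can be modeled as a small record carrying these signals. The shape below is a minimal sketch for illustration; the field names are assumptions, not Continue's actual Metrics schema:

```typescript
// Hypothetical shape of a single agent-run record. Field names are
// illustrative assumptions, not Continue's actual Metrics schema.
interface AgentRun {
  agentName: string;   // which Cloud Agent ran
  workflow?: string;   // automated Workflow that triggered the run, if any
  startedAt: Date;     // when the run began
  outcome: "success" | "intervention" | "failed";
  prStatus?: "open" | "merged" | "closed" | "failed"; // set when a PR was created
  costUsd: number;     // cost attributed to this run
}
```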

Understand when and how often your agents run.
  • See which Cloud Agents are running most often
  • Spot spikes, trends, or recurring failures
  • Monitor automated Workflows in production

Measure whether agents produce usable results.
  • Total runs
  • PR creation rate
  • PR status (open, merged, closed, failed)
  • Success vs. intervention rate
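
To make these rates concrete, here is a minimal sketch of how they could be derived from a list of run records, reusing the hypothetical AgentRun shape above:

```typescript
// Minimal sketch: derive the headline rates from run records.
// Assumes the hypothetical AgentRun shape defined earlier.
function summarize(runs: AgentRun[]) {
  const total = runs.length;
  const withPr = runs.filter((r) => r.prStatus !== undefined).length;
  const succeeded = runs.filter((r) => r.outcome === "success").length;
  const intervened = runs.filter((r) => r.outcome === "intervention").length;
  return {
    totalRuns: total,
    prCreationRate: total ? withPr / total : 0,       // runs that produced a PR
    successRate: total ? succeeded / total : 0,       // runs needing no human help
    interventionRate: total ? intervened / total : 0, // runs a human stepped in on
  };
}
```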

Evaluate automated agent Workflows in production.
  • Which Workflows generate the most work
  • Completion and success rates
  • Signals that a Workflow needs refinement or guardrails
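
Grouping the same records by Workflow shows which automations generate the most work and which ones fall below the fleet's typical success rate, again using the hypothetical shape above:

```typescript
// Sketch: per-Workflow roll-up to spot low-success automations.
// A success rate well below the fleet average is a signal that a
// Workflow needs better rules, tools, or guardrails.
function byWorkflow(runs: AgentRun[]) {
  const groups = new Map<string, AgentRun[]>();
  for (const run of runs) {
    const key = run.workflow ?? "(manual)"; // bucket ad-hoc runs separately
    const bucket = groups.get(key) ?? [];
    bucket.push(run);
    groups.set(key, bucket);
  }
  return [...groups.entries()].map(([workflow, rs]) => ({
    workflow,
    runs: rs.length,
    merged: rs.filter((r) => r.prStatus === "merged").length,
    successRate: rs.filter((r) => r.outcome === "success").length / rs.length,
  }));
}
```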

Why Metrics Matter

Improve Agent Reliability

Identify which Agents need better rules, tools, or prompts.

Measure Automation Value

See how much work your automated Workflows are completing across your repos.

Sharing Metrics

Share a snapshot of your metrics with teammates or stakeholders who don’t have Continue access. Click Share on the Metrics page to generate a unique URL. The link:
  • Captures a point-in-time snapshot of your current metrics view
  • Works without authentication, so recipients don’t need a Continue account
  • Expires after 30 days by default
  • Shows summary stats, activity grid, and charts in read-only mode
Shared links expose metrics data to anyone with the URL. Only share with people you trust.