AI Agents Dashboard

Learn how to use Sentry's AI Agents Dashboard.

Once you've configured the Sentry SDK for your AI agent project, you'll start receiving data in the Sentry AI Agents Insights dashboard.

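If you haven't set this up yet, the key point is that tracing must be enabled at SDK initialization so agent activity is sent as spans. A minimal Python sketch (the DSN is a placeholder, and which AI agent integration you enable depends on your framework, so follow the setup guide for your SDK and stack):

```python
import sentry_sdk

sentry_sdk.init(
    # Placeholder DSN; use your own project's DSN.
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    # Tracing must be enabled so agent runs, LLM calls, and tool
    # calls are captured as spans and appear in the dashboard.
    traces_sample_rate=1.0,
)
```
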
The main dashboard provides a comprehensive view of all your AI agent activities, performance metrics, and recent executions.

AI Agents Monitoring Overview

The dashboard displays several key widgets:

  • Traffic: Shows agent runs over time, error rates, and releases to track overall activity and health
  • Duration: Displays response times for your agent executions to monitor performance
  • Recommended Issues: Highlights recent errors and problems that need attention, including agent failures and exceptions
  • LLM Generations: Shows the number of language model calls with breakdowns by specific models (claude, 4o-mini, etc.)
  • Tool Usage: Shows which tools your agents use most frequently
  • Token Usage: Tracks token consumption over time with a breakdown by model (see the instrumentation sketch after the tables below)

Underneath these widgets are tables that allow you to view data in more detail:

  • Traces: Recent agent runs with duration, errors, number of LLM and tool calls, and token usage
  • Models: Traffic, duration, token usage, and errors grouped by model
  • Tools: Number of requests and typical durations grouped by tool

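The numbers in these widgets and tables come from spans the SDK records for each agent run. With a supported framework integration this happens automatically; if you instrument by hand, the rough shape (assuming a recent Python SDK, with op and attribute names taken from Sentry's gen_ai span conventions, so verify them against the manual instrumentation docs) looks like this:

```python
import sentry_sdk

# Sketch only: assumes sentry_sdk.init(...) with tracing enabled has
# already run, and that these gen_ai op/attribute names match your
# SDK version's conventions.
with sentry_sdk.start_span(op="gen_ai.invoke_agent", name="invoke_agent Support Agent") as agent_span:
    agent_span.set_data("gen_ai.agent.name", "Support Agent")

    # One span per LLM call; the token counts feed the LLM Generations
    # and Token Usage widgets.
    with sentry_sdk.start_span(op="gen_ai.chat", name="chat gpt-4o-mini") as llm_span:
        llm_span.set_data("gen_ai.request.model", "gpt-4o-mini")
        # ... call your model here ...
        llm_span.set_data("gen_ai.usage.input_tokens", 412)
        llm_span.set_data("gen_ai.usage.output_tokens", 98)

    # One span per tool call; these feed the Tool Usage widget and table.
    with sentry_sdk.start_span(op="gen_ai.execute_tool", name="execute_tool get_weather") as tool_span:
        tool_span.set_data("gen_ai.tool.name", "get_weather")
        # ... run the tool here ...
```
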
AI Agent Trace Table

Click on any trace to open the abbreviated trace view in a drawer, which shows the essential details:

AI Agent Abbreviated Trace View

  • Agent Invocations: Each agent execution and nested calls
  • LLM Generations: Language model interactions with token breakdown
  • Tool Calls: External API calls with inputs and outputs
  • Handoffs: Agent-to-agent transitions and human handoffs
  • Critical Timing: Duration metrics for each step
  • Errors: Any failures that occurred

Click "View in full trace" for comprehensive debugging details; the full trace shows the complete agent workflow with full context.

AI Agent Detailed Trace View

This detailed view reveals:

  • Complete Agent Flow: Every step from initial request to final response
  • Tool Calls: When and how the agent used external tools or APIs
  • Model Interactions: All LLM calls with prompts and responses (if sending PII is enabled; see the configuration note after this list)
  • Timing Breakdown: Duration of each step in the agent workflow
  • Error Context: Detailed information about any failures or issues

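One configuration note on the PII point above: prompt and response contents are only recorded on spans when the SDK is allowed to send them. A minimal sketch of that setting in the Python SDK (some integrations also expose their own option for prompt recording, so check your integration's docs):

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,
    # Allow prompts and responses (treated as PII) to be attached to
    # LLM spans so they show up in the trace views described above.
    send_default_pii=True,
)
```
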
When your AI agents are part of larger applications (like web servers or APIs), the trace view will include context from other Sentry integrations, giving you a complete picture of how your agents fit into your overall application architecture.
