Set Up
Learn how to set up Sentry AI Agent Monitoring
Sentry AI Agent Monitoring helps you track and debug AI agent applications using our supported SDKs and integrations. Monitor your complete agent workflows from user interaction to final response, including tool calls, model interactions, and custom logic.
To start sending AI agent data to Sentry, make sure you've created a Sentry project for your AI-enabled repository and follow one of the guides below:
The Sentry JavaScript SDK supports AI agent monitoring through the Vercel AI integration, which works with Node.js and Bun runtimes. This integration automatically captures spans for your AI agent workflows using the AI SDK's built-in telemetry.
Node.js
Next.js
SvelteKit
Nuxt
Astro
Remix
SolidStart
Express
Fastify
Nest.js
Hapi
Koa
Connect
Hono
Bun
AWS Lambda
Azure Functions
Google Cloud Functions
Electron
For Next.js applications, the Vercel AI integration can only be added to `sentry.server.config.ts` (server-side configuration). It is not supported in edge configurations (`sentry.edge.config.ts`) or browser configurations (`instrumentation-client.ts`).
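For a Next.js app, that means enabling the integration only in the server config file. A minimal sketch of what `sentry.server.config.ts` might look like (the DSN placeholder and sample rate are assumptions — use your project's values):

```typescript
import * as Sentry from "@sentry/nextjs";

// sentry.server.config.ts — runs only in the Node.js server runtime,
// which is where the Vercel AI integration is supported.
Sentry.init({
  dsn: "YOUR_DSN", // placeholder — replace with your project's DSN
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.vercelAIIntegration({
      recordInputs: true,
      recordOutputs: true,
    }),
  ],
});
```

The edge and client config files should not include `vercelAIIntegration`; they can keep their usual `Sentry.init` calls.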
```javascript
import * as Sentry from "@sentry/node";

// Sentry init needs to be above everything else
Sentry.init({
  dsn: "YOUR_DSN",
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.vercelAIIntegration({
      recordInputs: true,
      recordOutputs: true,
    }),
  ],
});
```
```javascript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Your AI agent function
async function aiAgent(userQuery) {
  const result = await generateText({
    model: openai("gpt-4o"),
    prompt: userQuery,
    experimental_telemetry: {
      isEnabled: true,
      functionId: "ai-agent-main",
    },
  });
  return result.text;
}
```
The Sentry Python SDK supports AI agent monitoring for the OpenAI Agents SDK through the OpenAIAgentsIntegration, which automatically captures spans for agent runs, model calls, and tool executions.
```python
import asyncio

import agents
import sentry_sdk
from sentry_sdk.integrations.openai_agents import OpenAIAgentsIntegration

sentry_sdk.init(
    dsn="YOUR_DSN",
    traces_sample_rate=1.0,
    send_default_pii=True,  # Include LLM inputs/outputs
    integrations=[
        OpenAIAgentsIntegration(),
    ],
)

# Create your AI agent
my_agent = agents.Agent(
    name="My Agent",
    instructions="You are a helpful assistant.",
    model="gpt-4o-mini",
)

# Run your AI agent (Runner.run is a coroutine, so it needs an event loop)
async def main(user_query):
    result = await agents.Runner.run(
        my_agent,
        input=user_query,
    )
    return result.final_output

asyncio.run(main("What can you help me with?"))
```
Don't see your platform?
We'll be adding AI agent integrations continuously. You can also instrument AI agents manually by following our manual instrumentation guide.
Our documentation is open source and available on GitHub. Your contributions are welcome, whether fixing a typo (drat!) or suggesting an update ("yeah, this would be better").