Learn how to manually instrument your code to use Sentry's Agents module.
With Sentry AI Agent Monitoring, you can monitor and debug your AI systems with full-stack context. You'll be able to track key insights like token usage, latency, tool usage, and error rates. AI Agent Monitoring data will be fully connected to your other Sentry data like logs, errors, and traces.
As a prerequisite to setting up AI Agent Monitoring with JavaScript, you'll need to first set up tracing. Once this is done, the JavaScript SDK will automatically instrument AI agents created with supported libraries. If that doesn't fit your use case, you can use custom instrumentation described below.
The JavaScript SDK supports automatic instrumentation for some AI libraries. We recommend adding their integrations to your Sentry configuration to automatically capture spans for AI agents.
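Enabling one of these integrations is a one-line addition to your Sentry configuration. The sketch below uses `vercelAIIntegration` as an example; the exact integration names available depend on your SDK version, so check your SDK's exports:

```javascript
const Sentry = require("@sentry/node");

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  // Tracing must be set up for AI Agent Monitoring data to be captured
  tracesSampleRate: 1.0,
  // Add the integration for your AI library; available names vary by SDK version
  integrations: [Sentry.vercelAIIntegration()],
});
```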
If you're using a library that Sentry does not automatically instrument, you can manually instrument your code to capture spans. For your AI agent data to show up in AI Agents Insights, the spans you create must use well-defined names and data attributes, described below.
| Data Attribute | Type | Requirement Level | Description | Example |
| --- | --- | --- | --- | --- |
| `gen_ai.response.text` | string | optional | The text representation of the model's responses. [0] | `"[\"The weather in Paris is rainy\", \"The weather in London is sunny\"]"` |
| `gen_ai.usage.input_tokens.cached` | int | optional | The number of cached tokens used in the AI input (prompt). | `50` |
| `gen_ai.usage.input_tokens` | int | optional | The number of tokens used in the AI input (prompt). | `10` |
| `gen_ai.usage.output_tokens.reasoning` | int | optional | The number of tokens used for reasoning. | `30` |
| `gen_ai.usage.output_tokens` | int | optional | The number of tokens used in the AI response. | `100` |
| `gen_ai.usage.total_tokens` | int | optional | The total number of tokens used to process the prompt (input and output). | `190` |
[0]: Span attributes only allow primitive data types (like int, float, boolean, string). This means you need to use a stringified version of a list of dictionaries: do NOT set `[{"foo": "bar"}]`, but rather the string `"[{\"foo\": \"bar\"}]"`.

[1]: Each message item uses the format `{role: "", content: ""}`. The role can be `"user"`, `"assistant"`, or `"system"`. The content can be either a string or a list of dictionaries.
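Putting the two footnotes together, a messages attribute is built by assembling `{role, content}` items and serializing them before setting the attribute. A minimal sketch:

```javascript
// Messages follow the { role, content } format described above.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What is the weather in Paris?" },
];

// Span attributes only accept primitives, so serialize the list to a JSON string.
const messagesAttribute = JSON.stringify(messages);
console.log(typeof messagesAttribute); // "string"
```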
```javascript
const messages = [{ role: "user", content: "Tell me a joke" }];

await Sentry.startSpan(
  {
    op: "gen_ai.request",
    name: "request o3-mini",
    attributes: {
      "gen_ai.request.model": "o3-mini",
      "gen_ai.request.messages": JSON.stringify(messages),
    },
  },
  async (span) => {
    // Call your LLM here
    const result = await client.chat.completions.create({
      model: "o3-mini",
      messages,
    });

    span.setAttribute(
      "gen_ai.response.text",
      JSON.stringify([result.choices[0].message.content]),
    );

    // Set token usage
    span.setAttribute("gen_ai.usage.input_tokens", result.usage.prompt_tokens);
    span.setAttribute(
      "gen_ai.usage.output_tokens",
      result.usage.completion_tokens,
    );
  },
);
```
```javascript
await Sentry.startSpan(
  {
    op: "gen_ai.invoke_agent",
    name: "invoke_agent Weather Agent",
    attributes: {
      "gen_ai.request.model": "o3-mini",
      "gen_ai.agent.name": "Weather Agent",
    },
  },
  async (span) => {
    // Run the agent
    const result = await myAgent.run();

    span.setAttribute("gen_ai.response.text", JSON.stringify([result.output]));

    // Set token usage
    span.setAttribute("gen_ai.usage.input_tokens", result.usage.inputTokens);
    span.setAttribute("gen_ai.usage.output_tokens", result.usage.outputTokens);
  },
);
```
```javascript
await Sentry.startSpan(
  {
    op: "gen_ai.execute_tool",
    name: "execute_tool get_weather",
    attributes: {
      "gen_ai.tool.name": "get_weather",
      "gen_ai.tool.input": JSON.stringify({ location: "Paris" }),
    },
  },
  async (span) => {
    // Call the tool
    const result = await getWeather({ location: "Paris" });
    span.setAttribute("gen_ai.tool.output", JSON.stringify(result));
  },
);
```
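The `getWeather` function in the snippet above stands in for your own tool implementation. A hypothetical stand-in, useful for trying the span code locally, might look like:

```javascript
// Hypothetical tool implementation; a real one would call a weather API.
async function getWeather({ location }) {
  return { location, forecast: "rainy" };
}

getWeather({ location: "Paris" }).then((result) => {
  console.log(JSON.stringify(result)); // {"location":"Paris","forecast":"rainy"}
});
```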
This span marks the transition of control from one agent to another, typically when the current agent determines another agent is better suited to handle the task.
Handoff span attributes
A span that describes the handoff from one agent to another.
The span's `op` MUST be `"gen_ai.handoff"`.
The span's `name` SHOULD be `"handoff from {from_agent} to {to_agent}"`.
```javascript
await Sentry.startSpan(
  {
    op: "gen_ai.handoff",
    name: "handoff from Weather Agent to Travel Agent",
  },
  () => {}, // Handoff span just marks the transition
);

await Sentry.startSpan(
  { op: "gen_ai.invoke_agent", name: "invoke_agent Travel Agent" },
  async () => {
    // Run the target agent here
  },
);
```