---
title: "AI Agent Monitoring"
description: "Learn how to manually instrument AI agents in browser applications."
url: https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser/
---

# Browser AI Monitoring | Sentry for Elysia

With [Sentry AI Agent Monitoring](https://docs.sentry.io/ai/monitoring/agents/dashboards.md), you can monitor and debug your AI systems with full-stack context. You'll be able to track key insights like token usage, latency, tool usage, and error rates. AI Agent Monitoring data will be fully connected to your other Sentry data like logs, errors, and traces.

## [Prerequisites](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#prerequisites)

Before setting up AI Agent Monitoring, ensure you have [tracing enabled](https://docs.sentry.io/platforms/javascript/guides/elysia/tracing.md) in your Sentry configuration.
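If tracing isn't enabled yet, a minimal setup sketch looks like the following (the `___SDK_PACKAGE___` and `___PUBLIC_DSN___` placeholders stand in for your SDK package and project DSN; manually created spans only require a trace sample rate):

```javascript
import * as Sentry from "___SDK_PACKAGE___";

Sentry.init({
  dsn: "___PUBLIC_DSN___",
  // Tracing must be enabled for AI Agent Monitoring spans to be sent.
  // 1.0 captures all transactions; lower this value in production.
  tracesSampleRate: 1.0,
});
```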

**Browser applications require manual instrumentation.** Unlike Node.js applications, the JavaScript SDK does not provide automatic instrumentation for AI libraries in the browser.

## [Using Integration Helpers](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#using-integration-helpers)

For supported AI libraries, Sentry provides manual instrumentation helpers that simplify span creation. These helpers handle the complexity of creating properly structured spans with the correct attributes.

**Supported libraries:**

* [OpenAI](https://docs.sentry.io/platforms/javascript/guides/elysia/configuration/integrations/openai.md)
* [Anthropic](https://docs.sentry.io/platforms/javascript/guides/elysia/configuration/integrations/anthropic.md)
* [Google Gen AI SDK](https://docs.sentry.io/platforms/javascript/guides/elysia/configuration/integrations/google-genai.md)
* [LangChain](https://docs.sentry.io/platforms/javascript/guides/elysia/configuration/integrations/langchain.md)
* [LangGraph](https://docs.sentry.io/platforms/javascript/guides/elysia/configuration/integrations/langgraph.md)

Each integration page includes a manual-instrumentation example with options like `recordInputs` and `recordOutputs`.

```javascript
import * as Sentry from "___SDK_PACKAGE___";
import OpenAI from "openai";

const client = Sentry.instrumentOpenAiClient(
  new OpenAI({ apiKey: "...", dangerouslyAllowBrowser: true }),
  {
    recordInputs: true,
    recordOutputs: true,
  },
);

// All calls are now instrumented
const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
```

## [Manual Span Creation](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#manual-span-creation)

If you're using a library that Sentry doesn't provide helpers for, you can manually create spans. For your data to show up in the [AI Agents Dashboards](https://sentry.io/orgredirect/organizations/:orgslug/dashboards/?filter=onlyPrebuilt&query=agents&sort=mostPopular), spans must have well-defined names and data attributes.


### [Invoke Agent Span](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#invoke-agent-span)

This span represents the execution of an AI agent, capturing the full lifecycle from receiving a task to producing a final response.

**Key attributes:**

* `gen_ai.agent.name` — The agent's name (e.g., "Weather Agent")
* `gen_ai.request.model` — The underlying model used
* `gen_ai.output.messages` — The agent's final output
* `gen_ai.usage.input_tokens` / `output_tokens` — Total token counts

```javascript
// Example agent implementation for demonstration
const myAgent = {
  name: "Weather Agent",
  modelProvider: "openai",
  model: "gpt-4o-mini",
  async run() {
    // Agent implementation
    return {
      output: "The weather in Paris is sunny",
      usage: {
        inputTokens: 15,
        outputTokens: 8,
      },
    };
  },
};

await Sentry.startSpan(
  {
    op: "gen_ai.invoke_agent",
    name: `invoke_agent ${myAgent.name}`,
    attributes: {
      "gen_ai.operation.name": "invoke_agent",
      "gen_ai.request.model": myAgent.model,
      "gen_ai.agent.name": myAgent.name,
    },
  },
  async (span) => {
    // run the agent
    const result = await myAgent.run();

    // set agent response
    span.setAttribute(
      "gen_ai.output.messages",
      JSON.stringify([
        {
          role: "assistant",
          parts: [{ type: "text", content: result.output }],
        },
      ]),
    );

    // set token usage
    span.setAttribute(
      "gen_ai.usage.input_tokens",
      result.usage.inputTokens,
    );
    span.setAttribute(
      "gen_ai.usage.output_tokens",
      result.usage.outputTokens,
    );

    return result;
  },
);
```

**All Invoke Agent span attributes**

Describes AI agent invocation.

* The span `op` MUST be `"gen_ai.invoke_agent"`.
* The span `name` SHOULD be `"invoke_agent {gen_ai.agent.name}"`.
* The `gen_ai.operation.name` attribute MUST be `"invoke_agent"`.
* The `gen_ai.agent.name` attribute SHOULD be set to the agent's name. (e.g. `"Weather Agent"`)
* If relevant, `gen_ai.pipeline.name` SHOULD be set to the name of the AI workflow or pipeline the agent belongs to.
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#common-span-attributes) SHOULD be set (all `required` common attributes MUST be set).

Additional attributes on the span:

### [Request Attributes](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#request-attributes)

| Data Attribute                   | Type   | Requirement Level | Description                                                                                                     | Example                                                               |
| -------------------------------- | ------ | ----------------- | --------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
| `gen_ai.input.messages`          | string | optional          | List of message objects given to the agent. **\[0]**, **\[1]**                                                  | `'[{"role": "user", "parts": [{"type": "text", "content": "..."}]}]'` |
| `gen_ai.tool.definitions`        | string | optional          | List of objects describing the available tools. **\[0]**                                                        | `'[{"name": "random_number", "description": "..."}]'`                 |
| `gen_ai.system_instructions`     | string | optional          | The system instructions passed to the model.                                                                    | `"You are a helpful assistant."`                                      |
| `gen_ai.pipeline.name`           | string | optional          | The name of the AI workflow or pipeline the agent belongs to.                                                   | `"weather-pipeline"`                                                  |
| `gen_ai.request.messages`        | string | optional          | **Deprecated.** Use `gen_ai.input.messages` instead. List of message objects given to the agent. **\[0]**       | `'[{"role": "system", "content": "..."}]'`                            |
| `gen_ai.request.available_tools` | string | optional          | **Deprecated.** Use `gen_ai.tool.definitions` instead. List of objects describing the available tools. **\[0]** | `'[{"name": "random_number", "description": "..."}]'`                 |

### [Response Attributes](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#response-attributes)

| Data Attribute               | Type   | Requirement Level | Description                                                                                            | Example                                                                      |
| ---------------------------- | ------ | ----------------- | ------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------- |
| `gen_ai.output.messages`     | string | optional          | Stringified array of message objects representing the agent's output. **\[0]**, **\[1]**               | `'[{"role": "assistant", "parts": [{"type": "text", "content": "..."}]}]'`   |
| `gen_ai.response.text`       | string | optional          | **Deprecated.** Use `gen_ai.output.messages` instead. The text representation of the agent's response. | `"The weather in Paris is rainy"`                                            |
| `gen_ai.response.tool_calls` | string | optional          | **Deprecated.** Use `gen_ai.output.messages` instead. The tool calls in the model's response. **\[0]** | `'[{"name": "random_number", "type": "function_call", "arguments": "..."}]'` |

### [Token Usage](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#token-usage)

| Data Attribute                          | Type | Requirement Level | Description                                                                           | Example |
| --------------------------------------- | ---- | ----------------- | ------------------------------------------------------------------------------------- | ------- |
| `gen_ai.usage.input_tokens`             | int  | optional          | The number of tokens used in the AI input (prompt), including cached tokens. **\[2]** | `60`    |
| `gen_ai.usage.input_tokens.cached`      | int  | optional          | The number of cached tokens used in the AI input (prompt).                            | `50`    |
| `gen_ai.usage.input_tokens.cache_write` | int  | optional          | Tokens written to cache when processing input.                                        | `20`    |
| `gen_ai.usage.output_tokens`            | int  | optional          | The number of tokens used in the AI output, including reasoning tokens. **\[3]**      | `130`   |
| `gen_ai.usage.output_tokens.reasoning`  | int  | optional          | The number of tokens used for reasoning.                                              | `30`    |
| `gen_ai.usage.total_tokens`             | int  | optional          | The sum of `gen_ai.usage.input_tokens` and `gen_ai.usage.output_tokens`.              | `190`   |

### [Cost](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#cost)

| Data Attribute              | Type   | Requirement Level | Description                                       | Example |
| --------------------------- | ------ | ----------------- | ------------------------------------------------- | ------- |
| `gen_ai.cost.input_tokens`  | double | optional          | Cost of input tokens in USD (without cached).     | `0.005` |
| `gen_ai.cost.output_tokens` | double | optional          | Cost of output tokens in USD (without reasoning). | `0.015` |
| `gen_ai.cost.total_tokens`  | double | optional          | Total cost for tokens used.                       | `0.020` |

* **\[0]:** Span attributes only allow primitive data types. This means you need to use a stringified version of a list of dictionaries. Do NOT set `[{"foo": "bar"}]` but rather the string `'[{"foo": "bar"}]'` (must be parsable JSON).
* **\[1]:** Messages use the format `{role, parts}` where `parts` is an array of typed objects: `[{"role": "user", "parts": [{"type": "text", "content": "..."}]}]`. The `role` must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For backwards compatibility, the legacy format `{role, content}` is also accepted.
* **\[2]:** Cached tokens are a subset of input tokens; `gen_ai.usage.input_tokens` includes `gen_ai.usage.input_tokens.cached`.
* **\[3]:** Reasoning tokens are a subset of output tokens; `gen_ai.usage.output_tokens` includes `gen_ai.usage.output_tokens.reasoning`.
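As a worked example of the token and cost conventions above, the cost attributes can be derived from the usage counts. The per-million-token prices below are made up for illustration; real pricing depends on your provider and model, and `costAttributes` is a hypothetical helper, not an SDK API:

```javascript
// Hypothetical prices in USD per 1M tokens; substitute your provider's rates.
const PRICE_PER_M_INPUT = 2.5;
const PRICE_PER_M_OUTPUT = 10.0;

function costAttributes(usage) {
  const inputCost = (usage.inputTokens / 1e6) * PRICE_PER_M_INPUT;
  const outputCost = (usage.outputTokens / 1e6) * PRICE_PER_M_OUTPUT;
  return {
    // Per the conventions above: input includes cached tokens, output
    // includes reasoning tokens, and total is the sum of the two.
    "gen_ai.usage.total_tokens": usage.inputTokens + usage.outputTokens,
    "gen_ai.cost.input_tokens": inputCost,
    "gen_ai.cost.output_tokens": outputCost,
    "gen_ai.cost.total_tokens": inputCost + outputCost,
  };
}
```

The returned object can be passed directly to `span.setAttributes(...)` alongside the usage counts.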

### [AI Client Span](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#ai-client-span)

This span represents a chat or completion request to an LLM, capturing the messages, model configuration, and response.

**Key attributes:**

* `gen_ai.request.model` — The model name (required)
* `gen_ai.input.messages` — Chat messages sent to the LLM
* `gen_ai.request.max_tokens` — Token limit for the response
* `gen_ai.output.messages` — The model's response

```javascript
// Example AI implementation for demonstration
const myAi = {
  modelProvider: "openai",
  model: "gpt-4o-mini",
  modelConfig: {
    temperature: 0.1,
    presencePenalty: 0.5,
  },
  async createMessage(messages, maxTokens) {
    // AI implementation
    return {
      output:
        "Here's a joke: Why don't scientists trust atoms? Because they make up everything!",
      usage: {
        inputTokens: 12,
        outputTokens: 24,
      },
    };
  },
};

await Sentry.startSpan(
  {
    op: "gen_ai.chat",
    name: `chat ${myAi.model}`,
    attributes: {
      "gen_ai.operation.name": "chat",
      "gen_ai.request.model": myAi.model,
    },
  },
  async (span) => {
    // set up messages for LLM
    const maxTokens = 1024;
    const messages = [
      {
        role: "user",
        parts: [{ type: "text", content: "Tell me a joke" }],
      },
    ];

    // set chat request data
    span.setAttribute("gen_ai.input.messages", JSON.stringify(messages));
    span.setAttribute("gen_ai.request.max_tokens", maxTokens);
    span.setAttribute(
      "gen_ai.request.temperature",
      myAi.modelConfig.temperature,
    );

    // ask the LLM
    const result = await myAi.createMessage(messages, maxTokens);

    // set response
    span.setAttribute(
      "gen_ai.output.messages",
      JSON.stringify([
        {
          role: "assistant",
          parts: [{ type: "text", content: result.output }],
        },
      ]),
    );

    // set token usage
    span.setAttribute(
      "gen_ai.usage.input_tokens",
      result.usage.inputTokens,
    );
    span.setAttribute(
      "gen_ai.usage.output_tokens",
      result.usage.outputTokens,
    );

    return result;
  },
);
```

**All AI Client span attributes**

* The span `op` MUST be `"gen_ai.{gen_ai.operation.name}"`. (e.g. `"gen_ai.chat"`)
* The span `name` SHOULD be `"{gen_ai.operation.name} {gen_ai.request.model}"`. (e.g. `"chat o3-mini"`)
* The `gen_ai.request.model` attribute MUST be the requested model. (e.g. `"o3-mini"`)
* The `gen_ai.response.model` attribute MUST be the concrete model that responded. (e.g. `"gpt-4o-2024-08-06"`)
* If the request originates from an agent, `gen_ai.agent.name` SHOULD be set to the agent's name. (e.g. `"Weather Agent"`)
* If relevant, `gen_ai.pipeline.name` SHOULD be set to the name of the AI workflow or pipeline. (e.g. `"weather-pipeline"`)
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#common-span-attributes) SHOULD be set (all `required` common attributes MUST be set).

### [Request Attributes](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#request-attributes)

| Data Attribute                     | Type   | Requirement Level | Description                                                                                                     | Example                                                               |
| ---------------------------------- | ------ | ----------------- | --------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
| `gen_ai.input.messages`            | string | optional          | List of message objects sent to the LLM. **\[0]**, **\[1]**                                                     | `'[{"role": "user", "parts": [{"type": "text", "content": "..."}]}]'` |
| `gen_ai.tool.definitions`          | string | optional          | List of objects describing the available tools. **\[0]**                                                        | `'[{"name": "random_number", "description": "..."}]'`                 |
| `gen_ai.system_instructions`       | string | optional          | The system instructions passed to the model.                                                                    | `"You are a helpful assistant."`                                      |
| `gen_ai.request.frequency_penalty` | float  | optional          | Model configuration parameter.                                                                                  | `0.5`                                                                 |
| `gen_ai.request.max_tokens`        | int    | optional          | Model configuration parameter.                                                                                  | `500`                                                                 |
| `gen_ai.request.seed`              | string | optional          | Seed for reproducible outputs.                                                                                  | `"12345"`                                                             |
| `gen_ai.request.temperature`       | float  | optional          | Model configuration parameter.                                                                                  | `0.1`                                                                 |
| `gen_ai.request.top_k`             | int    | optional          | Limits model to K most likely next tokens.                                                                      | `40`                                                                  |
| `gen_ai.request.top_p`             | float  | optional          | Model configuration parameter.                                                                                  | `0.7`                                                                 |
| `gen_ai.request.presence_penalty`  | float  | optional          | Model configuration parameter.                                                                                  | `0.5`                                                                 |
| `gen_ai.request.messages`          | string | optional          | **Deprecated.** Use `gen_ai.input.messages` instead. List of message objects sent to the LLM. **\[0]**          | `'[{"role": "system", "content": "..."}]'`                            |
| `gen_ai.request.available_tools`   | string | optional          | **Deprecated.** Use `gen_ai.tool.definitions` instead. List of objects describing the available tools. **\[0]** | `'[{"name": "random_number", "description": "..."}]'`                 |

### [Response Attributes](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#response-attributes)

| Data Attribute                        | Type    | Requirement Level | Description                                                                                             | Example                                                                      |
| ------------------------------------- | ------- | ----------------- | ------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- |
| `gen_ai.response.model`               | string  | required          | The concrete model that responded (may differ from `gen_ai.request.model`).                             | `"gpt-4o-2024-08-06"`                                                        |
| `gen_ai.output.messages`              | string  | optional          | Stringified array of message objects representing the model's output. **\[0]**, **\[1]**                | `'[{"role": "assistant", "parts": [{"type": "text", "content": "..."}]}]'`   |
| `gen_ai.response.finish_reasons`      | string  | optional          | Stringified array of reasons the model stopped generating. **\[0]**                                     | `'["stop"]'`                                                                 |
| `gen_ai.response.id`                  | string  | optional          | Unique identifier for the completion.                                                                   | `"chatcmpl-abc123"`                                                          |
| `gen_ai.response.streaming`           | boolean | optional          | Whether the response was streamed.                                                                      | `true`                                                                       |
| `gen_ai.response.time_to_first_token` | double  | optional          | Seconds until first response chunk in streaming.                                                        | `0.5`                                                                        |
| `gen_ai.response.tokens_per_second`   | double  | optional          | Output tokens per second throughput.                                                                    | `50.0`                                                                       |
| `gen_ai.response.text`                | string  | optional          | **Deprecated.** Use `gen_ai.output.messages` instead. The text representation of the model's response.  | `"The weather in Paris is rainy"`                                            |
| `gen_ai.response.tool_calls`          | string  | optional          | **Deprecated.** Use `gen_ai.output.messages` instead. The tool calls in the model's response. **\[0]**  | `'[{"name": "random_number", "type": "function_call", "arguments": "..."}]'` |

### [Token Usage](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#token-usage)

| Data Attribute                          | Type | Requirement Level | Description                                                                           | Example |
| --------------------------------------- | ---- | ----------------- | ------------------------------------------------------------------------------------- | ------- |
| `gen_ai.usage.input_tokens`             | int  | optional          | The number of tokens used in the AI input (prompt), including cached tokens. **\[2]** | `60`    |
| `gen_ai.usage.input_tokens.cached`      | int  | optional          | The number of cached tokens used in the AI input (prompt).                            | `50`    |
| `gen_ai.usage.input_tokens.cache_write` | int  | optional          | Tokens written to cache when processing input.                                        | `20`    |
| `gen_ai.usage.output_tokens`            | int  | optional          | The number of tokens used in the AI output, including reasoning tokens. **\[3]**      | `130`   |
| `gen_ai.usage.output_tokens.reasoning`  | int  | optional          | The number of tokens used for reasoning.                                              | `30`    |
| `gen_ai.usage.total_tokens`             | int  | optional          | The sum of `gen_ai.usage.input_tokens` and `gen_ai.usage.output_tokens`.              | `190`   |

### [Cost](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#cost)

| Data Attribute              | Type   | Requirement Level | Description                                       | Example |
| --------------------------- | ------ | ----------------- | ------------------------------------------------- | ------- |
| `gen_ai.cost.input_tokens`  | double | optional          | Cost of input tokens in USD (without cached).     | `0.005` |
| `gen_ai.cost.output_tokens` | double | optional          | Cost of output tokens in USD (without reasoning). | `0.015` |
| `gen_ai.cost.total_tokens`  | double | optional          | Total cost for tokens used.                       | `0.020` |

* **\[0]:** Span attributes only allow primitive data types. This means you need to use a stringified version of a list of dictionaries. Do NOT set `[{"foo": "bar"}]` but rather the string `'[{"foo": "bar"}]'` (must be parsable JSON).
* **\[1]:** Messages use the format `{role, parts}` where `parts` is an array of typed objects: `[{"role": "user", "parts": [{"type": "text", "content": "..."}]}]`. The `role` must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For backwards compatibility, the legacy format `{role, content}` is also accepted.
* **\[2]:** Cached tokens are a subset of input tokens; `gen_ai.usage.input_tokens` includes `gen_ai.usage.input_tokens.cached`.
* **\[3]:** Reasoning tokens are a subset of output tokens; `gen_ai.usage.output_tokens` includes `gen_ai.usage.output_tokens.reasoning`.

### [Execute Tool Span](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#execute-tool-span)

This span represents the execution of a tool or function that was requested by an AI model, including the input arguments and resulting output.

**Key attributes:**

* `gen_ai.tool.name` — The tool's name (e.g., "random\_number")
* `gen_ai.tool.description` — Description of what the tool does
* `gen_ai.tool.call.arguments` — The arguments passed to the tool
* `gen_ai.tool.call.result` — The tool's return value

```javascript
// Example AI implementation for demonstration
const myAi = {
  modelProvider: "openai",
  model: "gpt-4o-mini",
  async createMessage(messages, maxTokens) {
    // AI implementation that returns tool calls
    return {
      toolCalls: [
        {
          name: "random_number",
          description: "Generate a random number",
          arguments: { max: 10 },
        },
      ],
    };
  },
};

const messages = [
  { role: "user", content: "Generate a random number between 0 and 10" },
];

// First, make the AI call
const result = await Sentry.startSpan(
  { op: "gen_ai.chat", name: `chat ${myAi.model}` },
  () => myAi.createMessage(messages, 1024),
);

// Check if we should call a tool
if (result.toolCalls && result.toolCalls.length > 0) {
  const tool = result.toolCalls[0];

  await Sentry.startSpan(
    {
      op: "gen_ai.execute_tool",
      name: `execute_tool ${tool.name}`,
      attributes: {
        "gen_ai.operation.name": "execute_tool",
        "gen_ai.tool.type": "function",
        "gen_ai.tool.name": tool.name,
        "gen_ai.tool.description": tool.description,
        "gen_ai.tool.call.arguments": JSON.stringify(tool.arguments),
      },
    },
    async (span) => {
      // run tool (example implementation)
      const toolResult = Math.floor(Math.random() * tool.arguments.max);

      // set tool result
      span.setAttribute("gen_ai.tool.call.result", String(toolResult));

      return toolResult;
    },
  );
}
```

**All Execute Tool span attributes**

Describes a tool execution.

* The span `op` MUST be `"gen_ai.execute_tool"`.
* The span `name` SHOULD be `"execute_tool {gen_ai.tool.name}"`. (e.g. `"execute_tool query_database"`)
* The `gen_ai.tool.name` attribute SHOULD be set to the name of the tool. (e.g. `"query_database"`)
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#common-span-attributes) SHOULD be set (all `required` common attributes MUST be set).

Additional attributes on the span:

| Data Attribute               | Type   | Requirement Level | Description                                                                                           | Example                                    |
| ---------------------------- | ------ | ----------------- | ----------------------------------------------------------------------------------------------------- | ------------------------------------------ |
| `gen_ai.tool.name`           | string | optional          | Name of the tool executed.                                                                            | `"random_number"`                          |
| `gen_ai.tool.call.arguments` | string | optional          | Arguments of the tool call (stringified JSON).                                                        | `"{\"max\":10}"`                           |
| `gen_ai.tool.call.result`    | string | optional          | Result of the tool call (stringified).                                                                | `"7"`                                      |
| `gen_ai.tool.description`    | string | optional          | Description of the tool executed.                                                                     | `"Tool returning a random number"`         |
| `gen_ai.tool.type`           | string | optional          | The type of the tool.                                                                                 | `"function"`; `"extension"`; `"datastore"` |
| `gen_ai.tool.input`          | string | optional          | **Deprecated.** Use `gen_ai.tool.call.arguments` instead. Input given to the executed tool as string. | `"{\"max\":10}"`                           |
| `gen_ai.tool.output`         | string | optional          | **Deprecated.** Use `gen_ai.tool.call.result` instead. The output from the tool.                      | `"7"`                                      |

### [Handoff Span](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#handoff-span)

This span marks the transition of control from one agent to another, typically when the current agent determines another agent is better suited to handle the task.

**Requirements:**

* `op` must be `"gen_ai.handoff"`
* `name` should follow the pattern `"handoff from {source} to {target}"`
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#common-span-attributes) should be set

The handoff span itself has no body — it just marks the transition point before the target agent starts.

```javascript
// Example agent implementations for demonstration
const myAgent = {
  name: "Weather Agent",
  modelProvider: "openai",
  model: "gpt-4o-mini",
  async run() {
    // Agent implementation
    return {
      handoffTo: "Travel Agent",
      output:
        "I need to handoff to the travel agent for booking recommendations",
    };
  },
};

const otherAgent = {
  name: "Travel Agent",
  modelProvider: "openai",
  model: "gpt-4o-mini",
  async run() {
    // Other agent implementation
    return { output: "Here are some travel recommendations..." };
  },
};

// First agent execution
const result = await Sentry.startSpan(
  { op: "gen_ai.invoke_agent", name: `invoke_agent ${myAgent.name}` },
  () => myAgent.run(),
);

// Check if we should handoff to another agent
if (result.handoffTo) {
  // Create handoff span
  await Sentry.startSpan(
    {
      op: "gen_ai.handoff",
      name: `handoff from ${myAgent.name} to ${otherAgent.name}`,
    },
    () => {
      // the handoff span just marks the handoff
    },
  );

  // Execute the other agent
  await Sentry.startSpan(
    { op: "gen_ai.invoke_agent", name: `invoke_agent ${otherAgent.name}` },
    () => otherAgent.run(),
  );
}
```

## [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/elysia/ai-agent-monitoring-browser.md#common-span-attributes)

Some attributes are common to all AI Agents spans:

| Data Attribute          | Type   | Requirement Level | Description                                                                      | Example    |
| ----------------------- | ------ | ----------------- | -------------------------------------------------------------------------------- | ---------- |
| `gen_ai.operation.name` | string | required          | The name of the operation being performed. **\[4]**                              | `"chat"`   |
| `gen_ai.provider.name`  | string | optional          | The Generative AI product as identified by the client or server instrumentation. | `"openai"` |

* **\[4]:** `gen_ai.operation.name` is what Sentry uses to classify spans in AI dashboards. Well-defined values include: `"chat"`, `"invoke_agent"`, `"execute_tool"`, `"embeddings"`, `"generate_content"`, `"text_completion"`, `"create_agent"`, `"handoff"`.

Well-defined values for `gen_ai.provider.name`: `"anthropic"`, `"aws.bedrock"`, `"azure.ai.inference"`, `"azure.ai.openai"`, `"cohere"`, `"deepseek"`, `"gcp.gemini"`, `"gcp.gen_ai"`, `"gcp.vertex_ai"`, `"groq"`, `"ibm.watsonx.ai"`, `"mistral_ai"`, `"openai"`, `"perplexity"`, `"x_ai"`.
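For AI client spans, where both the span `op` and `name` derive from the operation name and requested model, these naming rules can be kept in one place with a small helper. This is a convenience sketch, not an SDK API:

```javascript
// Builds Sentry.startSpan options following the AI client span conventions:
// op is "gen_ai.{operation}", name is "{operation} {model}", and the
// operation name and model are mirrored into the span attributes.
function genAiSpanOptions(operationName, model, extraAttributes = {}) {
  return {
    op: `gen_ai.${operationName}`,
    name: `${operationName} ${model}`,
    attributes: {
      "gen_ai.operation.name": operationName,
      "gen_ai.request.model": model,
      ...extraAttributes,
    },
  };
}
```

You could then call, for example, `Sentry.startSpan(genAiSpanOptions("chat", "gpt-4o-mini", { "gen_ai.provider.name": "openai" }), callback)`. Note that invoke-agent and execute-tool spans name themselves after the agent or tool rather than the model, so they don't fit this helper.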
