OpenAI

Adds instrumentation for the OpenAI API.

Import name: Sentry.openAIIntegration

The openAIIntegration adds instrumentation for the openai SDK. It captures spans by automatically wrapping OpenAI client calls and records LLM interactions, with configurable recording of inputs and outputs.
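
As a minimal sketch, explicitly adding the integration during SDK setup looks like this (the import path follows the Next.js example below; sendDefaultPii is shown only because the recording options described later default to it):

import * as Sentry from "@sentry/nextjs";

Sentry.init({
  // Sending default PII makes the integration record inputs and outputs by default.
  sendDefaultPii: true,
  integrations: [Sentry.openAIIntegration()],
});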

This integration is enabled automatically in the Node.js runtime. For Next.js applications using the Edge runtime, you need to manually instrument the OpenAI client:

import * as Sentry from "@sentry/nextjs";
import OpenAI from "openai";

const openai = new OpenAI();
const client = Sentry.instrumentOpenAiClient(openai, {
  recordInputs: true,
  recordOutputs: true,
});

// Use the wrapped client instead of the original openai instance
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

Options

recordInputs

Type: boolean

Records inputs to OpenAI API method calls (such as prompts and messages).

Defaults to true if sendDefaultPii is true.

Sentry.init({
  integrations: [Sentry.openAIIntegration({ recordInputs: true })],
});

recordOutputs

Type: boolean

Records outputs from OpenAI API method calls (such as generated text and responses).

Defaults to true if sendDefaultPii is true.

Sentry.init({
  integrations: [Sentry.openAIIntegration({ recordOutputs: true })],
});

By default, this integration adds tracing support to the following OpenAI API method calls:

  • chat.completions.create() - Chat completion requests
  • responses.create() - Response API requests

The integration will automatically detect streaming vs non-streaming requests and handle them appropriately.
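
For example, a streaming request made through the manually wrapped client from the Next.js example above is traced like any other call. This is a minimal sketch; the model name and prompt are placeholders:

// Streaming chat completion; the integration detects that this is a
// streaming request and handles it accordingly.
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Tell me a short story." }],
  stream: true,
});

for await (const chunk of stream) {
  // Each chunk carries an incremental delta of the generated text.
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}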

Supported Versions

  • openai: >=4.0.0 <6