Vercel AI
Adds instrumentation for Vercel AI SDK.
This integration only works in the Node.js and Bun runtimes. Requires SDK version 9.30.0 or higher.
Import name: Sentry.vercelAIIntegration
The vercelAIIntegration adds instrumentation for the ai SDK by Vercel to capture spans using the AI SDK's built-in Telemetry. Get started with the following snippet:
import * as Sentry from "@sentry/node"; // or "@sentry/bun"

Sentry.init({
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.vercelAIIntegration({
      recordInputs: true,
      recordOutputs: true,
    }),
  ],
});
To correctly capture spans, pass the experimental_telemetry object with isEnabled: true to every generateText, generateObject, and streamText function call. For more details, see the AI SDK Telemetry Metadata docs.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "Tell me a joke.",
  experimental_telemetry: {
    isEnabled: true,
  },
});
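The same flag applies to streaming calls. Here is a minimal streamText sketch; the model, prompt, and stream handling are illustrative, not part of the integration itself:

import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

// Enable telemetry on a streaming call exactly like on generateText.
const result = await streamText({
  model: openai("gpt-4o"), // illustrative model
  prompt: "Write a haiku about tracing.", // illustrative prompt
  experimental_telemetry: {
    isEnabled: true,
  },
});

// Consume the stream so the call completes and its span can finish.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}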
Options
force
Requires SDK version 9.29.0 or higher.
Type: boolean
Forces the integration to be active, even when the ai module is not detected or available. This is useful when you want to ensure the integration is always enabled regardless of module detection.
Defaults to false.
Sentry.init({
  integrations: [Sentry.vercelAIIntegration({ force: true })],
});
recordInputs
Requires SDK version 9.27.0 or higher.
Type: boolean
Records inputs to the ai function call.
Defaults to true if sendDefaultPii is true or if you explicitly set experimental_telemetry.isEnabled to true in your ai function callsites.
Sentry.init({
  integrations: [Sentry.vercelAIIntegration({ recordInputs: true })],
});
recordOutputs
Requires SDK version 9.27.0 or higher.
Type: boolean
Records outputs to the ai function call.
Defaults to true if sendDefaultPii is true or if you explicitly set experimental_telemetry.isEnabled to true in your ai function callsites.
Sentry.init({
  integrations: [Sentry.vercelAIIntegration({ recordOutputs: true })],
});
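Because both options default to true when sendDefaultPii is enabled, an equivalent setup is to opt into PII globally and leave the integration options at their defaults. This is a sketch; only do this if sending prompts and responses to Sentry is acceptable for your application:

Sentry.init({
  // With sendDefaultPii enabled, recordInputs and recordOutputs
  // default to true and don't need to be set explicitly.
  sendDefaultPii: true,
  integrations: [Sentry.vercelAIIntegration()],
});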
To make it easier to correlate captured spans with your function calls, we recommend setting functionId in experimental_telemetry in all generation function calls:
const result = await generateText({
  model: openai("gpt-4o"),
  experimental_telemetry: {
    isEnabled: true,
    functionId: "my-awesome-function",
  },
});
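functionId works the same way for the other generation functions. As a sketch, a generateObject call might look like this; the schema, prompt, and id are illustrative:

import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// generateObject accepts the same experimental_telemetry options.
const { object } = await generateObject({
  model: openai("gpt-4o"),
  schema: z.object({ title: z.string(), tags: z.array(z.string()) }), // illustrative schema
  prompt: "Suggest a title and tags for a post about tracing.", // illustrative prompt
  experimental_telemetry: {
    isEnabled: true,
    functionId: "suggest-post-metadata", // illustrative id
  },
});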
By default, this integration adds tracing support to all ai function callsites. If you need to disable span collection for a specific call, set experimental_telemetry.isEnabled to false in the first argument of the function call.
const result = await generateText({
  model: openai("gpt-4o"),
  experimental_telemetry: { isEnabled: false },
});
If you set experimental_telemetry.recordInputs and experimental_telemetry.recordOutputs on a call, they override the default behavior of collecting inputs and outputs for that function call.
const result = await generateText({
  model: openai("gpt-4o"),
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: true,
    recordOutputs: true,
  },
});
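For example, you can keep the span for a call that handles sensitive data while omitting its inputs. This is a sketch; the prompt variable is illustrative:

// Keep the span, but don't attach the prompt to it.
const result = await generateText({
  model: openai("gpt-4o"),
  prompt: sensitiveUserPrompt, // illustrative variable
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false,
    recordOutputs: true,
  },
});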
Supported Versions
ai: >=3.0.0 <5