Anthropic
Adds instrumentation for the Anthropic API.
This integration works in the Node.js, Cloudflare Workers, and Vercel Edge Functions runtimes. It requires SDK version 10.12.0 or higher.
Import name: Sentry.anthropicAIIntegration
The anthropicAIIntegration adds instrumentation for the @anthropic-ai/sdk to capture spans. It automatically wraps Anthropic client calls and records LLM interactions, with configurable input and output recording.
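For example, in Node.js the integration can be enabled when initializing the SDK. This is a minimal sketch, assuming the @sentry/node package; the DSN is a placeholder and tracesSampleRate is shown only so that spans are actually sampled:
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: "__YOUR_DSN__", // placeholder
  tracesSampleRate: 1.0, // sample all transactions so LLM spans are recorded
  integrations: [Sentry.anthropicAIIntegration()],
});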
For Cloudflare Workers, you need to manually instrument the Anthropic client using the instrumentAnthropicClient helper:
import * as Sentry from "@sentry/cloudflare";
import Anthropic from "@anthropic-ai/sdk";
const anthropic = new Anthropic();
const client = Sentry.instrumentAnthropicClient(anthropic, {
recordInputs: true,
recordOutputs: true,
});
// Use the wrapped client instead of the original anthropic instance
const response = await client.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
messages: [{ role: "user", content: "Hello!" }],
});
recordInputs
Type: boolean
Records inputs to Anthropic API method calls (such as prompts and messages). Defaults to true if sendDefaultPii is true.
Sentry.init({
integrations: [Sentry.anthropicAIIntegration({ recordInputs: true })],
});
recordOutputs
Type: boolean
Records outputs from Anthropic API method calls (such as generated text and responses). Defaults to true if sendDefaultPii is true.
Sentry.init({
integrations: [Sentry.anthropicAIIntegration({ recordOutputs: true })],
});
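Because both options default to true when sendDefaultPii is enabled, a setup like the following records inputs and outputs without passing either option explicitly. This is a sketch; the DSN is a placeholder:
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: "__YOUR_DSN__", // placeholder
  sendDefaultPii: true, // recordInputs and recordOutputs then default to true
  integrations: [Sentry.anthropicAIIntegration()],
});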
By default this integration adds tracing support to Anthropic API method calls, including:
messages.create() - Create messages with Claude models
messages.stream() - Stream messages with Claude models
messages.countTokens() - Count tokens for messages
models.get() - Get model information
completions.create() - Create completions (legacy)
models.retrieve() - Retrieve model details
beta.messages.create() - Beta messages API
The integration will automatically detect streaming vs non-streaming requests and handle them appropriately.
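For example, both of the following calls are traced without any extra configuration, reusing the wrapped client from the Cloudflare example above (or the auto-instrumented client in Node.js). This is a hedged sketch; the model name and prompts are illustrative, and it uses the @anthropic-ai/sdk streaming helper:
// Non-streaming call: captured as a single span.
const message = await client.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Summarize this in one sentence." }],
});

// Streaming call: the integration detects the stream and finishes the span
// once the stream completes.
const stream = client.messages.stream({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a haiku about tracing." }],
});
stream.on("text", (text) => process.stdout.write(text));
await stream.finalMessage();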
This integration supports @anthropic-ai/sdk versions >=0.19.2 <1.0.0.