---
title: "LangGraph"
description: "Adds instrumentation for the LangGraph SDK."
url: https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph/
---

# LangGraph | Sentry for Next.js

For meta-framework applications running on both client and server, we recommend **setting up the integration manually** using the [`instrumentLangGraph` wrapper](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#manual-instrumentation) to ensure consistent instrumentation across all runtimes.

## [Automatic Instrumentation](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#automatic-instrumentation)

*Import name: `Sentry.langGraphIntegration`*

If you are using a different runtime (like Bun, Cloudflare Workers, or a browser) or are experiencing missing spans, use **[Manual Instrumentation](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#manual-instrumentation)** to explicitly wrap your AI client instance instead.

The `langGraphIntegration` adds instrumentation for [`@langchain/langgraph`](https://www.npmjs.com/package/@langchain/langgraph) by automatically wrapping LangGraph operations. It captures spans for AI agent interactions, including agent invocations, graph executions, and node operations.

In Node.js runtimes, this integration is enabled by default and automatically captures spans for LangGraph SDK calls (requires Sentry SDK version `10.28.0` or higher).
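Because the integration is on by default in Node.js, a standard server-side `Sentry.init` with tracing enabled is all that's needed. A minimal sketch, assuming the conventional `sentry.server.config.ts` file of a Next.js setup:

```javascript
// sentry.server.config.ts
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: "____PUBLIC_DSN____",
  // Tracing must be enabled for LangGraph spans to be captured
  tracesSampleRate: 1.0,
  // No integrations entry needed: langGraphIntegration is enabled by default in Node.js
});
```

Add `Sentry.langGraphIntegration({ ... })` to `integrations` only when you want to override its options.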

To customize what data is captured (such as inputs and outputs), see the [Options](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#options) in the Configuration section.

## [Manual Instrumentation](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#manual-instrumentation)

*Import name: `Sentry.instrumentLangGraph`*

The `instrumentLangGraph` helper adds instrumentation for [`@langchain/langgraph`](https://www.npmjs.com/package/@langchain/langgraph) by wrapping a `StateGraph` before compilation. It captures spans for AI agent interactions with configurable input/output recording. You must call this helper on the graph **before** calling `.compile()`.

See example below:

```javascript
import * as Sentry from "@sentry/nextjs";
import { ChatOpenAI } from "@langchain/openai";
import {
  StateGraph,
  MessagesAnnotation,
  START,
  END,
} from "@langchain/langgraph";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

// Create the LLM
const llm = new ChatOpenAI({
  modelName: "gpt-4o",
  apiKey: "your-api-key", // Warning: API key will be exposed in browser!
});

async function callLLM(state) {
  const response = await llm.invoke(state.messages);

  return {
    messages: [...state.messages, response],
  };
}

// Create the agent
const agent = new StateGraph(MessagesAnnotation)
  .addNode("agent", callLLM)
  .addEdge(START, "agent")
  .addEdge("agent", END);

// Instrument the graph before compiling
Sentry.instrumentLangGraph(agent, {
  recordInputs: true,
  recordOutputs: true,
});

const graph = agent.compile({ name: "my_agent" });

// Invoke the agent
const result = await graph.invoke({
  messages: [
    new SystemMessage("You are a helpful assistant."),
    new HumanMessage("Hello!"),
  ],
});
```

To customize what data is captured (such as inputs and outputs), see the [Options](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#options) in the Configuration section.

## [Configuration](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#configuration)

### [Options](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#options)

The following options control what data is captured from LangGraph operations:

#### [`recordInputs`](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#recordinputs)

*Type: `boolean` (optional)*

Records inputs to LangGraph operations (such as messages and state data passed to the graph).

Defaults to `true` if `sendDefaultPii` is `true`.

#### [`recordOutputs`](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#recordoutputs)

*Type: `boolean` (optional)*

Records outputs from LangGraph operations (such as generated responses, agent outputs, and final state).

Defaults to `true` if `sendDefaultPii` is `true`.
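Because both options follow `sendDefaultPii`, you can opt in to input/output recording globally rather than per integration. A minimal sketch:

```javascript
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: "____PUBLIC_DSN____",
  tracesSampleRate: 1.0,
  // With sendDefaultPii enabled, recordInputs and recordOutputs default to true
  sendDefaultPii: true,
});
```

Setting `recordInputs` or `recordOutputs` explicitly on the integration overrides this default in either direction.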

**Usage**

Using the `langGraphIntegration` integration for **automatic instrumentation**:

```javascript
Sentry.init({
  dsn: "____PUBLIC_DSN____",
  // Tracing must be enabled for agent monitoring to work
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.langGraphIntegration({
      // your options here
    }),
  ],
});
```

Using the `instrumentLangGraph` wrapper for **manual instrumentation**:

```javascript
Sentry.instrumentLangGraph(graph, {
  // your options here
});
```

## [Supported Operations](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#supported-operations)

By default, tracing support is added to the following LangGraph SDK calls:

* **Agent Creation** (`gen_ai.create_agent`) - Captures spans when compiling a StateGraph into an executable agent
* **Agent Invocation** (`gen_ai.invoke_agent`) - Captures spans for agent execution via `invoke()`

## [Supported Versions](https://docs.sentry.io/platforms/javascript/guides/nextjs/configuration/integrations/langgraph.md#supported-versions)

* `@langchain/langgraph`: `>=0.2.0 <2.0.0`
