---
title: "LangGraph"
description: "Adds instrumentation for the LangGraph SDK."
url: https://docs.sentry.io/platforms/javascript/guides/firebase/configuration/integrations/langgraph/
---

# LangGraph | Sentry for Cloud Functions for Firebase

## [Manual Instrumentation](https://docs.sentry.io/platforms/javascript/guides/firebase/configuration/integrations/langgraph.md#manual-instrumentation)

*Import name: `Sentry.instrumentLangGraph`*

The `instrumentLangGraph` helper adds instrumentation for [`@langchain/langgraph`](https://www.npmjs.com/package/@langchain/langgraph) by wrapping a `StateGraph` before compilation. It captures spans for AI agent interactions, with configurable recording of inputs and outputs. You need to call this helper on the graph **before** calling `.compile()`.

See the example below:

```javascript
import { ChatOpenAI } from "@langchain/openai";
import {
  StateGraph,
  MessagesAnnotation,
  START,
  END,
} from "@langchain/langgraph";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";
import * as Sentry from "@sentry/node";

// Create the LLM
const llm = new ChatOpenAI({
  modelName: "gpt-4o",
  apiKey: process.env.OPENAI_API_KEY, // read the key from the environment; never hardcode it
});

async function callLLM(state) {
  const response = await llm.invoke(state.messages);

  // MessagesAnnotation's reducer appends returned messages to the existing
  // state, so return only the new message to avoid duplicating the history.
  return {
    messages: [response],
  };
}

// Create the agent
const agent = new StateGraph(MessagesAnnotation)
  .addNode("agent", callLLM)
  .addEdge(START, "agent")
  .addEdge("agent", END);

// Instrument the graph before compiling
Sentry.instrumentLangGraph(agent, {
  recordInputs: true,
  recordOutputs: true,
});

const graph = agent.compile({ name: "my_agent" });

// Invoke the agent
const result = await graph.invoke({
  messages: [
    new SystemMessage("You are a helpful assistant."),
    new HumanMessage("Hello!"),
  ],
});
```

To customize what data is captured (such as inputs and outputs), see the [Options](https://docs.sentry.io/platforms/javascript/guides/firebase/configuration/integrations/langgraph.md#options) in the Configuration section.

## [Configuration](https://docs.sentry.io/platforms/javascript/guides/firebase/configuration/integrations/langgraph.md#configuration)

### [Options](https://docs.sentry.io/platforms/javascript/guides/firebase/configuration/integrations/langgraph.md#options)

The following options control what data is captured from LangGraph operations:

#### [`recordInputs`](https://docs.sentry.io/platforms/javascript/guides/firebase/configuration/integrations/langgraph.md#recordinputs)

*Type: `boolean` (optional)*

Records inputs to LangGraph operations (such as messages and state data passed to the graph).

Defaults to `true` if `sendDefaultPii` is `true`.

#### [`recordOutputs`](https://docs.sentry.io/platforms/javascript/guides/firebase/configuration/integrations/langgraph.md#recordoutputs)

*Type: `boolean` (optional)*

Records outputs from LangGraph operations (such as generated responses, agent outputs, and final state).

Defaults to `true` if `sendDefaultPii` is `true`.

**Usage**

Using the `instrumentLangGraph` wrapper for **manual instrumentation**:

```javascript
Sentry.instrumentLangGraph(graph, {
  // your options here
});
```
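For example, here is a minimal sketch of combining the two options with `sendDefaultPii`. The DSN value and the choice to disable only input recording are illustrative; explicit options take precedence over the `sendDefaultPii`-derived defaults:

```javascript
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: "your-dsn", // placeholder: use your project's DSN
  tracesSampleRate: 1.0,
  // With sendDefaultPii enabled, recordInputs and recordOutputs
  // both default to true.
  sendDefaultPii: true,
});

// Explicit options override the defaults: spans are still captured,
// but message and state data passed into the graph is never attached.
Sentry.instrumentLangGraph(agent, {
  recordInputs: false, // do not attach prompts or state data to spans
  recordOutputs: true, // still record generated responses
});
```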

## [Supported Operations](https://docs.sentry.io/platforms/javascript/guides/firebase/configuration/integrations/langgraph.md#supported-operations)

By default, tracing support is added to the following LangGraph SDK calls:

* **Agent Creation** (`gen_ai.create_agent`) - Captures spans when compiling a `StateGraph` into an executable agent
* **Agent Invocation** (`gen_ai.invoke_agent`) - Captures spans for agent execution via `invoke()`
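
For these spans to be exported, they need an active parent span to attach to. A sketch of wrapping the invocation with `Sentry.startSpan` (the span name and `op` value are illustrative):

```javascript
import * as Sentry from "@sentry/node";

// The gen_ai.invoke_agent span emitted by the instrumentation
// nests under this parent span.
const result = await Sentry.startSpan(
  { name: "handle-chat-request", op: "function" },
  async () => {
    return graph.invoke({
      messages: [new HumanMessage("Hello!")],
    });
  }
);
```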

## [Supported Versions](https://docs.sentry.io/platforms/javascript/guides/firebase/configuration/integrations/langgraph.md#supported-versions)

* `@langchain/langgraph`: `>=0.2.0 <2.0.0`
