---
title: "Conversations"
description: "Monitor past conversations with your AI assistants. Replay every message and tool call to understand what happened and where things went wrong."
url: https://docs.sentry.io/ai/monitoring/conversations/
---

# Conversations

Conversations is currently in **closed beta**. To participate, reach out at <support@sentry.io>.

Sentry's Conversations view lets you observe the interactions users have had with your chat-based AI assistants. It provides a user-like view into past conversations, showing the full exchange of messages and tool calls so you can understand exactly what happened during each session.

You can find it in **Explore > Conversations** in the Sentry sidebar.

## [Prerequisites](https://docs.sentry.io/ai/monitoring/conversations.md#prerequisites)

Conversations are built on top of [AI Agent Monitoring](https://docs.sentry.io/ai/monitoring/agents.md). Before you can use them, you need:

1. **Tracing enabled** with the Sentry SDK configured for your AI agent project. Follow the [Agent Monitoring getting started guide](https://docs.sentry.io/ai/monitoring/agents/getting-started.md) if you haven't already.

2. **A conversation ID on your spans.** Sentry groups spans into conversations using the `gen_ai.conversation.id` attribute. Some SDK integrations infer it automatically; otherwise, you need to set it manually.

## [Conversation ID](https://docs.sentry.io/ai/monitoring/conversations.md#conversation-id)

A conversation is a collection of spans that share the same `gen_ai.conversation.id`. This is typically the ID of the chat session in your application (for example, the session ID you store in your database).

Some SDK integrations (such as OpenAI Agents SDK for Python and OpenAI SDK for Node) automatically infer the conversation ID. For all other integrations, you need to set it manually. See your platform's AI agent monitoring guide for setup instructions:

* [JavaScript/Node](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#tracking-conversations)
* [Python](https://docs.sentry.io/platforms/python/tracing/instrumentation/custom-instrumentation/ai-agents-module.md#tracking-conversations)

### [Conversations and Traces](https://docs.sentry.io/ai/monitoring/conversations.md#conversations-and-traces)

Conversations and traces are independent concepts. A single conversation can span multiple traces. For example, if a user refreshes the page mid-conversation, the browser starts a new trace, but the conversation continues with the same ID.

The reverse is also true: a single trace can contain spans from different conversations. For example, if a user starts a new chat session without refreshing the page, the new conversation's spans appear in the same trace as the previous one.
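The two scenarios above can be sketched with toy data. The dictionary shape here is illustrative only, not the SDK's internal span representation:

```python
from collections import defaultdict

# Toy spans showing that conversation ID and trace ID vary independently.
spans = [
    {"trace_id": "t1", "gen_ai.conversation.id": "conv-A"},
    # Page refresh mid-conversation: new trace, same conversation.
    {"trace_id": "t2", "gen_ai.conversation.id": "conv-A"},
    # New chat without a refresh: same trace, new conversation.
    {"trace_id": "t2", "gen_ai.conversation.id": "conv-B"},
]

# Grouping by conversation ID, the way the Conversations view does conceptually:
conversations = defaultdict(list)
for span in spans:
    conversations[span["gen_ai.conversation.id"]].append(span["trace_id"])

# conv-A spans two traces ("t1" and "t2"), while trace "t2" contains
# spans from two different conversations (conv-A and conv-B).
```

This is why filtering by trace alone cannot reconstruct a conversation: only the shared `gen_ai.conversation.id` ties the session together.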

## [Conversations List](https://docs.sentry.io/ai/monitoring/conversations.md#conversations-list)

The [Conversations](https://sentry.io/orgredirect/organizations/:orgslug/insights/conversations/) page shows the most recent conversations that match your filters.

Each row in the list displays:

* **First input** — the first user message in the conversation
* **Last output** — the most recent assistant response
* **Cost** — estimated dollar cost and token usage
* **LLM calls** — number of LLM generation requests
* **Tool calls** — number of tool executions

Use the filters at the top of the page to narrow results by project, date range, or agent.

## [Conversation Detail](https://docs.sentry.io/ai/monitoring/conversations.md#conversation-detail)

Click any conversation to open the detail view in a drawer.

The detail view shows a chat-like interface with the full message history: user inputs, assistant responses, and tool calls. Click any message to see the underlying spans, including individual LLM generations and tool executions, with timing and error information.

This makes it straightforward to trace a conversation from start to finish and pinpoint where things went wrong.
