---
title: "Kafka Integration"
description: "Learn how to trace Kafka queue operations with Sentry."
url: https://docs.sentry.io/platforms/java/guides/logback/integrations/kafka/
---

# Kafka Integration | Sentry for Logback

Sentry's Kafka integration lets you trace both production and consumption. In Spring Boot, this happens automatically. If you're using raw `kafka-clients`, you'll need to instrument producers and consumers with `sentry-kafka`.

Once configured, queue spans will appear in Sentry's [Queues dashboard](https://sentry.io/orgredirect/organizations/:orgslug/insights/backend/queues/).

Kafka queue tracing is available in Sentry Java SDK version `8.41.0` and later.

This guide covers the `sentry-kafka` module for applications using `kafka-clients` directly (without Spring). If you access Kafka through Spring Boot, use the [Spring Boot Kafka docs](https://docs.sentry.io/platforms/java/guides/spring-boot/integrations/kafka.md) instead.

### [Install](https://docs.sentry.io/platforms/java/guides/logback/integrations/kafka.md#install-1)

```groovy
implementation 'io.sentry:sentry-kafka:8.41.0'
```

For other dependency managers, use the same Maven coordinates: `io.sentry:sentry-kafka`.
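For example, with Maven the equivalent declaration (same coordinates as the Gradle snippet above) is:

```xml
<dependency>
    <groupId>io.sentry</groupId>
    <artifactId>sentry-kafka</artifactId>
    <version>8.41.0</version>
</dependency>
```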

### [Configure](https://docs.sentry.io/platforms/java/guides/logback/integrations/kafka.md#configure-1)

Enable queue tracing when initializing Sentry:

```java
Sentry.init(options -> {
    options.setDsn("___DSN___");
    options.setTracesSampleRate(1.0);

    options.setEnableQueueTracing(true);
});
```

### [Instrument the Producer](https://docs.sentry.io/platforms/java/guides/logback/integrations/kafka.md#instrument-the-producer)

Wrap your `KafkaProducer` with `SentryKafkaProducer.wrap()`. Every `send()` call then records a `queue.publish` span and injects Sentry propagation headers into the record.

```java
import io.sentry.kafka.SentryKafkaProducer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

// producerProps holds your usual Kafka producer configuration
KafkaProducer<String, String> rawProducer = new KafkaProducer<>(producerProps);

// The wrapped producer records queue.publish spans and injects trace headers
Producer<String, String> producer = SentryKafkaProducer.wrap(rawProducer);

producer.send(new ProducerRecord<>("orders", "order-payload"));
```

A `queue.publish` span is created only when there is an active transaction in scope. Sentry trace headers are always injected (even without an active span) so the consumer can continue the trace.
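For example, to make sure the publish span is recorded, you can start a transaction and bind it to the scope before sending. This is a minimal sketch: the transaction name `"checkout"` and operation `"task"` are placeholders, and `producer` is the wrapped producer from above.

```java
import io.sentry.ITransaction;
import io.sentry.Sentry;
import io.sentry.TransactionOptions;
import org.apache.kafka.clients.producer.ProducerRecord;

// Bind the transaction to the current scope so the wrapped producer
// can attach its queue.publish span to it.
TransactionOptions txOptions = new TransactionOptions();
txOptions.setBindToScope(true);
ITransaction transaction = Sentry.startTransaction("checkout", "task", txOptions);

try {
    producer.send(new ProducerRecord<>("orders", "order-payload"));
} finally {
    // Finish the transaction so it (and the queue.publish span) is sent to Sentry
    transaction.finish();
}
```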

### [Instrument the Consumer](https://docs.sentry.io/platforms/java/guides/logback/integrations/kafka.md#instrument-the-consumer)

Wrap each record's processing callback with `SentryKafkaConsumerTracing.withTracing()`. This creates a `queue.process` transaction per record, continues the distributed trace from producer headers, and calculates receive latency automatically.

If you're also using OpenTelemetry Kafka instrumentation, don't instrument the same consumer callback with `withTracing()`. This helper is not automatically suppressed under OpenTelemetry today, so using both can create duplicate `queue.process` transactions.

```java
import io.sentry.kafka.SentryKafkaConsumerTracing;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
    consumer.subscribe(List.of("orders"));

    while (running) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> record : records) {
            // Creates a queue.process transaction per record and continues
            // the trace from the headers injected by the producer
            SentryKafkaConsumerTracing.withTracing(record, () -> {
                processOrder(record.value());
            });
        }
    }
}
```

Use the `Callable` overload when your processing code returns a value or throws checked exceptions:

```java
SentryKafkaConsumerTracing.withTracing(record, () -> {
    return processOrder(record.value()); // can throw checked exceptions
});
```

## [Span Data](https://docs.sentry.io/platforms/java/guides/logback/integrations/kafka.md#span-data)

| Attribute                           | Type   | Description                                                                                          |
| ----------------------------------- | ------ | ---------------------------------------------------------------------------------------------------- |
| `messaging.system`                  | string | Always `"kafka"`                                                                                     |
| `messaging.destination.name`        | string | Kafka topic name                                                                                     |
| `messaging.message.id`              | string | Value of the `messaging.message.id` record header, if present                                        |
| `messaging.message.body.size`       | int    | Serialized value size in bytes                                                                       |
| `messaging.message.retry.count`     | int    | Number of previous delivery attempts (from Kafka's `kafka_deliveryAttempt` header), if present       |
| `messaging.message.receive.latency` | int    | Time in milliseconds between the producer sending the record and the consumer starting to process it |
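As an illustration of the last attribute (plain arithmetic, not SDK code), receive latency is the difference between the moment the consumer starts processing and the record's producer timestamp; the two timestamp values below are made-up placeholders:

```java
// Illustrative only: in kafka-clients the producer timestamp would come
// from record.timestamp(), and the processing start is captured when
// withTracing() begins.
long producerTimestampMs = 1_700_000_000_000L; // placeholder for record.timestamp()
long processingStartMs = 1_700_000_000_250L;   // placeholder for processing start
long receiveLatencyMs = processingStartMs - producerTimestampMs; // 250 ms
```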

## [Limitations](https://docs.sentry.io/platforms/java/guides/logback/integrations/kafka.md#limitations)

* **Async listeners not supported.** `@KafkaListener` methods that return a `CompletableFuture` or `Mono`/`Flux` are not instrumented correctly; use synchronous listeners.
* **Batch listeners not supported.** `@KafkaListener` methods that consume batches, such as `ConsumerRecords<?, ?>` or `List<ConsumerRecord<...>>`, are not instrumented yet.
* **Spring Boot auto-instrumentation is disabled when using Sentry OpenTelemetry integrations.**
