Operating Guidelines

This page covers our operating guidelines for self-hosted external Relays, that is, Relays that run on your hardware and forward events to sentry.io.

  • We recommend running Relay using the official Docker image (getsentry/relay) published on Docker Hub and tagged with its Git revision identifier, rather than building from source.

  • We recommend running at least two Relay instances (containers) with a reverse proxy (such as HAProxy or Nginx) in front of them for improved availability, simplified Relay updates, and SSL/TLS support. A Docker Compose sketch of this layout follows this list.

  • To monitor your Relay setup, configure Logging, Metrics, and Health Checks.
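
As a rough sketch of such a layout, the Docker Compose file below runs two Relay containers behind an Nginx reverse proxy. The service names, file paths, nginx.conf, and resource limit are illustrative assumptions rather than required values; the probe assumes curl is available inside the container and uses Relay's readiness endpoint, /api/relay/healthcheck/ready/, on the same port (3000) as the example configuration further down.

services:
  relay-1:
    image: getsentry/relay           # pin to a specific revision tag in production
    command: run
    volumes:
      - ./relay-config:/work/.relay  # directory holding config.yml and credentials
    mem_limit: 2g                    # mirrors the 2 GB per-container recommendation, if your Compose version supports it
    healthcheck:
      # Assumes curl is present in the image; swap in another probe otherwise.
      test: ["CMD", "curl", "-sf", "http://localhost:3000/api/relay/healthcheck/ready/"]
      interval: 30s
  relay-2:
    image: getsentry/relay
    command: run
    volumes:
      - ./relay-config:/work/.relay
    mem_limit: 2g
  proxy:
    image: nginx
    ports:
      - "443:443"                               # TLS terminates at the proxy
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # proxies to relay-1 and relay-2
    depends_on:
      - relay-1
      - relay-2

Running two instances lets you update or restart one Relay at a time while the proxy keeps routing traffic to the other.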

The following recommendations assume that Relay is run in Docker.

  • CPU
    - Required: x86-64 (amd64) CPU architecture
    - Multi-core CPU [1]
  • Memory
    - At least 2 GB of RAM per Relay container is recommended.
  • Network
    - Bandwidth: Make sure you have enough capacity to receive and forward the planned data volume. Relay compresses all upstream requests to sentry.io by default, but the compression rate can vary depending on the types and shapes of the submitted events.
    - Latency: Relay can tolerate network delays up to a certain point, but the round-trip time to the upstream should stay below 5 seconds.
  • Storage
    - Disk storage is required only if Spooling is configured (see the spool.envelopes.max_memory_size configuration option); otherwise, Relay does not require disk storage.

[1] Relay is a multi-threaded application that tries to leverage all available CPU cores. As a result, Sentry highly recommends running Relay on multi-core CPUs. If your setup is expected to handle more than 100 requests per second, we recommend running Relay on at least four CPU cores. By default, every Relay instance uses the total number of available cores to size its thread pools. Adjust this behavior with the limits.max_thread_count option.

The following example configuration sets up basic logging and metrics, and changes the default concurrency level.

---
relay:
  # The upstream hostname is taken from any of your DSNs.
  # Go to your Project Settings, and then to "Client Keys (DSN)" to see them.
  upstream: https://o0.ingest.sentry.io
  host: 0.0.0.0
  port: 3000
logging:
  level: info
  format: json
metrics:
  statsd: 127.0.0.1:8126
  prefix: relay
limits:
  # Base size of various internal thread pools. Defaults to the number of logical CPU cores
  max_thread_count: 8

See the Configuration Options page for detailed descriptions of all available options.

Relay provides a variety of Configuration Options. Some options have more impact on Relay's behavior than others. The following list identifies a few options to check first when tuning Relay for your organization's environment and workload (a combined example follows the list):

  • limits.max_concurrent_requests (default: 100)

    Number of concurrent requests your Relay instance can send to the upstream (Sentry). If your event volume or connection latency to Sentry is high, you can increase this value to gain additional throughput, at the cost of additional network connections.

  • cache.event_buffer_size (default: 1000)

    How many events Relay can buffer in its local queue before it starts rejecting new events. Increasing this value will also increase Relay's potential memory consumption when, for example, network issues prevent Relay from forwarding the received messages to Sentry.

  • cache.event_expiry (in seconds, default: 600)

    How long Relay can keep buffered events in memory before dropping them. You can increase this value if you anticipate that your Relay may need to keep events in memory for longer than the default value.

  • cache.project_expiry (in seconds, default: 300)

    To stay operational, Relay regularly fetches project configurations from the Sentry upstream. This setting controls how often Relay fetches that configuration. You can decrease this value to make the configuration propagation more frequent. For example, if you later change data scrubbing options in your project settings in Sentry, your Relay instance will become aware of those changes faster.

  • cache.project_grace_period (in seconds, default: 0)

    How long a project configuration can still be used after it has expired. Increasing this value may help when the upstream is unreachable, for example, due to network issues.
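
For illustration only, the options above could be combined in the configuration file as shown below. The values are placeholder assumptions, not recommendations, and should be tuned to your own event volume and upstream latency.

limits:
  # Allow more parallel upstream requests (default: 100).
  max_concurrent_requests: 200
cache:
  # Buffer more events locally (default: 1000) at the cost of higher memory use.
  event_buffer_size: 2000
  # Keep buffered events for up to 20 minutes (default: 600 seconds).
  event_expiry: 1200
  # Refresh project configuration more frequently (default: 300 seconds).
  project_expiry: 120
  # Keep using an expired project configuration for up to an hour if the upstream is unreachable (default: 0).
  project_grace_period: 3600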

SDKs communicate with Sentry through a set of endpoints. Relay exposes the same API so that it can act as a seamless drop-in replacement. This requires the following endpoints to be accessible:

  • /api/<project-id>/envelope/
  • /api/<project-id>/minidump/
  • /api/<project-id>/security/
  • /api/<project-id>/store/
  • /api/<project-id>/unreal/

Depending on the SDK or client, requests to these endpoints may use compressed content encoding or chunked transfer encoding. If infrastructure such as a reverse proxy sits in front of Relay, make sure the following HTTP headers are set correctly:

  • Host: to the public host name of this Relay
  • X-Forwarded-For: to the client IP address
  • X-Sentry-Auth: to the value provided by the client

Internally, Relay makes requests to the configured upstream to forward data and retrieve project configuration. We highly recommend against constraining these requests. At present, Relay makes requests to the following endpoints for basic operation:

  • All of the above endpoints
  • /api/0/relays/projectconfigs/
  • /api/0/relays/publickeys/
  • /api/0/relays/register/challenge/
  • /api/0/relays/register/response/