This page reviews our guidelines for operation when self-hosting external Relays, that is, Relays that run on your own hardware and forward events to Sentry.
The following recommendations assume that Relay is run in Docker.
Relay is a multi-threaded application that tries to leverage all available CPU cores. As a result, Sentry highly recommends running Relay on multi-core CPUs. If your setup is expected to handle more than 100 requests per second, we recommend running Relay on at least four (4) CPU cores. By default, every Relay instance uses the total number of available cores to size its thread pools. Adjust this behavior by configuring the `limits.max_thread_count` option.
This example configuration sets up basic logging and metrics settings, and changes the default concurrency level.
```yaml
---
relay:
  # The upstream hostname is taken from any of your DSNs.
  # Go to your Project Settings, and then to "Client Keys (DSN)" to see them.
  upstream: https://o0.ingest.sentry.io.
  host: 0.0.0.0
  port: 3000
logging:
  level: info
  format: json
metrics:
  statsd: 127.0.0.1:8126
  prefix: relay
limits:
  # Base size of various internal thread pools.
  # Defaults to the number of logical CPU cores.
  max_thread_count: 8
```
See the Configuration Options page for detailed descriptions of all available options.
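Since these recommendations assume a Docker deployment, a typical way to launch Relay with a configuration like the one above is sketched below; the mounted path, port, and image tag are illustrative and should be adapted to your environment:

```shell
# Mount the directory containing config.yml (and credentials, if used)
# into the container's working directory, and expose the configured port.
docker run --rm -it \
  -v $(pwd)/config/:/work/.relay/ \
  -p 3000:3000 \
  getsentry/relay run
```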
Relay provides a variety of Configuration Options. Some options have more impact on Relay's behavior than others. The following list identifies a few options to check first when tuning Relay for your organization's environment and workload:
`limits.max_concurrent_requests`

The number of concurrent requests your Relay instance can send to the upstream (Sentry). If your event volume or connection latency to Sentry is high, you can increase this value to gain additional throughput, at the expense of additional network connections.
`cache.event_buffer_size`

How many events Relay can buffer in its local queue before it starts rejecting new events. Increasing this value also increases Relay's potential memory consumption when, for example, network issues prevent Relay from forwarding received messages to Sentry.
`cache.event_expiry` (in seconds, default: 600)
How long Relay can keep buffered events in memory before dropping them. You can increase this value if you anticipate that your Relay may need to keep events in memory for longer than the default.
`cache.project_expiry` (in seconds, default: 300)
To stay operational, Relay regularly fetches project configurations from the Sentry upstream. This setting controls how often Relay fetches that configuration. You can decrease this value to propagate configuration changes more quickly; for example, if you later change data scrubbing options in your project settings in Sentry, your Relay instance will become aware of those changes faster.
`cache.project_grace_period` (in seconds, default: 0)
How long an expired project configuration can still be used. Increasing this value may help when the upstream is unreachable, for example due to network issues.
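As a sketch, the options above could be tuned together in the configuration file. The values here are illustrative, not recommendations; adjust them to your own volume and failure tolerance:

```yaml
limits:
  # Allow more parallel upstream connections for high event volume.
  max_concurrent_requests: 200
cache:
  # Buffer more events during short upstream outages...
  event_buffer_size: 2000
  # ...and keep them in memory for up to 30 minutes.
  event_expiry: 1800
  # Fetch project configuration changes more frequently.
  project_expiry: 120
  # Keep serving an expired project config for up to an hour
  # if the upstream cannot be reached.
  project_grace_period: 3600
```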
SDKs communicate with Sentry through a set of endpoints. Relay provides the same API so it can serve as a seamless drop-in replacement. This requires a set of endpoints to be accessible:
Depending on the SDK or client, requests to these endpoints may use compressed content-encoding or chunked transfer-encoding. Depending on the infrastructure in front of Relay, check that the following HTTP headers are set correctly:
- `Host`: to the public host name of this Relay
- `X-Forwarded-For`: to the client IP address
- `X-Sentry-Auth`: to the value provided by the client
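For example, if an nginx reverse proxy sits in front of Relay (an assumption; the host name and port are hypothetical, so adapt this to your own infrastructure), the headers can be handled like this:

```nginx
server {
    listen 80;
    # Hypothetical public host name for this Relay.
    server_name relay.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # Preserve the public host name instead of the upstream address.
        proxy_set_header Host $host;
        # Append the client IP address.
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # X-Sentry-Auth is set by the SDK and passed through unchanged.
    }
}
```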
Internally, Relay makes requests to the configured upstream to forward data and retrieve project configurations. Make sure Relay can reach the upstream on:
- All of the above endpoints