Performance Tuning

This document describes a set of best practices that may help you squeeze more performance out of various Sentry configurations.

Redis

All Redis usage in Sentry is ephemeral, which means Redis's append-log/fsync durability models do not need to apply.

With that in mind, we recommend the following changes to (some) default configurations:

  • Disable disk persistence by removing all save XXXX lines.
  • Set maxmemory-policy allkeys-lru so that keys are aggressively pruned under memory pressure.
  • Set maxmemory to a reasonable allowance, e.g. maxmemory 1gb.
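Taken together, the recommendations above might look like this in redis.conf (the 1gb cap is only an example; size it for your workload):

```conf
# No "save" lines: RDB snapshotting is disabled entirely.
maxmemory 1gb
maxmemory-policy allkeys-lru
```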

Web Server

Switching Sentry from plain HTTP mode to uWSGI protocol mode can yield better results. The uwsgi protocol is a binary protocol that Nginx can speak using the ngx_http_uwsgi_module.

This can be enabled by adding the following to SENTRY_WEB_OPTIONS inside your sentry.conf.py:

    'protocol': 'uwsgi',
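In context, the option sits inside the SENTRY_WEB_OPTIONS dict in sentry.conf.py; a minimal sketch (the host and port shown are Sentry's defaults, included only for illustration):

```python
# sentry.conf.py -- illustrative fragment; keep your other settings as-is.
SENTRY_WEB_HOST = '127.0.0.1'  # bind locally only; Nginx sits in front
SENTRY_WEB_PORT = 9000
SENTRY_WEB_OPTIONS = {
    'protocol': 'uwsgi',  # speak uwsgi to Nginx instead of plain HTTP
}
```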

With Sentry running in uwsgi protocol mode, it’ll require a slight modification to your nginx config to use uwsgi_pass rather than proxy_pass:

server {
  listen   443 ssl;

  location / {
    include     uwsgi_params;
    uwsgi_pass  127.0.0.1:9000;
  }
}

You will also likely want to run more web processes, which spawn as children of the Sentry master process. The default number of workers is 3; depending on how many cores your machine has, it's possible to bump this up to 36 or more. You can do this either by editing SENTRY_WEB_OPTIONS again:

    'workers': 16,

or by passing it on the command line:

$ sentry run web -w 16

See uWSGI’s official documentation for more options that can be configured in SENTRY_WEB_OPTIONS.
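A common starting point for choosing a worker count is the 2 × cores + 1 heuristic used for many WSGI servers; note this is a general rule of thumb, not a Sentry-specific recommendation, and the helper below is ours:

```python
import multiprocessing

def suggested_web_workers(cores=None):
    """Return the common WSGI heuristic of 2 * cores + 1.

    A starting point only: tune the final number against real traffic
    and available memory rather than trusting the formula blindly.
    """
    if cores is None:
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1

print(suggested_web_workers(8))  # -> 17
```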


Workers

The workers can be difficult to tune. Your goal is to maximize CPU usage without running out of memory. If you have JavaScript clients this becomes more difficult, as the sourcemap and context scraping can currently buffer large amounts of memory depending on your configuration and the size of your source files.

You can leverage supervisord to run and manage the worker processes for you:

command=/www/sentry/bin/sentry run worker -c 4 -l WARNING -n worker-%(process_num)02d
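The command= line above is only one piece of a full supervisord program section. A minimal sketch of the rest (the program name, path, and process count are placeholders to adapt):

```ini
[program:sentry-worker]
command=/www/sentry/bin/sentry run worker -c 4 -l WARNING -n worker-%(process_num)02d
; %(process_num)02d requires numprocs so each worker gets a unique name.
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
```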

If you’re running a worker configuration with a high concurrency level (> 4), we suggest decreasing the concurrency and running more masters instead, as this alleviates lock contention and improves overall throughput.

For example, if you had something like:

command=sentry run worker -c 64

change it to:

command=sentry run worker -c 4
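Under supervisord, “more masters” means raising numprocs while keeping -c low, so total concurrency stays comparable (16 × 4 here, versus the single -c 64 process above; the program name is a placeholder):

```ini
[program:sentry-worker]
command=sentry run worker -c 4 -n worker-%(process_num)02d
process_name=%(program_name)s_%(process_num)02d
numprocs=16
```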

Monitoring Memory

There are cases where Sentry currently buffers large amounts of memory. This may depend on the client (JavaScript vs. Python) as well as the size of your events. If you repeatedly run into issues where workers or web nodes are using a lot of memory, you’ll want to ensure you have some mechanism for monitoring and resolving it.
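For quick spot-checks outside of supervisord, the peak resident set size of a process is available from Python’s standard library (a sketch for Linux/macOS only; the helper name is ours, and ru_maxrss units differ by platform):

```python
import resource
import sys

def peak_rss_kb():
    """Peak resident set size of the current process, in kilobytes.

    ru_maxrss is reported in kilobytes on Linux but in bytes on macOS,
    so normalize; the resource module is not available on Windows.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss // 1024 if sys.platform == "darwin" else rss

print(peak_rss_kb())
```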

If you’re using supervisord, we recommend taking a look at superlance, whose memmon plugin can restart processes that exceed a memory threshold:

command=memmon -a 400MB -m ops@example.com
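On its own, the command= line above must live inside a supervisord event-listener section so that memmon is woken up periodically. A minimal sketch (the notification address and 400MB threshold are placeholders to adapt):

```ini
[eventlistener:memmon]
; -a: apply the limit to any supervised program; -m: email a report on restart.
command=memmon -a 400MB -m ops@example.com
events=TICK_60
```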