Troubleshooting

While we don't expect most users of our SDK to run into these issues, we document edge cases here.

Use the information on this page to help answer these questions:

  • "What do I do if scope data is leaking between requests?"
  • "What do I do if my transaction has nested spans when they should be parallel?"
  • "What do I do if the SDK has trouble sending events to Sentry?"
Addressing Concurrency Issues

The short answer to the first two questions: make sure contextvars works in your environment and that the isolation scope is forked for each concurrency unit.

Python supports several distinct solutions to concurrency, including threads and coroutines.

The Python SDK does its best to figure out how contextual data such as tags set with sentry_sdk.set_tags should flow along your control flow. In most cases it works perfectly, but in a few situations special care must be taken. This is especially true for code bases that do concurrency outside of the provided framework integrations.

The general recommendation is to have one isolation scope per "concurrency unit" (thread/coroutine/etc). The SDK ensures every thread has an independent scope via the ThreadingIntegration. If you do concurrency with asyncio coroutines, make sure to use the AsyncioIntegration which will clone the correct scope in your Tasks.
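
For example, enabling the AsyncioIntegration looks roughly like this (a minimal sketch; the DSN is a placeholder, and init is called inside the async entry point so the integration can hook task creation on the running loop):

import asyncio
import sentry_sdk
from sentry_sdk.integrations.asyncio import AsyncioIntegration

async def task(name):
    # With the integration enabled, each Task runs in its own fork of the
    # isolation scope, so this tag does not leak into the sibling task.
    sentry_sdk.set_tag("task_name", name)

async def main():
    sentry_sdk.init(
        dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
        integrations=[AsyncioIntegration()],
    )
    await asyncio.gather(task("a"), task("b"))

asyncio.run(main())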

The general pattern of usage for creating a new isolation scope is:

import sentry_sdk

with sentry_sdk.isolation_scope() as scope:
    # In this block, `scope` refers to a new fork of the original isolation
    # scope, with the same client and the same initial scope data.
    ...

See the Threading section for a more complete example that involves forking the isolation scope.

Context Variables vs Thread Locals

The Python SDK uses thread locals to keep contextual data where it belongs. There are a few situations where this approach fails.

Read on if you cannot figure out why contextual data is leaking across HTTP requests, or why data is missing or showing up in the wrong place at the wrong time.

If the SDK is running on Python 2, there is not much available besides the aforementioned thread locals, so the SDK uses just those.

Code that uses async libraries such as Twisted is not supported out of the box, in the sense that you will see contextual data leaking across tasks and other logical boundaries.

Code that uses more "magical" async libraries such as gevent or eventlet will work just fine, provided those libraries are configured to monkeypatch the stdlib. That is already the case if, for example, you only use those libraries by running your app under gunicorn.
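
For reference, here is a minimal sketch of such a configuration in a self-managed entry point (the DSN is a placeholder); the patch must run before anything else is imported:

from gevent import monkey

monkey.patch_all()  # patch thread locals (and more) before other imports

import sentry_sdk

sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")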

Python 3 introduced asyncio, which, just like Twisted, had no concept of attaching contextual data to your control flow. That means that on Python 3.6 and lower, the SDK is not able to prevent leaks of contextual data.

Python 3.7 rectified this problem with the contextvars stdlib module, which is essentially thread locals that also work in asyncio-based code. The SDK will attempt to use that module instead of thread locals if it is available.
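
To illustrate the difference, here is a minimal sketch (independent of the SDK) showing that a ContextVar keeps a separate value per asyncio Task, where a thread local would be shared by every task running on the loop's single thread:

import asyncio
import contextvars

request_id = contextvars.ContextVar("request_id")

async def handler(rid):
    request_id.set(rid)
    await asyncio.sleep(0)  # yield so the other task runs in between
    assert request_id.get() == rid  # still our value; nothing leaked

async def main():
    # each Task created by gather() runs in a copy of the current context
    await asyncio.gather(handler("a"), handler("b"))

asyncio.run(main())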

For Python 3.6 and older, install aiocontextvars from PyPI, which is a fully functional backport of contextvars. The SDK will check for this package and use it instead of thread locals.

Context Variables vs gevent/eventlet

If you are using gevent (older than 20.5) or eventlet in your application and have configured it to monkeypatch the stdlib, the SDK will abstain from using contextvars even if it is available.

The reason is that both of those libraries monkeypatch the threading module only, not the contextvars module.

The real-world use case where this actually comes up is running Django 3.0 in a gunicorn+gevent worker on Python 3.7. In that scenario the monkeypatched threading module will honor the control flow of a gunicorn worker, while the unpatched contextvars module will not.

It gets more complicated if you're running Django Channels in the same app but in a separate server process, as that is a legitimate use of asyncio for which contextvars behaves more correctly. Make sure that your Channels websocket server does not import or use gevent at all (let alone call gevent.monkey.patch_all), and you should be good.

Even then there are still edge cases where this behavior is flat-out broken, such as mixing asyncio code with gevent/eventlet-based code. In that case there is no right, static answer as to which context library to use, and gevent's aggressive monkeypatching is likely to interfere in a way that cannot be fixed from within the SDK.

This issue was fixed in gevent 20.5, but it remains a problem for eventlet.

Network Issues

Your SDK might have trouble sending events to Sentry. You might see "Remote end closed connection without response", "Connection aborted", "Connection reset by peer", or similar error messages in your logs. For errors and tracing data, this manifests as errors and transactions missing in Sentry. For cron monitors, you might see check-ins marked as timed out in Sentry even though you know the jobs ran successfully.

If you experience this, try turning on the keep-alive configuration option, available in SDK versions 1.43.0 and up.

import sentry_sdk

sentry_sdk.init(
    # your usual options
    keep_alive=True,
)

If you need more fine-grained control over the network behavior, check out the socket_options configuration option.
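
As a rough sketch, custom socket options are passed as a list of (level, option, value) tuples. The TCP_KEEP* constants below are Linux-specific, and the values are illustrative, not recommendations:

import socket

import sentry_sdk

sentry_sdk.init(
    # your usual options
    socket_options=[
        (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
        (socket.SOL_TCP, socket.TCP_KEEPIDLE, 45),
        (socket.SOL_TCP, socket.TCP_KEEPINTVL, 10),
        (socket.SOL_TCP, socket.TCP_KEEPCNT, 6),
    ],
)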

Multiprocessing deprecation after Python 3.12

If you're on Python 3.12 or greater, you might see the following deprecation warning in Linux environments, since the SDK spawns several threads.

DeprecationWarning: This process is multi-threaded, use of fork() may lead to deadlocks in the child.

To remove this deprecation warning, set the multiprocessing start method to spawn or forkserver. Remember to do this only in the __main__ block.

import sentry_sdk
import multiprocessing
import concurrent.futures

sentry_sdk.init()

if __name__ == "__main__":
    multiprocessing.set_start_method("spawn")
    with concurrent.futures.ProcessPoolExecutor() as pool:
        pool.submit(sentry_sdk.capture_message, "world")

Why was my tag value truncated?

Currently, every tag value has a maximum length of 200 characters. Tag values over the 200-character limit are truncated, potentially losing important information. To retain this data, you can split it across several tags or attach it as structured span data, as shown below.

For example, take this 200+ character request URL stored as a tag:

https://empowerplant.io/api/0/projects/ep/setup_form/?user_id=314159265358979323846264338327&tracking_id=EasyAsABC123OrSimpleAsDoReMi&product_name=PlantToHumanTranslator&product_id=161803398874989484820458683436563811772030917980576

Stored as a tag, the request above is truncated to:

https://empowerplant.io/api/0/projects/ep/setup_form/?user_id=314159265358979323846264338327&tracking_id=EasyAsABC123OrSimpleAsDoReMi&product_name=PlantToHumanTranslator&product_id=1618033988749894848

Instead, using span.set_tag and span.set_data preserves the details of this query using structured metadata. This could be done over base_url, endpoint, and parameters:

import sentry_sdk

# ...

base_url = "https://empowerplant.io"
endpoint = "/api/0/projects/ep/setup_form"
parameters = {
    "user_id": 314159265358979323846264338327,
    "tracking_id": "EasyAsABC123OrSimpleAsDoReMi",
    "product_name": "PlantToHumanTranslator",
    "product_id": 161803398874989484820458683436563811772030917980576,
}

with sentry_sdk.start_span(op="request", name="setup form") as span:
    span.set_tag("base_url", base_url)
    span.set_tag("endpoint", endpoint)
    span.set_data("parameters", parameters)
    # make_request is a stand-in for your HTTP client call
    make_request(
        "{base_url}{endpoint}/".format(
            base_url=base_url,
            endpoint=endpoint,
        ),
        data=parameters,
    )

    # ...

Why am I not seeing any profiling data?

If you don't see any profiling data in sentry.io, you can try the following:

  • Ensure that Tracing is enabled.
  • Ensure that the automatic instrumentation is sending performance data to Sentry by going to the Performance page in sentry.io.
  • If the automatic instrumentation is not sending performance data, try using custom instrumentation.
  • Enable debug mode in the SDK and check the logs.

Profiling was experimental prior to version 1.17.0. If you update your SDK to 1.17.0 or later, remove profiles_sample_rate from _experiments and set it as a top-level option.

import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    traces_sample_rate=1.0,
    profiles_sample_rate=1.0,  # top-level since 1.17.0; under _experiments before
)

Why aren't recurring job errors showing up on my monitor details page?

Why are my monitors showing up as failed?

The SDK might be experiencing network issues; see the Network Issues section above for troubleshooting steps.

Why am I not receiving alerts when my monitor fails?

What is the crons data retention policy for check-ins?

Our current data retention policy is 90 days.

Do you support a monitor schedule with a six-field crontab expression?

Currently, we only support crontab expressions with five fields.

Can I monitor async tasks as well?

Yes, just make sure you're using SDK version 1.44.1 or higher, since that's when support for monitoring async functions was added.
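
For instance, the crons monitor decorator can wrap an async task directly (a minimal sketch; the DSN and the monitor slug are placeholders):

import asyncio

import sentry_sdk
from sentry_sdk.crons import monitor

sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")

@monitor(monitor_slug="nightly-cleanup")  # placeholder slug
async def nightly_cleanup():
    # a check-in is reported when the coroutine starts, and another when
    # it finishes or raises
    ...

asyncio.run(nightly_cleanup())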
