We expect most users of the Python SDK not to run into any of the problems documented here.
Use the information in this page to help answer these questions:
- "What do I do if scope data is leaking between requests?"
- "What do I do if my transaction has nested spans when they should be parallel?"
The short answer to both: check that your contextvars work, and clone the hub where needed.
Python supports several distinct solutions to concurrency, including threads and coroutines.
The Python SDK does its best to figure out how contextual data, such as the tags set with sentry_sdk.set_tag, is supposed to flow along your control flow. In most cases it works perfectly, but in a few situations special care must be taken. This is especially true for code bases doing concurrency outside of the provided framework integrations.
The general recommendation is to have one hub per "concurrency unit"
(thread/coroutine/etc). The SDK ensures every thread has an independent hub. If
you do concurrency with
asyncio coroutines, clone the current hub for use
within a block that runs concurrent code:
```python
with Hub(Hub.current):
    # in this block Hub.current refers to a new clone
    # of the original hub, with the same client and
    # the same initial scope data.
    ...
```
Applications based on asyncio therefore have an easy workaround: every coroutine that really does run concurrently with other coroutines needs to be made into a task, and within that task the hub needs to be cloned and reassigned.
See the Threading section for a more complete example that involves cloning the current hub.
The Python SDK uses thread locals to keep contextual data where it belongs. There are a few situations where this approach fails.
Read on if you cannot figure out why contextual data is leaking across HTTP requests, or why data is missing or popping up at the wrong place and time.
If the SDK is installed on Python 2, there is not much to use other than the aforementioned thread locals, so the SDK uses just that.
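The idea behind thread locals can be seen in this stdlib-only sketch (no SDK involved): each thread keeps its own copy of a value even while both threads run concurrently:

```python
import threading
import time

local = threading.local()
seen = []

def worker(name):
    # Each thread gets its own attribute namespace on `local`,
    # so concurrent writers do not overwrite each other.
    local.name = name
    time.sleep(0.01)  # give the other thread a chance to run
    seen.append((name, local.name))

threads = [threading.Thread(target=worker, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Both workers read back the value they set themselves, which is exactly the isolation property the SDK relies on for per-thread scope data.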
Code that uses async libraries such as twisted is not supported out of the box, in the sense that you will experience context data leaking across tasks and other logical boundaries.
Code that uses more "magical" async libraries such as gevent or eventlet will work just fine, provided those libraries are configured to monkeypatch the stdlib. That is the case, for example, if you only use those libraries in the context of running gunicorn.
Python 3 introduced asyncio which, just like Twisted, had the problem of not having any concept of attaching contextual data to your control flow. That means that on Python 3.6 and lower, the SDK is not able to prevent scope data from leaking between tasks.
Python 3.7 rectified this problem with the contextvars stdlib module, which is basically thread locals that also work in asyncio-based code. The SDK will attempt to use that module instead of thread locals if it is available.
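The following stdlib-only snippet (independent of the SDK) illustrates why contextvars is the right primitive here: each asyncio task sees the value it set itself, which plain thread locals cannot guarantee since all tasks share one thread:

```python
import asyncio
import contextvars

request_id = contextvars.ContextVar("request_id", default=None)
seen = []

async def handler(rid):
    # Each asyncio task runs in its own copy of the context,
    # so this set() is invisible to the sibling task.
    request_id.set(rid)
    await asyncio.sleep(0)  # yield control to the other task
    seen.append((rid, request_id.get()))

async def main():
    await asyncio.gather(handler("a"), handler("b"))

asyncio.run(main())
```

Even though both handlers interleave on the same thread, each one reads back its own request id after the await.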
For Python 3.6 and older, install
aiocontextvars from PyPI which is a
fully-functional backport of
contextvars. The SDK will check for this package
and use it instead of thread locals.
If you are using
gevent (older than 20.5) or
eventlet in your application and
have configured it to monkeypatch the stdlib, the SDK will abstain from using
contextvars even if it is available.
The reason for that is that both of those libraries monkeypatch only the threading module, and not the contextvars module.
The real-world use case where this actually comes up is if you're using Django
3.0 within a
gunicorn+gevent worker on Python 3.7. In such a scenario the
threading module will honor the control flow of a gunicorn
worker while the unpatched
contextvars will not.
It gets more complicated if you're using Django Channels in the same app but in a separate server process, as that is a legitimate use of asyncio for which contextvars behaves more correctly. Make sure that your Channels websocket server does not import or use gevent at all (let alone call gevent.monkey.patch_all), and you should be good.
Even then there are edge cases where this behavior is flat-out broken, such as mixing asyncio code with gevent/eventlet-based code. In that case there is no right, static answer as to which context library to use, and gevent's aggressive monkeypatching is likely to interfere in a way that cannot be fixed from within the SDK.
This issue has been fixed in gevent 20.5, but it remains a problem for eventlet.
A Django application using Channels 2.0 will be correctly instrumented under Python 3.7. For older versions of Python, install
aiocontextvars from PyPI or your application will not start.
If you experience memory leaks in your Channels consumers while using the SDK, you need to wrap your entire application in Sentry's ASGI middleware. Unfortunately the SDK is not able to do so by itself, as Channels is missing some hooks for instrumentation.