---
title: "Troubleshooting"
description: "While we don't expect most users of our SDK to run into these issues, we document edge cases here."
url: https://docs.sentry.io/platforms/python/troubleshooting/
---

# Troubleshooting | Sentry for Python

## [General](https://docs.sentry.io/platforms/python/troubleshooting.md#general)

Use the information on this page to answer these questions:

* "What do I do if scope data is leaking between requests?"
* "What do I do if my transaction has nested spans when they should be parallel?"
* "What do I do if the SDK has trouble sending events to Sentry?"

### Addressing Concurrency Issues

The short answer to the first two questions: make sure your `contextvars` work and that your isolation scope was cloned for each concurrency unit.

Python supports several distinct solutions to concurrency, including threads and coroutines.

The Python SDK does its best to figure out how contextual data, such as tags set with `sentry_sdk.set_tags`, is supposed to flow along your control flow. In most cases it works perfectly, but in a few situations special care must be taken. This is especially true for code bases that do concurrency outside of the provided framework integrations.

The general recommendation is to have one isolation scope per "concurrency unit" (thread/coroutine/etc). The SDK ensures every thread has an independent scope via the `ThreadingIntegration`. If you do concurrency with `asyncio` coroutines, make sure to use the `AsyncioIntegration` which will clone the correct scope in your `Task`s.
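For example, enabling the integration is a matter of adding it to `sentry_sdk.init`. A minimal sketch (the DSN placeholder is yours to fill in; calling `init` inside the running event loop ensures the integration can hook into it):

```python
import asyncio

import sentry_sdk
from sentry_sdk.integrations.asyncio import AsyncioIntegration

async def main():
    sentry_sdk.init(
        dsn="___PUBLIC_DSN___",
        integrations=[AsyncioIntegration()],
    )
    # ... run your coroutines; each asyncio.Task now gets its own
    # fork of the isolation scope.

asyncio.run(main())
```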

The general pattern of usage for creating a new isolation scope is:

```python
with sentry_sdk.isolation_scope() as scope:
    # In this block, scope refers to a new fork of the original isolation
    # scope, with the same client and the same initial scope data.
    scope.set_tag("my-tag", "my value")
```

See the [Threading](https://docs.sentry.io/platforms/python/integrations/default-integrations.md#threading) section for a more complete example that involves forking the isolation scope.

### Context Variables vs Thread Locals

The Python SDK uses [thread locals](https://docs.python.org/3/library/threading.html#thread-local-data) to keep contextual data where it belongs. There are a few situations where this approach fails.

Read along if you cannot figure out why contextual data is leaking across HTTP requests, or data is missing or popping up at the wrong place and time.
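As a baseline, here is a minimal stdlib-only sketch (no Sentry involved) showing the property thread locals provide: each OS thread sees its own copy of the data. This is exactly what breaks down once multiple coroutines share a single thread.

```python
import threading

local = threading.local()
results = {}

def worker(name):
    local.value = name          # visible only to this thread
    results[name] = local.value

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # each key maps to its own thread's value, no cross-talk
```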

#### [Python 2: Thread Locals and gevent](https://docs.sentry.io/platforms/python/troubleshooting.md#python-2-thread-locals-and-gevent)

If the SDK is installed on Python 2, there is little alternative to the aforementioned thread locals, so the SDK uses just that.

Code that uses async libraries such as **`twisted` is not supported**: out of the box, you will experience context data leaking across tasks and other logical boundaries.

Code that uses more "magical" async libraries such as **`gevent` or `eventlet` will work just fine**, provided those libraries are configured to monkeypatch the stdlib. This is the case, for example, if you only use those libraries through `gunicorn`'s worker classes.

#### [Python 3: Context Variables or Thread Locals](https://docs.sentry.io/platforms/python/troubleshooting.md#python-3-context-variables-or-thread-locals)

Python 3 introduced `asyncio`, which, just like Twisted, has no concept of attaching contextual data to your control flow. That means that on Python 3.6 and lower, the SDK is not able to prevent leaks of contextual data.

Python 3.7 rectified this problem with the `contextvars` stdlib module which is basically thread locals that also work in asyncio-based code. The SDK will attempt to use that module instead of thread locals if available.
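The difference is easy to see with a stdlib-only sketch: each `asyncio.Task` gets its own copy of the current context at creation time, so a `ContextVar` set in one task never leaks into another.

```python
import asyncio
import contextvars

request_id = contextvars.ContextVar("request_id", default=None)

async def handler(rid):
    request_id.set(rid)      # set in this task's private context copy
    await asyncio.sleep(0)   # yield so the other tasks run in between
    return request_id.get()  # still our own value: no leakage

async def main():
    return await asyncio.gather(*(handler(i) for i in range(3)))

print(asyncio.run(main()))  # → [0, 1, 2]
```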

**For Python 3.6 and older, install `aiocontextvars` from PyPI** which is a fully-functional backport of `contextvars`. The SDK will check for this package and use it instead of thread locals.

#### Context Variables vs gevent/eventlet

If you are using `gevent` (older than 20.5) or `eventlet` in your application and have configured it to monkeypatch the stdlib, the SDK will abstain from using `contextvars` even if it is available.

The reason for that is that both of those libraries will monkeypatch the `threading` module only, and not the `contextvars` module.

The real-world use case where this actually comes up is running Django 3.0 in a `gunicorn+gevent` worker on Python 3.7. In that scenario the monkeypatched `threading` module honors the control flow of a gunicorn worker, while the unpatched `contextvars` module does not.

It gets more complicated if you're using Django Channels in the same app, but a separate server process, as this is a legitimate usage of `asyncio` for which `contextvars` behaves more correctly. Make sure that your channels websocket server does not import or use gevent at all (and much less call `gevent.monkey.patch_all`), and you should be good.

Even then there are edge cases where this behavior is flat-out broken, such as mixing asyncio code with gevent/eventlet-based code. In that case there is no right, *static* answer as to which context library to use, and gevent's aggressive monkeypatching is likely to interfere in a way that cannot be fixed from within the SDK.

This [issue has been fixed with gevent 20.5](https://github.com/gevent/gevent/issues/1407) but continues to be one for eventlet.

### Network Issues

Your SDK might have issues sending events to Sentry. You might see `"Remote end closed connection without response"`, `"Connection aborted"`, `"Connection reset by peer"`, or similar error messages in your logs. For errors and tracing data, this manifests as missing errors and transactions in Sentry. For cron monitors, you might see crons marked as timed out in Sentry even though you know they ran successfully.

If you experience this, try turning on the [keep-alive](https://docs.sentry.io/platforms/python/configuration/options.md#keep-alive) configuration option, available in SDK versions `1.43.0` and up.

```python
import sentry_sdk

sentry_sdk.init(
    # your usual options
    keep_alive=True,
)
```

If you need more fine-grained control over the behavior of the socket, check out [socket-options](https://docs.sentry.io/platforms/python/configuration/options.md#socket-options).
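For illustration, a hedged sketch of what that fine-tuning might look like. The TCP keep-alive constants below are Linux-specific, and the numeric values are assumptions you should tune for your environment:

```python
import socket

import sentry_sdk

sentry_sdk.init(
    # your usual options
    socket_options=[
        (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
        # The following constants are Linux-only; adjust for your platform.
        (socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 45),
        (socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10),
        (socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 6),
    ],
)
```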

### Multiprocessing deprecation in Python 3.12 and 3.13

If you're on Python version 3.12 or 3.13, you might see the following deprecation warning on Linux environments since the SDK spawns several threads.

```bash
DeprecationWarning: This process is multi-threaded, use of fork() may lead to deadlocks in the child.
```

To remove this deprecation warning, set the [multiprocessing start method to `spawn` or `forkserver`](https://docs.python.org/3.14/library/multiprocessing.html#contexts-and-start-methods). Remember to do this only in the `__main__` block.

```python
import sentry_sdk
import multiprocessing
import concurrent.futures

sentry_sdk.init()

if __name__ == "__main__":
    multiprocessing.set_start_method("forkserver")  # or "spawn"
    with concurrent.futures.ProcessPoolExecutor() as pool:
        pool.submit(sentry_sdk.capture_message, "world")
```

`fork` was the default start method on POSIX platforms on Python 3.13 and lower. In Python 3.14, the default start method on POSIX platforms was changed to `forkserver`. On Windows and macOS the default is `spawn`.

### Why was my tag value truncated?

Currently, every tag value has a maximum length of 200 characters. Tag values over this limit are truncated, potentially losing important information. To retain this data, you can split it across several tags or attach it as structured span data instead.

For example, a 200+ character tagged request:

`https://empowerplant.io/api/0/projects/ep/setup_form/?user_id=314159265358979323846264338327&tracking_id=EasyAsABC123OrSimpleAsDoReMi&product_name=PlantToHumanTranslator&product_id=161803398874989484820458683436563811772030917980576`

The 200+ character request above is truncated to:

`https://empowerplant.io/api/0/projects/ep/setup_form/?user_id=314159265358979323846264338327&tracking_id=EasyAsABC123OrSimpleAsDoReMi&product_name=PlantToHumanTranslator&product_id=1618033988749894848`

Instead, using `span.set_tag` and `span.set_data` preserves the details of this query using structured metadata. This could be done over `base_url`, `endpoint`, and `parameters`:

```python
import sentry_sdk

# ...

base_url = "https://empowerplant.io"
endpoint = "/api/0/projects/ep/setup_form"
parameters = {
    "user_id": 314159265358979323846264338327,
    "tracking_id": "EasyAsABC123OrSimpleAsDoReMi",
    "product_name": "PlantToHumanTranslator",
    "product_id": 161803398874989484820458683436563811772030917980576,
}

with sentry_sdk.start_span(op="request", transaction="setup form") as span:
    span.set_tag("base_url", base_url)
    span.set_tag("endpoint", endpoint)
    span.set_data("parameters", parameters)
    # make_request stands in for your HTTP client call
    make_request(
        "{base_url}{endpoint}/".format(
            base_url=base_url,
            endpoint=endpoint,
        ),
        data=parameters,
    )

    # ...
```
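If you do need the full value in searchable tags rather than span data, the "split across several tags" approach mentioned above can be sketched with a small helper (`chunk_tag_value` is a hypothetical function, not part of the SDK):

```python
def chunk_tag_value(value, limit=200):
    """Split a long string into pieces that each fit Sentry's 200-character tag limit."""
    return [value[i:i + limit] for i in range(0, len(value), limit)]

url = "https://empowerplant.io/api/0/projects/ep/setup_form/?" + "x" * 450
for i, chunk in enumerate(chunk_tag_value(url)):
    # sentry_sdk.set_tag(f"url_{i}", chunk)  # uncomment in a real app
    assert len(chunk) <= 200
```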

## [Profiling](https://docs.sentry.io/platforms/python/troubleshooting.md#profiling)

### Why am I not seeing any profiling data?

If you don't see any profiling data in [sentry.io](https://sentry.io), you can try the following:

* Ensure that [Tracing is enabled](https://docs.sentry.io/platforms/python/tracing.md).
* Ensure that the automatic instrumentation is sending performance data to Sentry by going to the **Performance** page in [sentry.io](https://sentry.io).
* If the automatic instrumentation is not sending performance data, try using [custom instrumentation](https://docs.sentry.io/platforms/python/tracing/instrumentation/custom-instrumentation.md).
* Enable [debug mode](https://docs.sentry.io/platforms/python/configuration/options.md#debug) in the SDK and check the logs.

### [Upgrading From Older SDK Versions](https://docs.sentry.io/platforms/python/troubleshooting.md#upgrading-from-older-sdk-versions)

#### [Transaction-Based Profiling](https://docs.sentry.io/platforms/python/troubleshooting.md#transaction-based-profiling)

The transaction-based profiling feature was experimental prior to version `1.18.0`. To update your SDK to the latest version, remove `profiles_sample_rate` from `_experiments` and set it in the top-level options.

```python
sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    traces_sample_rate=1.0,
-   _experiments={
-       "profiles_sample_rate": 1.0,  # for versions before 1.18.0
-   },
+   profiles_sample_rate=1.0,
)
```

#### [Continuous Profiling](https://docs.sentry.io/platforms/python/troubleshooting.md#continuous-profiling)

The continuous profiling feature was experimental prior to version `2.24.1`. To upgrade your SDK to the latest version:

* Remove `continuous_profiling_auto_start` from `_experiments` and set `profile_lifecycle="trace"` in the top-level options.
* Add `profile_session_sample_rate` to the top-level options.
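Assuming a configuration that mirrors the transaction-based example above, the migration might look like this (a sketch; adjust the sample rate to your needs):

```python
sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    traces_sample_rate=1.0,
-   _experiments={
-       "continuous_profiling_auto_start": True,
-   },
+   profile_session_sample_rate=1.0,
+   profile_lifecycle="trace",
)
```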

## [Crons](https://docs.sentry.io/platforms/python/troubleshooting.md#crons)

### Why aren't recurring job errors showing up on my monitor details page?

You may not have [linked errors to your monitor](https://docs.sentry.io/platforms/python/crons.md#connecting-errors-to-cron-monitors).

### Why are my monitors showing up as failed?

The SDK might be experiencing network issues. Learn more about [troubleshooting network issues](https://docs.sentry.io/platforms/python/troubleshooting.md#network-issues).

### Why am I not receiving alerts when my monitor fails?

You may not have [set up alerts for your monitor](https://docs.sentry.io/platforms/python/crons.md#alerts).

### What is the crons data retention policy for check-ins?

Our current data retention policy is 90 days.

### Do you support a monitor schedule with a six-field crontab expression?

Currently, we only support crontab expressions with five fields.

### Can I monitor async tasks as well?

Yes, just make sure you're using SDK version `1.44.1` or higher since that's when support for monitoring async functions was added.
