---
title: "Caches"
description: "Learn more about cache monitoring with Sentry and how it can help improve your application's performance."
url: https://docs.sentry.io/product/dashboards/sentry-dashboards/backend/caches/
---

# Caches

A cache can be used to speed up data retrieval, improving application performance. It temporarily stores data to speed up subsequent access to that data, allowing your application to get data from cached memory (if it is available) instead of having to repeatedly fetch the data from a potentially slow data layer. Caching can speed up read-heavy workloads for applications like Q\&A portals, gaming, media sharing, and social networking.

A well-utilized cache has a high hit rate, which means the requested data was present in the cache when it was fetched. A cache miss occurs when the requested data was not present in the cache. If you have [performance monitoring](https://docs.sentry.io/product/sentry-basics/performance-monitoring.md#how-to-set-up-performance-monitoring) enabled and your application uses caching, you can see how your caches are performing with Sentry.
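The hit/miss mechanics can be illustrated with a minimal in-memory cache that counts both outcomes (the class and names below are hypothetical, purely for illustration):

```python
# Minimal in-memory cache that tracks hits and misses (illustrative only).
class CountingCache:
    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key, loader):
        """Return the cached value, falling back to the slow data layer."""
        if key in self._store:
            self.hits += 1              # cache hit: data was present
            return self._store[key]
        self.misses += 1                # cache miss: fetch and store
        value = loader(key)
        self._store[key] = value
        return value

    @property
    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


cache = CountingCache()
cache.get("user:1", lambda k: "Alice")    # miss: loads from the data layer
cache.get("user:1", lambda k: "Alice")    # hit: served from memory
print(f"hit rate: {cache.hit_rate:.0%}")  # → hit rate: 50%
```

The hit rate Sentry reports is this same ratio, computed from the cache spans your application emits.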

Sentry's cache monitoring provides insights into cache utilization and latency to help you improve performance on endpoints that interact with caches.

With the Cache dashboard found in [Sentry Dashboards](https://sentry.io/orgredirect/organizations/:orgslug/dashboards/), you get an overview of the transactions within your application that are making at least one lookup against a cache. From there, you can dig into specific cache span operations by clicking a transaction and viewing its sample list.

## [Instrumentation](https://docs.sentry.io/product/dashboards/sentry-dashboards/backend/caches.md#instrumentation)

Cache monitoring currently supports [auto instrumentation](https://docs.sentry.io/platform-redirect.md?next=%2Ftracing%2Finstrumentation%2Fautomatic-instrumentation) for [Django's cache framework](https://docs.djangoproject.com/en/5.0/topics/cache/) when the [cache\_spans option](https://docs.sentry.io/platforms/python/integrations/django.md#options) is set to `True`. Other frameworks require custom instrumentation.
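For Django, enabling cache spans is a single option on the integration. A configuration sketch (the DSN is a placeholder; replace it with your project's DSN):

```python
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,  # tracing must be enabled for cache spans to be sent
    integrations=[
        DjangoIntegration(
            # Emit spans for operations against Django's cache framework
            cache_spans=True,
        ),
    ],
)
```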

### [Custom instrumentation](https://docs.sentry.io/product/dashboards/sentry-dashboards/backend/caches.md#custom-instrumentation)

Where available, custom instrumentation is documented for each SDK below:

* [Python SDK](https://docs.sentry.io/platforms/python/tracing/instrumentation/custom-instrumentation/caches-module.md)
* [JavaScript SDKs](https://docs.sentry.io/platforms/javascript/guides/node/tracing/instrumentation/custom-instrumentation/caches-module.md)
* [PHP SDK](https://docs.sentry.io/platforms/php/tracing/instrumentation/caches-module.md)
* [Java SDK](https://docs.sentry.io/platforms/java/tracing/instrumentation/custom-instrumentation/caches-module.md)
* [Ruby SDK](https://docs.sentry.io/platforms/ruby/tracing/instrumentation/custom-instrumentation/caches-module.md)
* [.NET SDK](https://docs.sentry.io/platforms/dotnet/tracing/instrumentation/custom-instrumentation/caches-module.md)

To see what cache data can be set on spans, see the [Cache Module Developer Specification](https://develop.sentry.dev/sdk/performance/modules/caches/).
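As a rough sketch of what that specification describes, a `cache.get` span records the outcome of the lookup and the size of the returned item as span data. The helper below is hypothetical and only illustrates the shape of that data; it is not SDK code:

```python
# Illustrative only: the span data a cache lookup might report,
# loosely following the Cache Module spec's attribute names.
def cache_get_span_data(key, value):
    hit = value is not None
    data = {
        "cache.key": [key],  # key(s) that were looked up
        "cache.hit": hit,    # True on a hit, False on a miss
    }
    if hit:
        # Item size is only known when a value was actually returned.
        data["cache.item_size"] = len(str(value).encode("utf-8"))
    return {"op": "cache.get", "description": f"get {key}", "data": data}


span = cache_get_span_data("user:1", "Alice")
print(span["data"]["cache.hit"])  # → True
```

Setting `cache.hit` on each lookup span is what lets Sentry derive the hit and miss rates shown in the dashboard.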

## [Caches Dashboard](https://docs.sentry.io/product/dashboards/sentry-dashboards/backend/caches.md#caches-dashboard)

The **Caches** dashboard gives an overview of cache performance across all endpoints for currently selected backend projects with summary graphs for **Miss Rate** and **Requests Per Minute** (throughput). You can use these as a starting point to see if there are any potential cache performance issues, for example, a higher than expected Miss Rate percentage.

If you see an anomaly or want to investigate a time range further, click and drag to select a range directly in the graph; the data will then be filtered to that specific time range.

The transaction table shows a list of endpoints that contain at least one `cache.get` span along with:

* Average value size (the number of bytes fetched from the cache)
* Requests per minute
* Miss rate percentage (how often a lookup did not return a value)
* Time spent (total time your application spent on a given transaction)
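The columns above derive directly from the underlying `cache.get` spans. A hypothetical aggregation over a window of sampled spans might look like this (field names are assumptions for the sketch):

```python
# Hypothetical aggregation of cache.get spans into the table's columns.
def summarize(spans, window_minutes):
    """spans: list of dicts with 'hit' (bool), 'size' (bytes), 'duration_ms'."""
    total = len(spans)
    misses = sum(1 for s in spans if not s["hit"])
    sizes = [s["size"] for s in spans if s["hit"]]
    return {
        "avg_value_size": sum(sizes) / len(sizes) if sizes else 0,  # bytes fetched on hits
        "requests_per_minute": total / window_minutes,              # throughput
        "miss_rate_pct": 100.0 * misses / total if total else 0.0,  # failed lookups
        "time_spent_ms": sum(s["duration_ms"] for s in spans),      # total time in sampled spans
    }


spans = [
    {"hit": True, "size": 512, "duration_ms": 2.0},
    {"hit": True, "size": 256, "duration_ms": 1.5},
    {"hit": False, "size": 0, "duration_ms": 8.0},  # misses are slower:
    {"hit": False, "size": 0, "duration_ms": 9.0},  # they fall through to the data layer
]
print(summarize(spans, window_minutes=1))
```

Note that the sketch sums only the sampled span durations; Sentry's "time spent" column is computed over whole transactions.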

By default, this table is sorted by most time spent, which means the endpoints at the top are slow, requested very frequently, or both.

Click on a transaction to go to the Transaction Summary page, or explore span samples on the Traces page.

## [Sample List](https://docs.sentry.io/product/dashboards/sentry-dashboards/backend/caches.md#sample-list)

To help you compare the performance of cache hits versus cache misses over time, Sentry automatically surfaces a distribution of samples of both for the timeframe selected on the **Caches** page.
