AI Privacy Principles
Learn about how Sentry handles your data and protects your privacy when it comes to AI.
Generative AI features in Sentry, including Seer, are designed with your privacy in mind. The core principles below describe how we handle your data when you use these features.
Our generative AI features use some of the data you've configured Sentry to collect and send to your Sentry instance. This data provides additional insights, analysis, and suggested solutions for your review.
The data used for these features includes:
- Error messages
- Stack traces
- Spans and traces
- Logs
- DOM interactions
- Profiles
- Relevant code from linked repositories
By default, your data is not used to train generative AI models. If you want to help improve the Sentry service, including Seer, you may opt in to further use of your data for product improvement. Learn more about how your data is protected and used.
By default, our generative AI features are powered by third-party large language models (LLMs) that are hosted within Sentry's production infrastructure and are not accessible to the underlying third-party model providers. This is enabled by the infrastructure providers set forth on our subprocessor list. For any features that rely on LLMs hosted outside Sentry's production infrastructure, we will identify the applicable subprocessors.
In all cases, our subprocessors are only permitted to use the data as directed by Sentry and in accordance with the commitments we make to you for the Sentry service, including for data retention and data storage location.
Further, by design, AI-generated outputs derived from your data are shown only to you and are never shared with other customers.
Sentry offers an administrative-level control to disable all generative AI features for your organization.
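If you manage organization settings programmatically, the minimal sketch below shows one way toggling this might look, using Sentry's organization details endpoint. The `hideAiFeatures` field name and placeholder values are assumptions for illustration only; confirm the actual setting name in your organization's settings page or the API reference before relying on it.

```python
# Minimal sketch: disabling generative AI features for an organization via
# Sentry's organization details endpoint (PUT /api/0/organizations/{slug}/).
# The "hideAiFeatures" field name is an assumption for illustration -- check
# the API reference for the actual flag your instance exposes.
import requests

SENTRY_API = "https://sentry.io/api/0"
ORG_SLUG = "your-org-slug"      # replace with your organization slug
AUTH_TOKEN = "your-auth-token"  # token with org:write scope

response = requests.put(
    f"{SENTRY_API}/organizations/{ORG_SLUG}/",
    headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
    json={"hideAiFeatures": True},  # assumed field name; see note above
    timeout=30,
)
response.raise_for_status()
print(f"Generative AI features disabled for {ORG_SLUG}")
```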