AI Privacy and Security
Learn how AI features in Sentry handle your data securely and protect your privacy.
Generative AI features in Sentry, including Seer, are designed with your privacy and security in mind. We take the following measures to ensure that your data is handled securely:
- We use the data listed below to provide insights, analysis, and solutions for your review. By default, your data will not be used to train any generative AI models, and it will never be used for training without your express consent. AI-generated output from your data is shown only to you, not to other customers.
- Our generative AI features are powered by large language models (LLMs) hosted by subprocessors identified on our subprocessor list. Our subprocessors are permitted to use the data only as directed by us.
- The data used for these features includes:
  - Error messages
  - Stack traces
  - Spans and traces
  - Logs
  - DOM interactions
  - Profiles
  - Relevant code from linked repositories
- For EU region customers, data is stored in the European Union.
- You can learn more about our data privacy practices here.