Mobile App Profiling

The typical approach to measuring mobile app performance involves setting up automated performance tests for various user flows that run during continuous integration (CI), or manually profiling hot paths in your app with tools like Apple Instruments or Android Profiler. Both approaches are useful, but they come with several drawbacks:

  • The environment is near-optimal and highly controlled. Developers often use the latest, most powerful devices and test over excellent network conditions. These conditions don't always reflect real-world scenarios: users may be running your app on low-end hardware, older operating system versions, or poor network connections. Such conditions can be simulated to some extent, but the full matrix of variables is too large to test every combination on a regular basis.
  • Only a subset of user flows is tested. Performance tests are typically set up for the most common user flows, but it's impractical to build and maintain tests for every possible way someone might use your app.
  • Metrics are calculated based on a small number of samples. Performance metrics are often calculated by aggregating the results from a small number of runs. While this can be sufficient in some cases, it may not be enough data to give you an accurate, statistically significant measurement, especially if you need to account for different hardware, thermal conditions, and so on.
  • The process is difficult and time-consuming. Manually running extensive performance tests before every release takes significant effort, and so does building and maintaining automated tests. Interpreting the results can also be difficult and frustrating for someone who isn't a performance specialist and is unfamiliar with the tools or the process.

Profiling is designed to address all of these issues:

  • The environments are diverse and reflect real-world conditions. We collect performance data automatically from every environment that your app is running in out in the real world, accounting for different device hardware, OS versions, app versions, network conditions, and more.
  • We cover the widest possible set of user flows. Since the data is collected from actual users using your app as they normally would, we can capture all kinds of usage patterns that you may not have thought about when building your own performance tests.
  • Metrics are calculated using a large number of samples. We can collect data from up to 100% of your user population in production, allowing us to compute more accurate metrics.
  • The process is simple. Integrate the Sentry SDK, enable profiling (see the example after this list), ship your app, and let us worry about analyzing performance and detecting regressions between versions. We encourage testing in production, which improves developer velocity and reduces the overhead of maintaining your own tests and tooling. Our dashboard is designed to be approachable to all engineers, not just those with a performance engineering background.
  • The overhead is minimal. Depending on the platform, the overhead can be lower than 1% of your app’s CPU time.
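
For example, enabling profiling generally only requires a small addition to your existing SDK initialization. Below is a minimal sketch for an iOS app using the Sentry Cocoa SDK; the DSN is a placeholder, and the 1.0 (100%) sample rates are illustrative values you would tune for your own traffic.

```swift
import Sentry

// Typically called early in the app's lifecycle, e.g. in
// application(_:didFinishLaunchingWithOptions:).
SentrySDK.start { options in
    // Placeholder DSN; use the one from your Sentry project settings.
    options.dsn = "https://examplePublicKey@o0.ingest.sentry.io/0"

    // Profiles are collected for sampled transactions, so tracing
    // must be enabled and sampled as well.
    options.tracesSampleRate = 1.0   // illustrative: trace 100% of transactions
    options.profilesSampleRate = 1.0 // illustrative: profile 100% of sampled transactions
}
```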