Engineering Notes

Performance Budgets for Mobile Teams

5 January 2025

SLOs for cold start, ANR, and crash-free rate, plus CI gates so we don't ship regressions.

TL;DR

Define SLOs (cold start, ANR, crash-free) and add a CI benchmark job that compares to a baseline. Use baseline profiles and lazy init. Fail the build or block release when benchmarks regress.

Architecture

  • Define SLOs up front: cold start (time to first frame), ANR rate, crash-free rate.
  • Add a CI job that runs the app (or a slim harness), captures startup time and frame times, and compares them to a baseline. Store baselines in the repo or as CI artifacts.
  • On regression, fail the build or require an explicit override.
  • Use baseline profiles (Android) or equivalent optimizations so the critical startup path is pre-compiled.
  • Lazily initialize anything off the critical path (e.g. analytics, feature flags) after the first frame.
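The CI regression gate boils down to one comparison step. A minimal sketch, assuming the benchmark job emits a measured cold-start time and the baseline is read from a checked-in file; the 5% tolerance and the numbers are illustrative, not recommendations:

```kotlin
// Decide whether a measured cold-start time is a regression against a
// stored baseline, allowing a small tolerance for run-to-run noise.
// The 5% default tolerance is an illustrative value, not a recommendation.
fun isRegression(baselineMs: Double, measuredMs: Double, tolerancePct: Double = 5.0): Boolean {
    val allowedMs = baselineMs * (1 + tolerancePct / 100.0)
    return measuredMs > allowedMs
}

fun main() {
    val baselineMs = 820.0  // e.g. read from a checked-in baseline file
    val measuredMs = 840.0  // e.g. parsed from the benchmark job's output
    if (isRegression(baselineMs, measuredMs)) {
        System.err.println("Cold start regressed: ${measuredMs}ms vs baseline ${baselineMs}ms")
        kotlin.system.exitProcess(1) // fail the build
    }
    println("Cold start OK: ${measuredMs}ms within tolerance of ${baselineMs}ms")
}
```

Keeping the gate this dumb is deliberate: the comparison is trivially auditable, so when it fails, the argument is about the measurement, not the gate.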

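Lazy-initing the non-critical path can be as simple as queueing initializers and draining the queue once the first frame is reported. A framework-agnostic sketch; `StartupScheduler` is a hypothetical helper, and on Android the `onFirstFrame()` trigger would come from something like a first-draw listener:

```kotlin
// Queue non-critical initializers and run them only after the first
// frame has been reported, keeping the critical startup path lean.
// StartupScheduler is a hypothetical helper, not a platform API.
object StartupScheduler {
    private val deferred = mutableListOf<Pair<String, () -> Unit>>()
    private var firstFrameSeen = false

    fun deferUntilFirstFrame(name: String, task: () -> Unit) {
        // After first frame, late registrations just run immediately.
        if (firstFrameSeen) task() else deferred += name to task
    }

    // Call this from the platform's first-frame callback
    // (e.g. a first-draw listener on Android).
    fun onFirstFrame() {
        firstFrameSeen = true
        deferred.forEach { (name, task) ->
            println("init: $name")
            task()
        }
        deferred.clear()
    }
}

fun main() {
    StartupScheduler.deferUntilFirstFrame("analytics") { /* init analytics SDK */ }
    StartupScheduler.deferUntilFirstFrame("feature-flags") { /* fetch flag config */ }
    // ... critical-path startup work happens here ...
    StartupScheduler.onFirstFrame()
}
```

A registry like this also gives the benchmark harness one obvious place to verify that nothing heavy sneaks back onto the critical path.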
Failure modes

  • Benchmarks are flaky, so teams learn to ignore or disable them.
  • Cold start improves but the ANR rate gets worse (e.g. deferred work piles up on the main thread right after first frame).
  • Baseline profile not regenerated after big refactors, so the critical path is no longer pre-compiled.
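One way to blunt the first failure mode is to run the benchmark several times per build and gate on the median rather than a single run, so one noisy iteration can't flip the verdict. A sketch, with the sample values as assumptions:

```kotlin
// Gate on the median of several benchmark iterations instead of a
// single run, so one noisy measurement cannot flip the verdict.
fun median(samples: List<Double>): Double {
    require(samples.isNotEmpty()) { "need at least one sample" }
    val sorted = samples.sorted()
    val mid = sorted.size / 2
    return if (sorted.size % 2 == 1) sorted[mid]
           else (sorted[mid - 1] + sorted[mid]) / 2.0
}

fun main() {
    // e.g. five cold-start measurements from the CI benchmark job
    val runs = listOf(812.0, 1430.0, 798.0, 805.0, 821.0) // one noisy outlier
    println("median cold start: ${median(runs)}ms")       // outlier ignored
}
```

The cost is wall-clock time in CI (N iterations instead of one), which is usually a better trade than a gate nobody trusts.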

Testing checklist

  • CI benchmark job runs on every PR and compares results to the baseline.
  • Manual or automated test: cold start under 2 s on a mid-range device.
  • ANR monitoring in production; alert on regression.
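For the production side of the checklist, the alert rule is a threshold comparison on the rolling crash-free (or ANR-free) session rate. A minimal sketch; the 99.5% target and the session counts are illustrative assumptions:

```kotlin
// Alert when the crash-free session rate drops below the SLO target.
// The 99.5% target is an illustrative number, not a recommendation.
fun crashFreeRate(totalSessions: Long, crashedSessions: Long): Double {
    require(totalSessions > 0) { "need at least one session" }
    return 100.0 * (totalSessions - crashedSessions) / totalSessions
}

fun breachesSlo(rate: Double, targetPct: Double = 99.5): Boolean = rate < targetPct

fun main() {
    val rate = crashFreeRate(totalSessions = 200_000, crashedSessions = 1_400)
    println("crash-free: $rate%")
    if (breachesSlo(rate)) {
        println("ALERT: crash-free rate below SLO target") // page or notify here
    }
}
```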

What I'd do differently next time

I’d add a “performance review” gate to the release process: someone (or a bot) signs off that the release doesn’t regress SLOs, so big launches get a human checkpoint on top of CI.