“We don’t track our users” is a sentence that has become almost meaningless. Every app says it, most apps do something anyway, and users have learned to ignore the phrase the way they ignore “your call is important to us.”
This post is an honest look at what “we don’t track” has to actually mean if you take it seriously, why the default path in 2026 does the opposite, and what it costs to stay on the other side of that line.
What “we don’t track” actually rules out
When a product says “privacy-first,” the right question to ask is: which revenue streams, SDKs, and convenience features did you drop to earn that phrase?
For us, the answer is a long list.
No analytics SDKs. Not the popular ones, not the “anonymised” ones, not even the ones that promise they only collect counters. The moment any analytics SDK lives in the binary, your users are doing a quiet handshake with someone else’s server on every launch.
No ad networks. No banners, no “free with ads” tier. Ad SDKs are often the worst offenders for fingerprinting, because their business model demands it.
No remote config. Convenient for A/B testing, sure, but every app instance has to phone home on open and identify itself, which we won’t do.
No crash reporting that uploads by default. Crash logs can leak sensitive data. We ship with local crash handlers and no auto-upload.
No telemetry. No usage metrics, no retention funnels, no “which feature is most used.” We ship without knowing, and we rely on user feedback to tell us.
No accounts. No sign-up, no OAuth, no email capture. The apps work on a fresh install with zero network requests.
No cloud sync by default. Where sync exists in one of our apps, it runs over infrastructure the user already controls (a git server, their own cloud account, whatever), never ours. In most cases the data never leaves the device at all. OXI Calendar is a concrete example: every note, reminder, and event lives in a local SQLite database on the user’s Mac, and there is no server on our side to compromise.
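The local-first storage model is small enough to show in full. The sketch below is in Python with the standard-library sqlite3 module purely for illustration; OXI Calendar itself is a native Mac app, and the path and schema here are invented stand-ins. The point is structural: the database is a file in the user's own home directory, and there is no network code anywhere in the path.

```python
import sqlite3
from pathlib import Path

# All state lives in a single file under the user's home directory.
# There is no server component, so there is nothing on "our" side to breach.
DB_PATH = Path.home() / ".oxi" / "calendar.sqlite3"  # hypothetical location

def open_db(path: Path = DB_PATH) -> sqlite3.Connection:
    """Open (or create) the local database. No network calls exist here."""
    path.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS events (
               id INTEGER PRIMARY KEY,
               title TEXT NOT NULL,
               starts_at TEXT NOT NULL  -- ISO 8601; stays on this disk
           )"""
    )
    return conn

def add_event(conn: sqlite3.Connection, title: str, starts_at: str) -> int:
    """Insert an event and return its local row id."""
    cur = conn.execute(
        "INSERT INTO events (title, starts_at) VALUES (?, ?)",
        (title, starts_at),
    )
    conn.commit()
    return cur.lastrowid
```

Everything sync-shaped then becomes the user's choice of transport (their git server, their cloud account) operating on this file, never a service of ours.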
That’s a lot of “no.” The “yes” side is shorter: our apps open, they work offline, and nothing about your usage leaves your device.
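The "local crash handler" item in the list above is a small amount of code, not a product category. Here is a minimal sketch of the idea in Python (the real apps are native; `sys.excepthook` stands in for whatever uncaught-error hook the platform provides, and the directory name is invented): catch the crash, write the traceback to a file on the user's own disk, and stop. There is deliberately no upload step.

```python
import sys
import traceback
from datetime import datetime, timezone
from pathlib import Path

CRASH_DIR = Path.home() / ".oxi" / "crashes"  # hypothetical, local-only

def install_local_crash_handler(crash_dir: Path = CRASH_DIR) -> None:
    """Write uncaught exceptions to a local file. No auto-upload, ever."""
    def handler(exc_type, exc, tb):
        crash_dir.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        report = crash_dir / f"crash-{stamp}.txt"
        report.write_text(
            "".join(traceback.format_exception(exc_type, exc, tb))
        )
        # The user can inspect this file and choose to send it to us.
        # Nothing in this code path ever sends it for them.
        sys.__excepthook__(exc_type, exc, tb)  # still print to stderr
    sys.excepthook = handler
```

The trade is explicit: we only learn about a crash when a user decides to tell us about it.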
Why this is harder than it looks
The reason “privacy-first” deserves to be a feature claim is that almost every framework, template, and starter kit in 2026 does the opposite by default.
Spin up a new Flutter project and follow “the normal stack” tutorials, and you end up with a hosted backend, a hosted crash reporter, a remote-config service for flags, and an analytics funnel tracker. Your app now has half a dozen third parties holding persistent identifiers for every user. Nothing in the IDE warns you. The tutorials don’t mention it. The App Store review process doesn’t flag it. It is simply the path of least resistance.
Picking the other path means writing your own local crash handler instead of plugging in a hosted one. Shipping feature flags in code and accepting a slower release loop instead of flipping a remote switch. Making UX decisions from user interviews instead of retention dashboards. Saying no to an entire category of partnership deals from analytics vendors, growth firms, and ad networks. Explaining on every pitch call that you do not have Daily Active User numbers to share, and that you are not going to start pretending you do.
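“Feature flags in code” sounds abstract, so here is the entire pattern, sketched in Python with invented flag names: the flags are constants baked into the build, flipping one means shipping a new release, and no app instance ever asks a server what it is allowed to do.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flags:
    """Feature flags fixed at build time. Changing one means cutting a new
    release, not a remote-config fetch, so no instance ever phones home."""
    new_week_view: bool = False       # hypothetical example flag
    natural_language_input: bool = True

# The single source of truth, compiled into the binary.
FLAGS = Flags()

def week_view_renderer(flags: Flags = FLAGS) -> str:
    # Call sites branch on the constant; there is no network fallback.
    return "new-week-view" if flags.new_week_view else "classic-week-view"
```

The cost is exactly the one described above: rolling a flag back takes a release cycle instead of a dashboard toggle, and A/B testing across the live user base is simply off the table.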
It’s slower. It’s a real commercial tradeoff. We picked it because the alternative (normalising surveillance because the tooling is easier) is how the category got here in the first place.
Why this matters more in 2026
Ten years ago, the case for privacy-first software was mostly an argument about principle. Data collection felt abstract. The worst-case scenarios were hypothetical.
That framing is over.
Breaches are now routine. If an app has a database of user behaviour, that database will eventually leak, be subpoenaed, be sold in a bankruptcy, or be quietly licensed for AI training. Every dataset you don’t collect is a dataset that can’t be lost.
AI training changed the calculus too. Data a user shared in 2019 for a specific product feature may be training a foundation model in 2026. “We only use it to improve the product” no longer has a predictable time horizon.
Surveillance advertising is retreating at the platform level. Apple’s App Tracking Transparency, iOS privacy labels, the Digital Markets Act, and GDPR enforcement are squeezing the tracking economy from multiple angles. Software built without tracking is on the right side of every regulatory arrow we can see.
And users have noticed. “Offline-only” and “no account required” are starting to show up in App Store review titles as positive differentiators. A decade ago, users who cared about this were a niche. They aren’t a niche anymore.
Building without tracking in 2016 was a moral stance that cost money. Building without tracking in 2026 is a moral stance that is also, increasingly, a commercial one.
The cost of not tracking
Not collecting data on your users has a real cost, and anyone pretending otherwise is selling something.
We cannot see which features people actually use. We cannot measure retention. We cannot tell whether a redesign helped or hurt. When we ship a change, we find out it was wrong from an email two weeks later, or a one-star app review. We have shipped features that looked important in meetings and turned out to be ignored, and we did not notice for months.
This is the part of privacy-first that does not show up on the marketing page. You are building with one eye closed. Competitors who track their users have a tighter feedback loop and, in a lot of cases, will ship better-tuned products faster than we will. That is a real disadvantage, and it is not going away.
We accept it as a trade. Every tracking SDK you add is effectively a promise that you will keep your users’ data safe forever, and that is a promise no software company can actually keep. Breaches happen. Companies get bought. Laws change. The datasets you never collect are the ones nobody can ever lose. We would rather ship a slightly less polished product than sign that promise.
One practical consequence: direct feedback matters more to us than it does to a company with a live analytics dashboard. A clear email from a single user is often the only signal we get that a feature needs rethinking, so we read each one carefully and take them seriously. We are also working on a feedback channel built directly into the apps themselves, so that anyone can tell us what is broken or missing in a couple of taps, without needing to open an email client or chase us down on a social network.
The short version
“We don’t track you” is easy to say and expensive to mean. The only test that really works is the old one: for every feature, ask whether it requires knowing something about the user that you do not actually need. When the answer is yes, find another way or cut the feature. That is almost never a technical problem. It is a priority problem. Most teams say privacy matters to them, and then ship the SDK anyway because the dashboard looks nice.
The good news is that once you draw the line and defend it for a while, the other decisions get easier. You stop missing the features you said no to. Users figure out who you are.