The Privacy Dilemma
Software products face a fundamental tension between utility and privacy.
Since two of software’s essential functions are data storage and processing, software companies tend to become more valuable as they collect more data. In an ideal world, this value would translate directly into customer value. But in reality, each piece of data we share carries some chance of being used against our best interests, whether by a third party or by the company itself.
Our best answer to the privacy dilemma seems to be something like: “a good enough product outweighs the risk.” But it’s not a solution.
While GDPR and other privacy regulations try their best, they’re difficult to enforce: their governing bodies simply don’t have the resources.
Even if a company wants to do the right thing and take a state-of-the-art approach to privacy, doing so incurs significant development and operational costs, costs that could easily prove fatal for all but the most stable enterprises.
Privacy is like a lot of social goods: valuable, easy to demand, but difficult to achieve organically. As with many social contracts, our best solution is the crude-but-effective policy of trusting proactively and punishing reactively.
When dreaming up software products, I’ve often gotten stuck on the privacy dilemma.
The big question is whether privacy is solvable: is it more like a law of physics, or is it just a puzzle we haven’t solved yet?