Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion. We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge. Our identities may be distributed across many of your jurisdictions. The only law that all our constituent cultures would generally recognize is the Golden Rule. We hope we will be able to build our particular solutions on that basis. But we cannot accept the solutions you are attempting to impose.
— John Perry Barlow
The first time I heard the phrase “user sovereignty” was while working at Mozilla on the Firefox web browser. Firefox ostensibly follows user sovereign design principles and respects its users. Mozilla has even baked it into their list of design principles on page 5 of the Firefox Design Values Booklet. But what does “user sovereignty” actually mean and what are the principles that define user-sovereign design?
The earliest discussion of the phrase I could find is a blog post from August 4th, 2011 by the “Chief Lizard Wrangler” herself, Mitchell Baker the CEO of Mozilla. In it she prophetically describes user sovereignty as the consequence of new “engines” that are “…open, open-source, interoperable, public-benefit, standards-based, platforms…” She also makes the critical link between the philosophy of openness and standards-based interoperability with that of identity management and personal data dominion:
Where is the open source, standards-based platform for universally accessible, decentralized, customized identity on the web? Today there isn’t one… Where is the open source, standards-based engine for universally accessible, decentralized, customized, user-controlled management of personal information I create about myself? Today there isn’t one.
Looking back from 2020, so much of what Mitchell said is correct that I consider her post a founding document of the user-sovereign design movement. But Mitchell wasn’t the only one. Mozilla was a hotbed for this line of thinking in 2011 and for a few years that followed. Ben Adida, Mozilla’s Identity Lead back then, also posted on more technical aspects of user sovereignty.
In his post from January 13th, 2012, Ben outlines three, then new, Mozilla products designed in the spirit of user sovereignty: decentralized identity (BrowserID), a mobile web-based OS (FirefoxOS), and an app store (progressive web apps). All three failed in the market, I believe, because they didn't go far enough towards user sovereignty.
I can’t fault the Mozilla leadership for not knowing, back in 2011, what we know now. Back then, Facebook, Twitter and YouTube were still considered benevolent wonders of the modern internet world. It wasn’t until several years later when the social media (socmed) platforms shifted their focus to global user surveillance and facilitating political manipulation that technologists fully grasped the societal implications and dangers they embodied.
By 2017, the dominance of the socmed platforms was ubiquitous and global. Leaders at Facebook and elsewhere began policing content, taking us all down the slippery slope of choosing who gets to speak and who doesn't. Today, the socmed platforms have so much data and power that they act like digital feudal aristocrats who own the land, the market, and the people. They make diktats that affect people in their real lives. They can also sway elections one way or another, giving them the ability to hold onto power by backing politicians who support their corporate interests or their personal political biases.
We’re no longer surprised that saying the wrong thing online can cost you your job, but what about losing your bank account? Or how about YouTube content creators being demonetized or outright banned, costing them their livelihood? The worst part is the imbalance in the terms of use between users and the platforms, and users’ complete lack of power to challenge this kind of policing. If YouTube bans your account, you have very little opportunity to appeal the decision and no power to force them to reinstate you. There is a complete lack of transparent due process.
Popular psychologist Jordan Peterson recently described his experience of being locked out of his Google/YouTube account.
The interesting thing about Mr. Peterson’s situation is that he forced reinstatement by organizing what can best be described as a peasant revolt against the aristocrats at Google. As he tells it, he tried to warn the Google account management people that banning him “might not be a good idea.” When Google refused to reconsider, he contacted a number of prominent journalists and tweeted to his 1.4 million followers. A few hours later his account was reactivated; he suspects the publicity was the reason.
This outcome may give some people hope that there is a balance of power between the users and the socmed platforms. Unfortunately there isn’t. Or at least, what little there is can only be leveraged by famous people with large internet followings that share their animosity for the digital aristocracy. I’m sure Joe Rogan, Sam Harris, Jordan Peterson and others like them can use the mob to protect their user sovereignty but the other 4.5 billion internet users cannot.
The socmed platforms like Facebook, Twitter, and YouTube are important in that they demonstrate exactly what an internet system with little-to-no user sovereignty looks like. Users of those systems are faced with choosing between giving up all sovereignty or not using the system at all. They have no ability to be private, and all access is gated on revealing their real-world identity. The platforms do use open and standard protocols for communication (e.g. HTTPS, TLS, etc.), but the data you upload is not retrievable in a portable, standard format, and the relationships the data encodes are only valid in the context of the socmed platform they came from. What good is the ability to download our data if we have to leave our friends and followers behind when we leave the platform?
Users are only granted limited rights under consumer protections laws like the GDPR (in Europe) and the CCPA (in California), all other power rests with the platforms. As I see it, legislative protections like the GDPR and CCPA are like the Magna Carta in that they just formalize the relationship between the tech oligarchs at Facebook, Twitter, and YouTube and the peasants that use those platforms. The theory of user sovereignty is much more akin to the United States Declaration of Independence in that it completely rejects the existing order and power structure on the internet.
These platforms are of and by the web. They cannot exist in any other context, and the web is so biased toward centralization and user subjugation that the socmed companies appear to be reaching their limits; societal and political limits, not technical ones. Having the CEOs of top companies dragged before Congress multiple times per year is a sign of a failing industry. The last companies to receive such treatment were all in the tobacco industry. It’s time to go deeper “into the stack” and rethink the underlying design principles before we build anything new.
If Facebook, Twitter and YouTube define one end of the spectrum of user sovereignty, what does the other end of the spectrum look like? How would a system designed to be fully user sovereign function? Before we can answer that question we must decide what the principles of user sovereignty are. I think they are easy to figure out just by thinking of the opposite of how socmed platforms work.
There are just six principles that, when followed, produce a fully user-sovereign system design:
Privacy for users is about giving them full control over how — and if — they are correlated across time and space. Correlation is the ability to identify the same user over subsequent connections (time) and even from different IP addresses (space). Correlation is the foundation of all user tracking and the primary way in which our privacy is violated when we use the web. It is also the basis of the entire surveillance capitalism economy.
A fully user-sovereign system does not keep logs and does not attempt to track users in any way. The system has no correlation ability. Users decide what level of correlation they are comfortable with. Users may share data with systems and allow correlation to take advantage of any bespoke services provided. However, the user retains full control over the correlation and may modify, and even terminate it, at any point in the future.
Preserving the users’ ability to come and go without notice is crucial because without privacy users lose much of their leverage in a world dominated by surveillance capitalism. All of the new artificial intelligence (AI) systems rely entirely on access to large amounts of data; data sometimes gathered by tracking and spying on you. The best way to actively undermine the construction of AI-powered systems is to starve them of your data. Privacy in a user-sovereign system begins as absolute; the user chooses when and how that privacy is reduced.
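To make correlation concrete, here is a minimal Python sketch. The `Tracker` class and its methods are invented for illustration, not any real tracking API: a service that links visits by a stable identifier can build up a profile over time, while a user who presents a fresh random token on every visit looks like a first-time user each time.

```python
import secrets

class Tracker:
    """Toy service log that correlates visits presenting the same identifier."""
    def __init__(self):
        self.visits = {}

    def record(self, identifier: str):
        self.visits[identifier] = self.visits.get(identifier, 0) + 1

    def profile_depth(self, identifier: str) -> int:
        # How many visits the service can link to one "user".
        return self.visits.get(identifier, 0)

tracker = Tracker()

# A stable identifier (e.g. a cookie) lets every visit be correlated.
stable_id = secrets.token_hex(16)
for _ in range(5):
    tracker.record(stable_id)
assert tracker.profile_depth(stable_id) == 5  # full history linked

# One-time identifiers break correlation: each visit looks like a new user.
one_time_ids = [secrets.token_hex(16) for _ in range(5)]
for visit_id in one_time_ids:
    tracker.record(visit_id)
assert all(tracker.profile_depth(v) == 1 for v in one_time_ids)
```

The point of the sketch is that correlation is a property of the identifier, not of the connection: the user, not the service, decides whether successive visits are linkable.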
What is pseudonymity and how is it different from anonymity? Anonymity is the property of having no name. Pseudonymity is when we use a name that is not tied directly to our real identity, such as a nickname. Names in digital systems are not the same as names in the real world. IP addresses, random numbers, and even public cryptographic keys are all examples of pseudonyms. Digital systems use pseudonyms because without them no reply packet could ever be sent, let alone be routed to the recipient and received.
Pseudonymity gives users control over the degree to which the system and other users know them. A user can appear to be a first-time user of the system, every time, by using a one-time pseudonym for every interaction. On the other hand, the user may also choose to present full know-your-customer (KYC) credentials that reveal their real-world identity in a cryptographically verifiable way. Of course, anything in between is possible as well. A user may wish to use a persistent pseudonym in one discussion forum so that others come to know them by that name, and at other times use one-off pseudonyms in another discussion forum to speak their mind about a controversial topic without any real-world consequences.
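One way that spectrum can be sketched is with per-context derived names. In this illustrative Python snippet (the `pseudonym` and `one_off_pseudonym` helpers are hypothetical, standing in for what would be key-based identities in a real system), a user derives a stable name per forum from a private master secret, so a forum can recognize them across visits, while names from different forums cannot be linked to each other:

```python
import hashlib
import hmac
import secrets

master = secrets.token_bytes(32)  # the user's private root secret

def pseudonym(context: str) -> str:
    """Derive a stable per-context pseudonym from the master secret.

    The same (master, context) pair always yields the same name, so the
    user is recognizable *within* a forum, while names derived for
    different contexts are cryptographically unlinkable to each other.
    """
    return hmac.new(master, context.encode(), hashlib.sha256).hexdigest()[:16]

def one_off_pseudonym() -> str:
    # A throwaway name for a single controversial post: no linkage at all.
    return secrets.token_hex(8)

# Stable within a context: the forum can know the user by this name.
assert pseudonym("cooking-forum") == pseudonym("cooking-forum")
# Unlinkable across contexts: two forums cannot match up their users.
assert pseudonym("cooking-forum") != pseudonym("politics-forum")
```

The design choice here is that linkability is derived, not stored: there is no account database tying the names together, only a secret the user alone holds.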
Another key aspect of pseudonymity is network level tracking. User sovereign systems only operate on privacy-protecting internet platforms like the Tor network. Operating as a Tor hidden service or some other IP masked service not only maximizes user pseudonymity but also the pseudonymity of the operator of the service. Eventually, the user sovereign internet will require a ubiquitous and pervasive mix net transport layer that is used by everyone for all internet communications.
Encryption forms the backbone of all user-sovereign design. It must always be used to protect data in motion as well as data at rest from unauthorized observation of personal data and internet usage. Without it, users cannot enforce their privacy and pseudonymity. They cannot use verifiable credentials and zero-knowledge proofs for authorization. Without encryption users lose all leverage and have no sovereignty on the internet.
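As a toy illustration of protecting data at rest, here is a one-time pad in Python. This is a sketch only: a one-time pad is information-theoretically secure when the key is truly random, as long as the message, and never reused, but a real system would use an authenticated cipher such as AES-GCM, not this.

```python
import secrets

def otp_encrypt(plaintext: bytes):
    # One-time pad: XOR the message with a random key of the same length.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, stored = otp_encrypt(b"my private notes")
# Without the key, the stored bytes reveal nothing about the plaintext.
assert otp_decrypt(key, stored) == b"my private notes"
```

The user-sovereignty point is in the key handling: whoever holds `key` controls access to the data, so data encrypted under a user-held key stays under the user's control no matter where the ciphertext is stored.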
Governments around the world continue to fight legal battles to limit or ban the use of strong encryption. Just like guns in the hands of citizens, strong encryption represents a real threat to the power of any government. It is the only way we will keep the internet from becoming a global full-spectrum surveillance tool that tracks our every move. It is the only way we will avoid social credit systems manipulating us into becoming livestock held captive in regional people farms that are self-enforced by our fear of social consequences.
A large part of the balance of power between users and systems is a user’s ability to take their data and go to a competing service. Just like free-market price competition puts downward pressure on prices, user and data mobility creates pressure towards more user sovereignty in online systems. One such system that follows this principle is email. It is possible to download your email from one service using the standard IMAP protocol, store the messages in a standard .mbox file and your contacts in a standard vCard (.vcf) file, and then upload them to another service. This portability has made email almost completely free and one of the most user-sovereign services on the internet.
User sovereign systems are entirely constructed using standard protocols and formats to maximize user portability and their sovereignty over their data. Because so few systems truly follow this principle, we may need to define a lot more new protocols and data formats to meet the requirements of old systems being recreated in a user sovereign way.
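The email example can be demonstrated with Python's standard `mailbox` module: messages written to an mbox file by one program are readable by any other mbox-aware client, with no proprietary format in between (the file path and message contents below are of course made up for illustration).

```python
import mailbox
import os
import tempfile
from email.message import EmailMessage

# Export two messages to a standard mbox file...
path = os.path.join(tempfile.mkdtemp(), "export.mbox")
box = mailbox.mbox(path)
for subject in ("hello", "moving providers"):
    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.com"
    msg["Subject"] = subject
    msg.set_content("portable by design")
    box.add(msg)
box.flush()
box.close()

# ...and any other mbox-aware client or service can read them back.
imported = mailbox.mbox(path)
subjects = [m["Subject"] for m in imported]
assert subjects == ["hello", "moving providers"]
```

Nothing about the reading side depends on the writing side: the open format is the contract, which is exactly the property that lets users change providers without losing their data.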
One of the more recent improvements in user-sovereign technology is the creation of decentralized, blockchain-backed verifiable credentials (VCs) and proofs.
Developed beginning in 2013, VCs allow digital systems to shift away from identity-based authorization — such as access control lists (ACLs) — to more decentralized capability-based authorization. It is now possible to build systems that care about what you are instead of who you are. As my friend Timothy Ruff likes to say:
I only care that the pilot is properly trained and licensed to fly the plane. I do not care what their name is.
Moving away from ACLs means that systems can have proper and strong authorization while also allowing for fully private and pseudonymous users. As long as the credential presentation uses zero-knowledge proofs the user cannot be correlated and tracked while they use authenticated services.
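A rough sketch of the idea in Python follows. The claim names and the HMAC-based "signature" are stand-ins: real verifiable credentials use public-key signatures, and unlinkability comes from zero-knowledge presentations rather than handing over the credential itself. What the sketch does show is the capability check: the verifier tests only the attributes it needs, and the credential carries no name at all.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key. A real issuer would sign with a private key
# and verifiers would check against its public key.
ISSUER_KEY = b"aviation-authority-secret"

def issue_credential(claims: dict) -> dict:
    """Issuer attests to a set of claims about the holder."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify(credential: dict, required: dict) -> bool:
    """Check the issuer's signature, then check only the needed attributes."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False
    # Capability check: what you are, not who you are.
    return all(credential["claims"].get(k) == v for k, v in required.items())

# The credential attests to training and rating; there is no name field.
cred = issue_credential({"licensed_pilot": True, "type_rating": "B737"})
assert verify(cred, {"licensed_pilot": True})      # cleared to fly
assert not verify(cred, {"type_rating": "A380"})   # wrong capability
```

The verifier in this sketch learns nothing it did not explicitly ask for, which is the property that lets strong authorization coexist with pseudonymous users.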
Inevitably, internet services have tacit agreements between users and the system as well as legal terms of service. User-sovereign systems use balanced terms of service to even out the power dynamic. Users will already have a great deal of power from their ability to stay private, pseudonymous, and portable, but to complete the balance the terms of service need to include the users’ terms as well.
The GDPR and CCPA are governmental attempts to balance the power of users and systems, but there are so many loopholes that most internet services just throw up an interstitial EULA that everybody clicks through without fully understanding. Not so on user-sovereign systems. Users won’t be giving up their information by default like they do on the web today. They won’t be using software, such as web browsers, that can’t help but track them, and they won’t be blindly clicking through EULAs to get at content.
The six principles of user sovereignty are important because they give us a moral framework within which we can make engineering and design decisions. Without these values, how do systems designers choose one solution over another when the solutions function the same? Why should we choose verifiable credentials over a real name and password for authorization? Because one respects the user and their sovereignty and the other doesn’t.
It is sometimes hard to think of a world where users have agency on the internet because we were all conditioned to accept the status quo as the technology developed over decades. It isn’t the fault of past systems designers that the world is the way it is. Often, time and money pressures made them choose the quickest and easiest path without really thinking about the long term implications. Even if they did take time to consider the trade-offs, they didn’t have any coherent set of principles to inform their decision making. So the first engineers of Twitter and Facebook and YouTube didn’t know that their creations would one day be used to manipulate and spy on people. But now we know.