This is the longer version of a shorter note posted on Brown Rudnick’s website.
Most consumer Internet businesses, including Web3 businesses, are, at their core, publishing businesses. Some, like the New York Times, commission, post, and host content they have created themselves. Most Internet businesses, however, ingest content originating from somewhere – or, more often, someone – else, such as user-generated content like marketplace listings, social media posts and videos, or blockchain transaction data, and republish it under license on their own domains.
With success come legal issues, and principal among these is "content moderation" or "trust and safety," industry terms meaning "the censorship and removal of undesirable content." Moderation takes place for a variety of reasons, including user demands, advertiser demands, or government demands. Which demands reach a business, and how a business chooses to respond to them, will vary widely from one platform to the next – one would not expect to encounter large volumes of hate speech on a marketplace or personals app, for example, nor would one usually expect to see personals ads on apps which focus on news.
As blockchain-based applications begin to move into areas formerly exclusively occupied by Web 2 – Friend.Tech, for example, trying to break into social media – the censorship question rears its ugly head in novel and vexing ways. This is because of a simple, but fundamental, way in which most blockchain databases differ from SQL-style databases used by existing Web 2 incumbents: most internet businesses don’t require agreement on permanent, uncensorable, and immutable global state. Blockchains do.
Decentralized technologies like Bitcoin are designed to render censorship or deletion virtually impossible. How, then, do you address the need for censorship and deletion on the one hand while integrating blockchain technology on the other? Take, say, a decentralized version of Twitter: how do you square the fact that blockchains might be used to undermine censorship laws in repressive jurisdictions like Russia with the need to comply with the much more limited, but nonetheless very binding, censorship requirements of jurisdictions like England or the United States?
The answer will depend in large part on what the developer of the application is trying to design for. We must assume that a network which allows all lawful speech to be released to the world free from censorship will have the exact same design characteristics, in terms of censorship resistance against third parties, as one which allows all unlawful speech. Code on the blockchain doesn’t know the difference between these two categories, and in either case, one user should not be able, in a non-nerfed blockchain system, to shut down or censor any other user.
Censorship resistance against third parties, however, does not require censorship resistance against oneself. Most of the time, blockchains are not used by themselves to host and serve an entire DeFi or Web 3 application. More often, they are linked to hosted user interfaces and third party datastores, whether something centralized like an S3 bucket on Amazon, or something decentralized like a content-addressable system such as BitTorrent or IPFS. (Hosting raw image or video data on a blockchain is exceedingly rare, and uneconomical, due to blocksize constraints and the existence of fee markets that make large, in data terms, transactions expensive.)
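The split described above – a small pointer record on-chain, the content blob off-chain – can be sketched roughly as follows. This is a hypothetical illustration, not any particular chain's API: the dictionary standing in for S3/IPFS, and the function names (`put_blob`, `chain_record`, `fetch_and_verify`), are all assumptions made for the example.

```python
import hashlib

# A dict stands in for the off-chain datastore (S3, IPFS, BitTorrent, etc.).
OFF_CHAIN_STORE = {}

def put_blob(data: bytes) -> str:
    """Store content off-chain, keyed by its hash (content addressing)."""
    digest = hashlib.sha256(data).hexdigest()
    OFF_CHAIN_STORE[digest] = data
    return digest

def chain_record(author_id: str, data: bytes) -> dict:
    """What actually goes on-chain: an ID and a hash pointer, not the blob."""
    return {"author": author_id, "content_hash": put_blob(data)}

def fetch_and_verify(record: dict) -> bytes:
    """Retrieve the blob and check it still matches the on-chain hash."""
    data = OFF_CHAIN_STORE[record["content_hash"]]
    if hashlib.sha256(data).hexdigest() != record["content_hash"]:
        raise ValueError("off-chain content does not match on-chain pointer")
    return data

rec = chain_record("user:42", b"hello, world")
assert fetch_and_verify(rec) == b"hello, world"  # round-trip succeeds
```

The point of the design is that the chain only has to guarantee the integrity of a few dozen bytes of hash, while the heavy – and deletable – content lives elsewhere.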
It is at these visual layers, rather than through running a full node and interacting with the chain in the command line, where most users experience blockchain tech. YouTube competitor LBRY, for example, offered an uncensorable blockchain which acted as a registry of sorts, containing pointers to digital IDs and content, and a website, LBRY.com, which hosted and displayed content linked to those IDs. If a user chose to violate LBRY.com's terms of service, the blockchain ID or URLs could be deindexed from the LBRY.com site on the surface web, rendering them inaccessible to anyone who either didn't know where to look on the chain or wasn't willing to run a node themselves or reimplement the LBRY.com application on their own – which practically nobody was. Other early storage systems like Sia bifurcated their protocols, splitting into a paid service (Sia Pro) and an unregulated, free service (Sia Sky) utilizing separate domains, with the paid service playing by the rules and the unregulated service remaining unregulated. Users chose the experience which made the most sense to them.
Ultimately, the "decentralized" solutions we've seen to date tend to use blockchains to ensure only that text, links, and identity are uncensorable, with heavy fee penalties for putting plaintext on the chain and contributing to blockchain bloat. As a result, identity is usually the one thing app devs in the Web3 space will always delegate to the chain, with a variety of alternative approaches available for texts and links. Video content and links are overwhelmingly hosted in the cloud, not on the chain, meaning that users who don't want to see objectionable content sourced from the chain should be able to deindex it via block lists or blacklists.
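User-side deindexing of the kind just described amounts to filtering at the client: the chain keeps every entry, but the rendered view omits blocked identities. A minimal sketch, with all names (`chain_entries`, `visible_feed`) invented for illustration:

```python
# Entries as they exist on the chain: permanent, uncensorable.
chain_entries = [
    {"author": "alice",   "text_pointer": "hash-a"},
    {"author": "mallory", "text_pointer": "hash-b"},  # on the user's block list
    {"author": "bob",     "text_pointer": "hash-c"},
]

def visible_feed(entries, block_list):
    """The client's view: everything on-chain, minus blocked authors."""
    return [e for e in entries if e["author"] not in block_list]

feed = visible_feed(chain_entries, {"mallory"})
# The chain itself is untouched; only the user's rendered view shrinks.
```

Nothing here deletes anything from anyone else's copy of the data – which is exactly the distinction between controlling one's own experience and censoring the network.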
At the end of the day, the most important thing to remember about Internet publishers integrating uncensorable blockchain technology is this: "censorship resistance against the world" doesn't necessarily mean "censorship resistance against yourself." Developers looking to address the content moderation problem in a blockchain-enabled application should therefore keep the following in mind:
- Legal compliance to address the unlawful/undesirable content problem starts with application design. You only get the chance to hash a genesis block once, and remember that any changes you want to make to the protocol might require a hard fork at a later date.
- Users will expect and demand the ability to control their own experiences on the Internet.
- Because blockchains scale poorly, as a general rule app developers should ask the blockchain to handle the bare minimum content possible, ideally limiting themselves to IDs, “money” (i.e. the native cryptocurrency) and any smart contract transaction logic required to effectively use “money.”
- Content BLOBs will, in practically every case, be pushed out to the cloud – whether to third-party servers or self-hosted by posters – and user interfaces should have the ability to deindex user IDs which violate their terms of service or the law. Storing raw user-generated content and metadata onchain – as a system like DeSo does – presents an enormous compliance problem for node operators, as well as a bloat problem, and is best avoided.
- The non-blockchain components of a blockchain-enhanced application will likely need to have a range of tools available to control the user experience. Site admins for a website which operates in tandem with the blockchain will need all of the usual tools to take down unlawful content and pull subscriber records in response to law enforcement queries like any other Web2 application.
- Unless the blockchain component of a Web3 app is completely nerfed, users trading in objectionable speech in applications designed in the manner described above would still have the ability to talk to the blockchain, by running a node and communicating with the chain through the command line if not through hosted UIs. In transactions made directly with the chain, they should still be able to verify their identities.
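The design the bullets above describe produces two views of the same chain: the hosted UI's moderated view and the unfiltered view available to anyone running their own node. A hypothetical sketch of that duality (all names – `CHAIN`, `node_query`, `hosted_ui_query` – are illustrative):

```python
# The same underlying chain data, queried two different ways.
CHAIN = [
    {"id": "user:1", "content_hash": "aa11"},
    {"id": "user:2", "content_hash": "bb22"},  # violates the hosted UI's ToS
]

UI_DENY_LIST = {"user:2"}

def node_query(chain):
    """Running your own node: censorship-resistant, returns everything."""
    return list(chain)

def hosted_ui_query(chain, deny_list):
    """The surface-web site: the same chain, minus deindexed IDs."""
    return [tx for tx in chain if tx["id"] not in deny_list]
```

The hosted UI complies with its legal obligations by deindexing; the chain itself, and anyone willing to run a node against it, is unaffected.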
Ultimately, the (activist-popularized and, legally speaking, correct) maxim "freedom of speech is not the same as freedom of reach" is instructive. Hosted UIs are regulated just like any other website and will need to be managed in a very conventional fashion. If a user's speech is so objectionable that their blockchain ID is deindexed or blacklisted by the most popular user interfaces, and those controversial users still want to be heard, they will have to find someone willing to host them or, in extremis, host themselves.