
In the last week or so, a number of would-be and actual criminals have abused open-access social media to disseminate their messages or issue terroristic threats, in particular on the 4Chan and 8Chan imageboards.
Around the world, the response from non-U.S. governments has been to censor sites with a free speech emphasis – 8Chan, 4Chan and ZeroHedge (but notably, not Facebook) have been banned by New Zealand and Australian ISPs. Canada joined them in calling for increased regulation. For the better part of the current Parliament, Britain has been considering the Online Forums Bill, which would make platform providers liable for the content posted by their users. What all these solutions share is a common emphasis on making communications platforms responsible for the content shared thereon, and on the creation of automated censorship programs at each of these companies to prevent any possibly offensive content from making it from draft form to publication.
In the U.S., by contrast, the First Amendment to the U.S. Constitution prohibits such prior restraints on speech, and Section 230 of the Communications Decency Act renders social media platform providers more or less absolutely immune from the speech torts or speech crimes of their users. Accordingly, because there can be no legal government response to “hate speech” online, the response from political activists in the U.S. has been to seek to convert ordinary folks to a pro-censorship position.
Usually, this advocacy takes the form of proposing to ban the social media companies whose platforms were used to issue the threatening language. Take, for example, this blisteringly hot take from New York Times Opinion writer Charlie Warzel:
These takes are very poor, and show that those who write them don’t actually understand the problems with data protection, data disclosure, and social networks. Under a few headings below, I’m going to explain why.
1. Posting threats is already illegal, and banned on most (if not all) social media platforms
I’m admitted to practice in two countries. In both of them, posting a terroristic threat on the Internet will result in the speaker being arrested. I know of no website which says it wants this material.
2. The wheels of justice turn slowly, but grind exceedingly fine
Western society is not a panopticon, at least not yet, so we do not have the ability to intercept a message midflight, semantically analyze it and immediately have men with guns and black helicopters converge on the user who posted it.
Nor should we wish to have that ability. Framing this as a First Amendment-type issue, as many commentators (and indeed countries) have done in the wake of these incidents, ignores the fact that to stop an unlawful post from finding its way to the surface, either (a) all open-access publishing platforms would need to be eliminated and every single post would need to be monitored and assessed for legal compliance before going live, or (b) every Internet user in the world would need to be effectively brainwashed so that none of them was ever inclined to post a threat on the Internet.
An early-intervention approach to speech would be incompatible not just with conceptions of free speech, but also with the broad social consensus about how our system of laws is meant to work. Our laws serve a deterrent function: if you post violent threats online, then you are liable to be sentenced to a lengthy term of imprisonment. This approach to controlling and punishing speech places the onus of legal compliance on internet users rather than the platforms they use, and maximizes the freedom of compliant internet users to say what they like, at least insofar as those users stay within well-defined legal boundaries.
The downside to the deterrent approach is that human beings have a very warped conception of time preference, i.e., they lack the discipline to understand that consequences in the future will arise from being naughty in the present. Time preference issues are why, for example, I just ate a bagel for lunch when I know I need to lose ten pounds, or why, in a more extreme example, people commit violent crime to settle a street vendetta without considering that doing so can incur a lengthy term of incarceration.
This is also why, at least to the extent I have considered this issue, people post violent threats online. There’s some itch that these people need to scratch, psychologically, so they scratch it, consequences be damned.
Government responses in the aftermath of such an event will be deeply unsatisfactory to almost everyone involved, starting with the victims. Justice can be slow. But the delay between the issuance of a written threat and the police response, given how our justice system works, is not so much a First Amendment issue as it is a Fourth Amendment and privacy rights issue. After all, as put in Touch of Evil, “a policeman’s job is only easy in a police state.” After a threat is detected, most jurisdictions have procedures for pulling down user data from social media companies – in the case of the UK, for example, a RIPA request, and in the case of the US, a search warrant or subpoena issued pursuant to the federal Stored Communications Act.
Some of these statutes act to affirmatively protect user data; the Stored Communications Act, for example, prohibits companies from handing over user data to government agencies unless they’ve either been served with a court order, or believe in good faith that an emergency threatening serious bodily injury or life exists.
Recent events are not the first time that online imageboards have been used for this purpose. If we’re using 4Chan as an example, see e.g. the arrest and 2008 conviction of Jake Brahm for posting a deranged attempt at “copypasta,” a type of long-form text-based Internet meme, or more recently the 2018 arrest of an Indianapolis man for threatening to attack participants in an “alt-lite” free speech rally in Boston in 2017.
3. “Censorship” is a lazy default answer for writers who don’t know how cross-border data protection issues actually work
Twitter is not the world.
When you ban someone from a social network, they do not disappear. When you censor ideas from a social network, they are not forgotten.
Legislative proposals that put censorship front-and-center ignore the fact that there have always been, and there will always be, high time-preference individuals living among us who are willing to break our social compact irrespective of the long-term personal cost to themselves.
Every day, in every modern country, our fellow citizens rob people at knifepoint/gunpoint in our cities. They steal. They beat their children. Our fellow citizens do horrible things, to each other and even to themselves, with the foreknowledge that doing those horrible things will likely result in legal sanction. Unlawful threats have been part of that equation since time immemorial. Banning publishing platforms for the sins of a few sporadic users would be like banning sidewalks because muggers rob people on sidewalks.
In terms of finding workable solutions, the answer is not to ban the publishing platforms but rather to make our law enforcement systems mutually intelligible. Such an approach would protect honest citizens without eroding civil liberties, while also rendering law enforcement a more credible deterrent to violent online threats.
Currently, it is difficult for U.S. companies to share stored communications with law enforcement agencies overseas. (The Stored Communications Act appears to prohibit the disclosure of the content of stored communications to foreign governments without a U.S. warrant.) Foreign law enforcement agencies seeking U.S.-based user data must go through a months-long Mutual Legal Assistance Treaty, or MLAT, procedure to ensure overseas requests comply with U.S. constitutional requirements around due process and freedom of speech.
Because the U.S.’ free speech rules are more pro-speaker than those of most, if not all, other countries on the planet, overseas governments – particularly New Zealand, Australia, Britain, and Canada – are the loudest in calling for censorship of American-run web businesses. These calls for censorship strike me as a transparent attempt to do an end-run around U.S. privacy and free speech protections to exert influence over the content and tone of the Internet, a global system that no country can, on its own, control.
How, then, do we empower law enforcement globally to combat online threats without using tech companies as de facto content censors? The answer is not to take people’s rights to speak away from them. Rather, it’s to give people more free speech rights in places like Australia and New Zealand. Doing so would alleviate the U.S. political concerns that are likely, for example, to prevent the United States from entering into executive agreements for data sharing with third countries under the CLOUD Act, such as the proposed agreement with the UK, as the CLOUD Act requires data sharing arrangements between countries to “afford [as] robust substantive and procedural protections for privacy and civil liberties in light of the data collection and activities of the foreign government” as there would be domestically in the United States before any such agreement can be concluded.
In my professional opinion as a lawyer admitted in both England and the United States, the UK and the Commonwealth countries that follow its pattern of constitutional monarchy, rather than the U.S.’ pattern of etching Enlightenment values in stone, do not adequately provide those protections.
By raising the baseline for free speech rights overseas, foreign governments would make it considerably easier to obtain information from U.S. companies to pursue the worst offenders. What they would give up, in exchange, is the ability to prosecute, at home, those who espouse uncomfortable but non-threatening wrongthink that is legal in the United States – such as the speech of Catholic journalist Caroline Farrow, who was placed under investigation by the British police last week.
Entrusting our citizens with liberty, and making them face the consequences of violating that trust, is consistent with the English-speaking world’s most cherished political traditions. Granting Commonwealth citizens more rights is the prerequisite to enacting effective data sharing arrangements with the United States that will not face objections from U.S. citizens and will help to address modern online threats. Taking people’s rights away will result in further incompatibility between the U.S.’ and other countries’ legal systems, and deny law enforcement the ability to mount effective and rapid responses to cross-border criminality.
Such a solution doesn’t enjoy the best “woke” optics. But it benefits from the fact that it might actually work.