Listen to the whole thing.
Fun bits from yours truly at 28:20–32:00 and 40:30–43:00.
I’m not your lawyer and the below is not legal advice. Note disclaimer.
In the wake of the El Paso and Dayton shootings and the subsequent deplatforming of the 8Chan imageboard, Section 230 of the Communications Decency Act (47 U.S.C. § 230) has become one of the most-discussed, most-misinterpreted, and – in terms of its practical effects on day-to-day American life – most poorly understood legislative provisions in American public discourse.
If ever you hear a conservative politician such as Ted Cruz or Josh Hawley talking about Twitter as a “publisher and not a platform,” you’re listening to a viewpoint that doesn’t reflect Section 230 at all. If you read in the New York Times that the importance of Section 230 is that “sites can moderate content — set their own rules for what is and what is not allowed — without being liable for everything posted by visitors,” and the piece leaves it at that, you’re reading a viewpoint that understands what Section 230 says, but doesn’t understand what it does.
Section 230 doesn’t shield platforms from the responsibility to moderate; it in fact makes moderation possible in the first place. Nor does Section 230 require neutrality: the entire reason it exists was because true neutrality — that is, zero moderation beyond what is illegal — was undesirable to Congress.
…which leads Mr. Thompson to the conclusion that:
This is how we have arrived at the uneasy space that Cloudflare and others occupy: it is the will of the democratically elected Congress that companies moderate content above-and-beyond what is illegal, but Congress can not tell them exactly what content should be moderated.
…you may safely conclude that what you are reading is flat wrong. This viewpoint belongs to someone who wishes to make a trendy or clever point, having just read the Wikipedia article on the subject. In fact, Section 230 does shield platforms from the responsibility to moderate content which is legal in the United States. It was expressly intended to do so.
As with many things – gun control comes to mind – the problem with Section 230 as an object of public discussion is that the vast majority of people on either “side” of the discussion have absolutely no idea what they’re talking about. Most people clearly haven’t bothered to read it. Of those who have, only a minority have a legal education. Fewer still have worked with it in a professional setting, let alone invoked it on behalf of a client.
I have. I understand that Section 230 is one of the most powerful pro-freedom, pro-free markets, pro-American, anti-government overreach laws in existence. If Section 230 is neutered, American online life will change beyond recognition.
Section 230 protects American companies, and by extension all American citizens who benefit from the services those American companies provide, by conferring broad immunity from frivolous lawsuits and government interference.
It does this simply, deliberately, and effectively. It’s not rocket science, and we’re going to walk through what the key provisions say and how they work, line by line, below.
Contrary to what you read in the news, Section 230(c)(1) has absolutely nothing to do with content moderation. It has nothing to do with “platforms acting as publishers.” It has to do with how we treat user-generated content on a “platform” (properly, an “interactive computer service”).
It reads as follows:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
To understand what this means, we need to turn to the definitions in Section 230(f).
Thus, for most people’s everyday browsing experience, in plain English this section means:
Platforms and users are not liable for content created by someone else.
This makes a lot of sense, if you think about it. “Not my circus, not my monkeys” is a bedrock principle of liability in the English common law tradition. Section 230 extends this principle to internet-based communications.
The law is simple, and clear.
That has to be it, because I can’t believe that guys as clever as Ted Cruz or Josh Hawley, both attorneys, both really good attorneys, are that misinformed. The idea that, per Senator Hawley,
With Section 230, tech companies get a sweetheart deal that no other industry enjoys: complete exemption from traditional publisher liability in exchange for providing a forum free of political censorship.
…doesn’t appear anywhere in Section 230(c)(1). Or anywhere in the entire Communications Decency Act, for that matter. This is because the Communications Decency Act was designed to promote the development of the internet “unfettered by Federal or State regulation” (see s. 230(b) – more on this below), and was not a qualified, transactional kind of law where tech companies received an immunity in exchange for providing unbiased political forums.
It is a testament to how truly awful politics is that politicians have managed to so thoroughly confuse the public about a legal command only slightly more complicated than a stop sign. Apart from the fact that an obligation of impartiality is not mentioned anywhere in Section 230, Section 230(c)(1) deals with the allocation of liability for statements made on the Internet at a specific moment: the moment they are made. 230(c)(1) focuses on a point in time at which it is impossible for content moderation to have occurred.
Even if a given platform’s moderation were openly biased against one particular viewpoint, even if the entire world used that platform, even if that platform advertised itself as “the world’s public square for free speech,” and even if that claim weren’t true, all of those questions are conditions subsequent that depend on the statement having been made in the first place. Whatever platforms might do or refrain from doing after that point, when moderating statements that have already been made, is, from a public policy standpoint, irrelevant to the question of how liability should be apportioned at the moment of genesis.
Section 230(c)(1) is therefore engaged any time any user creates any content on any platform that is subject to the jurisdiction of the United States. It sets out the simple principle that user-generated content is the problem of the user who created it, and no one else’s.
There are, of course, certain very limited carveouts specific to federal criminal law e.g. illegal pornography or sex trafficking (and note, the sex trafficking provision, known as FOSTA/SESTA, is currently being very credibly challenged on First Amendment grounds in the courts) which do create an obligation to remove certain types of unlawful speech after it has been posted. But this isn’t moderation as much as it is prohibition, and these carve-outs aren’t what the politicians are talking about when they talk about Section 230 reform.
Section 230(c)(2) is an entirely separate provision. It has no effect whatsoever on the rights granted by 230(c)(1). Section 230(c)(2) confers immunity for moderation activity for platforms that choose to moderate. It reads as follows:
No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in [sub-]paragraph ([A]).
In plain English:
If a web app moderates any content off of their platform, and you sue them for it, you’re going to lose.
This statutory provision isn’t actually necessary for platforms to cover their asses vis-a-vis their users. This is because, each and every day, users consent to tech platforms’ contractual terms of service, and in so doing the users contract out of any remedies they may have for discretionary content moderation activity undertaken by those platforms. In plain English, when you use a platform like Twitter or Facebook, you agree to let them moderate your content. With this in mind, why do we need Section 230(c)(2) in the first place? Why not just let the market sort it out and let platforms compete on terms?
Well, this provision, and indeed all of Section 230, is expressed to supersede any contrary state law (see s. 230(e)(3)). This section therefore is best understood as being about preventing state regulators from creating state-level rules that could impose liability for content moderation, e.g. if Texas created a cause of action for moderating climate change skepticism and California created a cause of action for moderating climate change activism.
That such a provision can also be raised as an affirmative defense to any action challenging good-faith content moderation is a bonus, a nice-to-have. But it isn’t really why it’s there.
Accordingly, it would be less than correct to say that this rule is about encouraging companies to moderate “hate speech” (a term that was barely used in public discourse in the early 1990s) or any other type of objectionable content. Contract law could have handled (and does handle) that just fine. It would be more correct to say that this rule acts as a backstop that prohibits local interference with an interstate system.
More on that below.
When we read something from Mr. Thompson and numerous other commentators that
it is the will of the democratically elected Congress that companies moderate content above-and-beyond what is illegal
…that’s just not true. The statute says nothing of the sort.
The Federal Government is constitutionally barred by the First Amendment from regulating protected speech (See: “Congress shall make no law… abridging the freedom of speech,” U.S. Const., Amend. I). Accordingly, the regulation of “hate speech” and other forms of highly objectionable content, such as that complained of by the Times or Mr. Thompson, has been outside the province of Congress for 228 years. It cannot have been intended to be regulated by the Communications Decency Act or any other statute.
The intent of Congress is that companies who operate legal businesses are immune from liability arising from user-generated content, subject to the few aforementioned carve-outs under federal law. Period. They’re also immune from the (civil) consequences of good-faith moderation of objectionable content. Period.
Section 230 is a one-way street: “these are the rules, because the unrestricted development of the Internet is good for America.” Congress goes out of its way to tell us that this is what it intended. To see this we need to navigate away from Wikipedia, look up the statute, and scroll up one sub-section to the (legally non-binding) preamble, Section 230(b), in which Congress says:
It is the policy of the United States (1) to promote the continued development of the Internet and other interactive computer services and other interactive media; (2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation; [and] (3) to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services.
In plain English:
The Internet is good for America; Congress wants it to develop free of federal or state regulation, with users – not the government – in control of what they see.
Note that “providing an unbiased forum free of political censorship” is not among the United States’ stated reasons for enacting Section 230.
In the wake of a tragedy like El Paso or Christchurch there is a temptation to recommend regulating online speech. I wrote about this at length in a blog post I published after the Christchurch shooting, Open Access Publishing Platforms and Unlawful Threats. Summarizing that post in a line, calls to ban online speech in response to a tragedy are feel-good solutions that don’t actually make anyone safer.
First, banning a user from a website doesn’t ban the person from existence. They’re still very much there, and there are many darknet avenues they can use to communicate with like-minded people.
Second is the lesson that I was taught as a four-year-old, but which most journalists these days seem to have a hard time grasping:
Sticks and stones will break my bones, but words will never hurt me.
A post on 8Chan isn’t going to kill anyone. It can’t even scratch anyone. It can’t do anything to anyone else but offend them. What it does, however, every single time, is identify the speaker. And if that speech is threatening and/or illegal, it gives law enforcement probable cause to serve a platform such as 8Chan with a search warrant or emergency disclosure request. Just as the publication of Unabomber Ted Kaczynski’s manifesto in the New York Times led to his identification by his own brother thanks to the presence of telltale prose, a user on 8Chan will generate all manner of personally identifying data, including e.g. an IP address, which law enforcement can use to try to intervene in a tragedy. If the most extreme users migrate to ZeroNet, as indeed some have, this ability all but disappears.
Saying “we should ban 8Chan” is the digital equivalent of burying your head in the sand. What will happen, will happen. If you ban the site, you will just remain blissfully unaware of it until after the fact.
I’ll write this story in full another day. Suffice it to say, perfectly legal user-generated content can be a pain in the ass for any internet business, and people and governments who don’t like and complain about perfectly legal user-generated content are an even bigger pain in the ass.
Section 230 makes it very easy for a business to focus on its business and tell the hordes of basement-dwelling complainers from around the world to get lost. Countries like the UK, France, and Australia empower the complainers by imposing liability on service providers for failing to remove content which is defamatory, or which the governments, as informed by “concerned citizens,” of these countries find objectionable.
The American answer to these issues is (in the case of defamation) to file a lawsuit against the user who generated the content, or (in the case of objectionable but legal content) to do nothing at all, or to leave the discretion to moderate in the hands of the company providing the service. If we ditch Section 230, all that changes. The American answer, and the First Amendment, will cease to be relevant. Litigants and foreign actors will have considerably greater power to use lawfare to shut down American venues that host speech these private or foreign persons disagree with, including political speech.
Foreign governments adopt approaches to speech regulation which are subjective and, if enacted in the U.S., would be both unconstitutional in their aim and unconstitutionally vague.
The U.S. approach, and the Section 230 shield that creates it, is the last defense for freedom online and should not be amended. Make no mistake, the intent of Congress in passing this law was to protect online business from vexatious litigation and government interference of all kinds, foreign and domestic.
We already have myriad legal solutions for truly egregious conduct online. If speech on an open-access publishing platform is criminal, law enforcement needs partners who are able to respond to search warrants, and the Stored Communications Act gives LEAs the legal framework in which to obtain them. If speech is not criminal but privately damaging, if it’s really that damaging, there should be a private cause of action against the user who created the content and that action should be brought against that user, and that user alone.
If it’s neither criminal nor tortious, really, who cares? There are 7+ billion people on the planet. We can’t silence everyone who disagrees with us. Nor should we want to try.
Just stumbled across this outstanding conference call from Allen & Overy last Friday. A must-listen:
“The EU is a very large law project. It has involved the creation of law, it has involved the harmonisation of law. It is my strong conviction that the legislation that will be needed to implement this decision, it is very important for this country that that legislation is cool, measured and rational. And that it does not bear the mark of rancour.”
“This is like a demerger. The biggest in history.”
Just jotting a few thoughts down for my regular readers, to be expanded upon when time allows. There’s so much to unpack:
Remain campaigners all said Brexit was a terrible idea. Within 12 hours of the result being announced:
You people were warned.
But it ain’t over till it’s over, and things have barely just begun. Being an EU citizen (in addition to being an American) I was firmly in the “Remain” camp. I believe in a united Europe and do not expect this referendum will be the last word on the matter.
Some Labour MPs have come out directly against the referendum and said they will attempt to use a majority in Parliament to block Brexit from taking place – since the referendum is not legally binding (and Parliament’s sovereignty is supreme). Other political parties, such as the Lib Dems, have rejected the referendum result and already said they will contest the next general election on a European unity platform:
Them’s fightin’ words. Speaking of which, the question of what the absolute worst-case scenario could be if the referendum result is ignored is already leading folks to some pretty dark conclusions:
The EEA option is what I’ll be getting behind in the meantime, as is the Adam Smith Institute. Dan Hannan is expressly pushing for this, Boris Johnson and Douglas Carswell have both hinted at it; Nigel Farage opposes it completely.
“Article 50” is a 250-word provision of the TEU which basically says “if a member state tells the Council it wants to quit, we’ve all got two years to sort it out, and if we haven’t done it by then, ejection from the Union is automatic.”
Until the UK pulls the trigger on Article 50, legally, absolutely nothing about the country’s status in the EU has changed one whit.
The fact that Article 50 has not been invoked leads me to believe it may never be invoked, and that #Brexit may simply become grounds for renegotiating the UK’s position, or for a gradual transition into the EEA/EFTA – what is known as a “soft Brexit.”
Not my idea, but rather it’s David Allen Green’s:
This interpretation makes a ton of sense given Cameron’s tone in the run-up to the referendum. Chiefly, in Cameron’s own words:
Then there is the legality. I want to spell out this point very carefully. If the British people vote to leave there is only one way to bring that about – and that is to trigger Article 50 of the Treaties and begin the process of exit.
And the British people would rightly expect that to start straight away.
The referendum happened; Article 50 was not triggered. Read into that what you will. Before you do, though, check out Green’s blog for more on this. Or this from the Guardian comments section:
Irrespective of the availability of the EEA Option I support calling a second Scottish independence referendum and will support the “Yes” campaign when that does inevitably occur.
I lived there for four years. It really is a different country. Their young people want it. Time to do it.
Being of Irish ancestry (two generations ago, Donegal), I support calling a border poll in Northern Ireland. If the Union is going to break up as a result of this, let’s pull the band-aid off in one go.
I support #LondonIndependence / #Londependence in the event the EEA option is rejected.
I would also not object to places like New York City, San Francisco, Los Angeles, and Chicago, which differ radically from their hinterlands, being split off into politically distinct sub-units in the United States.
This idea is currently in vogue among the tech set. Meaning it’ll be in every household within 36 months.
This is going to make for some extremely interesting techno-political-legal blogging over the next 2 (more likely 5) years; all of the above involves planning the re-write of, and actually fucking re-writing, substantial portions of the English legal system.
I’m giddy. It is not possible for there to be a more exciting time to be an English lawyer.