What Section 230 of the Communications Decency Act actually says

Disclaimer: I’m not your lawyer and the below is not legal advice.

I wrote a follow-up to this piece on 29 May 2020, when the President of the United States called for the repeal of Section 230 on Twitter. Read the post here.

I also wrote a version of this post in June 2020 that illustrates how Section 230 works using stick figures. See that here.

So far as the ordinary person is concerned, Section 230 of the Communications Decency Act essentially says two things:

  1. Online publishing platforms – personal websites, newspapers’ websites, e-mail list-servs, whatever – and their users are not legally liable for content they do not create. 
  2. If a web app moderates any of your content off of their platform, and you sue them for it, you’re going to lose.

Moreover, Section 230 is a one-way street. Neither tech companies nor any other publisher that operates a web app hosting user-generated content promises to be neutral in exchange for these immunities. They get these immunities because Congress correctly surmised that the unrestricted development of the Internet was good for America.

Congress goes out of its way in the Communications Decency Act to tell us that this is what it intended. The intention of the Act is to keep government out of the business of moderating online content. Period.

Read on and I’m going to tell you what this means in greater detail.

In the wake of the El Paso and Dayton shootings and the subsequent deplatforming of the 8Chan imageboard, Section 230 of the Communications Decency Act (47 U.S.C. § 230) has become one of the most-discussed, most-misinterpreted, and – in terms of its practical effects on day-to-day American life – most poorly understood legislative provisions in American public discourse.

Moreover, Section 230 is a one-way street. It’s an unconditional provision that applies to everyone with a presence on the Internet, whether publisher, platform, blogger, or auto body shop.

If ever you hear a conservative politician such as Ted Cruz or Josh Hawley talking about Twitter as a “publisher and not a platform,” you’re listening to a viewpoint that doesn’t reflect Section 230 at all.

If in the New York Times you read that the importance of Section 230 is that “sites can moderate content — set their own rules for what is and what is not allowed — without being liable for everything posted by visitors,” and the article leaves it at that, you’re reading a viewpoint that understands what Section 230 says but doesn’t really understand why it’s important.

Still other commentary misses the mark entirely, such as this blog post by Twitter user Ben Thompson, who writes:

Section 230 doesn’t shield platforms from the responsibility to moderate…

Actually… it sort of does.

This is how we have arrived at the uneasy space that Cloudflare and others occupy: it is the will of the democratically elected Congress that companies moderate content above-and-beyond what is illegal, but Congress can not tell them exactly what content should be moderated.

Wrong.

This viewpoint belongs to someone who wishes to make a trendy or clever point, having just read the Wikipedia article on the subject.

As with many things, the problem with Section 230 as an object of public discussion – as with gun control – is that most people clearly haven’t bothered to read the rule, let alone understand it. Of those who have tried, a minority have a legal education. Of lawyers, only a small fraction of corporate technology bods and litigators will have worked with it in a professional setting. Still fewer will have had to grapple with it in a civil complaint.

I have. I understand that Section 230 is one of the most powerful pro-freedom, pro-free markets, pro-American, anti-government overreach laws in existence. If Section 230 is neutered, American online life will change beyond recognition.

Section 230 protects American companies, and by extension all American citizens who benefit from the services those American companies provide, by conferring broad immunity from frivolous lawsuits and government interference.

It does this simply, deliberately, and effectively. It’s not rocket science, and we’re going to walk through what the key provisions say and how they work, line by line, below.

What Section 230 of the Communications Decency Act actually says

1. 230(c)(1): “Platforms” are more or less absolutely immune from liability arising out of user-generated content…

Contrary to what you read in the news, Section 230(c)(1) has absolutely nothing to do with content moderation. It has nothing to do with “platforms acting as publishers.” It has to do with how we treat user-generated content on a “platform” (properly, an “interactive computer service”).

It reads as follows:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

To understand what this means we need to turn to the definitions section, Section 230(f), which says:

  • “interactive computer service” means “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.”
  • In other words, for most people, a web app or other kind of internet communications platform.
  • “information content provider” means “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”
  • In other words, a user who generates content.

Thus, for most people’s everyday browsing experience, in plain English this section means:

Platforms and users are not liable for content they do not create. 

This makes a lot of sense, if you think about it. “Not my circus, not my monkeys” is a bedrock principle of liability in the English common law tradition. Section 230 extends this principle to internet-based communications.

If you didn’t write/make/speak it, no one can sue you for it (and win).

The law is simple and clear.

…and the politicians are deliberately screwing up their explanations of it

That has to be it, because I can’t believe that guys as clever as Ted Cruz or Josh Hawley, both attorneys, both really good attorneys, are that misinformed. The idea that, per Senator Hawley,

With Section 230, tech companies get a sweetheart deal that no other industry enjoys: complete exemption from traditional publisher liability in exchange for providing a forum free of political censorship.

…doesn’t appear anywhere in Section 230(c)(1). Or anywhere in the entire Communications Decency Act, for that matter. 

First, EVERYONE benefits from Section 230. If you wander into the comment section below this post and decide you want to go insult some dude you knew in college, Section 230 says that the dude can’t sue me for what you said on my site. The same goes for local newspapers’ websites and any other website.

Second, the whole point of Section 230 is that it is designed to exempt the Internet from government regulation. Congress says so right in the Act: “unfettered by Federal or State regulation” (see s. 230(b) – more on this below). Section 230 was not a qualified, transactional kind of law where tech companies received an immunity in exchange for providing unbiased political forums.

It is a testament to how truly awful politics is that politicians have managed to so thoroughly confuse the public about a legal command which is only slightly more complicated than a stop sign. Apart from the fact that an obligation of impartiality is not mentioned anywhere in Section 230, Section 230(c)(1) deals with the allocation of liability for statements made on the Internet at a specific moment: the moment they are made. 230(c)(1) focuses on a point in time where it is impossible for content moderation to have occurred.

Even if a given platform’s moderation were openly biased against one particular viewpoint, even if the entire world used that platform, even if that platform advertised itself as “the world’s public square for free speech,” and even if that claim weren’t true, all of those questions are conditions subsequent that depend on the statement having been made in the first place. Whatever platforms might do or refrain from doing after that point, when moderating statements that have already been made, is, from a public policy standpoint, irrelevant to the question of how liability should be apportioned at the moment of genesis.

Section 230(c)(1) is therefore engaged any time any user creates any content on any platform that is subject to the jurisdiction of the United States. It sets out the simple principle that user-generated content is the problem of the user who created it, and no one else’s.

There are, of course, certain very limited carve-outs specific to federal criminal law, e.g. illegal pornography or sex trafficking (and note, the sex trafficking provision, known as FOSTA/SESTA, is currently being very credibly challenged on First Amendment grounds in the courts), which do create an obligation to remove certain types of unlawful speech after it has been posted. But this isn’t moderation so much as it is prohibition, and these carve-outs aren’t what the politicians are talking about when they talk about Section 230 reform.
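For readers who think in code, the way 230(c)(1) allocates responsibility can be sketched as a single comparison. This is a conceptual illustration only – the function and its inputs are hypothetical, it is not legal advice, and the federal criminal carve-outs just mentioned sit outside it:

    # A conceptual sketch of Section 230(c)(1), not legal advice. The function
    # and its inputs are hypothetical simplifications; the federal criminal
    # carve-outs (e.g. FOSTA/SESTA) are deliberately not modeled here.

    def c1_shield_applies(defendant: str, content_author: str) -> bool:
        """Is the defendant shielded from being treated as the publisher or
        speaker of a piece of user-generated content under 230(c)(1)?"""
        # The question is fixed at the moment the statement is made: did the
        # defendant create the content? If not, not their circus, not their monkeys.
        return defendant != content_author

    print(c1_shield_applies("Platform Inc.", "some_user"))  # True  - the host does not answer for it
    print(c1_shield_applies("some_user", "some_user"))      # False - the author does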

2. 230(c)(2): “Platforms” are immune with regard to good-faith moderation calls

Section 230(c)(2) is an entirely separate provision. It has no effect whatsoever on the rights granted by 230(c)(1). Section 230(c)(2) confers immunity for moderation activity for platforms that choose to moderate. It reads as follows:

No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in [sub-]paragraph ([A]).

In plain English:

If a web app moderates any content off of their platform, and you sue them for it, you’re going to lose.

This statutory provision isn’t actually necessary for platforms to cover their asses vis-à-vis their users. This is because, each and every day, users consent to tech platforms’ contractual terms of service, and in so doing the users contract out of any remedies they may have for discretionary content moderation activity undertaken by those platforms. In plain English, when you use a platform like Twitter or Facebook, you agree to let them moderate your content. With this in mind, why do we need Section 230(c)(2) in the first place? Why not just let the market sort it out and let platforms compete on terms?

Well, this provision, and indeed all of Section 230, is expressed to supersede any contrary state law (see s. 230(e)(3)). This section therefore is best understood as being about preventing state regulators from creating state-level rules that could impose liability for content moderation, e.g. if Texas created a cause of action for moderating climate change skepticism and California created a cause of action for moderating climate change activism.

That such a provision can also be raised as an affirmative defense to any action challenging good-faith content moderation is a bonus, a nice-to-have. But it isn’t really why it’s there.

Accordingly, it would be less than correct to say that this rule is about encouraging companies to moderate “hate speech” (a term that was barely used in public discourse in the early 1990s) or any other type of objectionable content. Contract law could have handled (and does handle) that just fine. It would be more correct to say that this rule acts as a backstop that prohibits local interference with an interstate system.

More on that below.
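For symmetry with the 230(c)(1) sketch above, the moderation shield in 230(c)(2)(A) can be drawn the same way. Again, this is a hypothetical simplification and not legal advice: the categories mirror the statutory list, but the (B) limb and the s. 230(e)(3) preemption of contrary state law are left out:

    # A conceptual sketch of Section 230(c)(2)(A), not legal advice. The
    # categories mirror the statutory list; everything else is a hypothetical
    # simplification (the (B) limb and s. 230(e)(3) preemption are omitted).

    OBJECTIONABLE = {
        "obscene", "lewd", "lascivious", "filthy",
        "excessively violent", "harassing", "otherwise objectionable",
    }

    def c2_shield_applies(in_good_faith: bool, provider_considers_it: str) -> bool:
        """Is a provider or user shielded for voluntarily restricting access
        to material it considers objectionable?"""
        # Good faith plus a belief that the material falls within the statutory
        # list is enough, whether or not the material is constitutionally protected.
        return in_good_faith and provider_considers_it in OBJECTIONABLE

    print(c2_shield_applies(True, "harassing"))                # True - the suit fails
    print(c2_shield_applies(True, "otherwise objectionable"))  # True - a very broad category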

3. The “will of Congress”

When we read something from Mr. Thompson and numerous other commentators that

it is the will of the democratically elected Congress that companies moderate content above-and-beyond what is illegal

…that’s just not true. The statute says nothing of the sort.

The Federal Government is constitutionally barred by the First Amendment from regulating protected speech (See: “Congress shall make no law… abridging the freedom of speech,” U.S. Const., Amend. I). Accordingly, the regulation of “hate speech” and other forms of highly objectionable content, such as that complained of by the Times or Mr. Thompson, has been outside the province of Congress for 228 years. Congressional regulation of “hate speech” is unconstitutional, and an illegal purpose cannot be imputed to Congress in its passage of the Communications Decency Act or any other statute.

The intent of Congress is that companies who operate legal businesses are immune from liability arising from publishing user-generated content, subject to the few aforementioned carve-outs under federal law. Period. They’re also immune from suit with regard to good-faith moderation of objectionable content on their sites. Period.

Section 230 is a one-way street: “these are the rules, because the unrestricted development of the Internet is good for America.” Congress goes out of its way to tell us that this is what it intended. To see this we need to navigate away from Wikipedia, look up the statute, and scroll up one sub-section to the (legally non-binding) preamble, Section 230(b), in which Congress says:

It is the policy of the United States (1) to promote the continued development of the Internet and other interactive computer services and other interactive media; (2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation; [and] (3) to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services…

In plain English:

  1. Internet good.
  2. Free markets good. Government regulation bad.
  3. States creating local rules that result in American domestic “splinternet” very bad.
  4. Users should be maximally in control of what they see.

Note that “providing an unbiased forum free of political censorship” is not among the United States’ stated reasons for enacting Section 230.

4. Proper avenues for content enforcement

In the wake of a tragedy like El Paso or Christchurch there is a temptation to recommend regulating online speech. I wrote about this at length in a blog post I published after the Christchurch shooting, Open Access Publishing Platforms and Unlawful Threats. Summarizing that post in a line, calls to ban online speech in response to a tragedy are feel-good solutions that don’t actually make anyone safer.

First, banning a user from a website doesn’t ban the person from existence. They’re still very much there, and there are many darknet avenues they can use to communicate with like-minded people.

Second is the lesson that I was taught as a four-year-old, but which most journalists these days seem to have a hard time grasping:

Sticks and stones will break my bones, but words will never hurt me.

A post on 8Chan isn’t going to kill anyone. It can’t even scratch anyone. It can’t do anything to anyone else but offend them. What it does, however, every single time, is identify the speaker. And if that speech is threatening and/or illegal, it gives law enforcement probable cause to serve a platform such as 8Chan with a search warrant or emergency disclosure request. Just as the publication of Unabomber Ted Kaczynski’s manifesto in the New York Times led to his identification by his own brother thanks to the presence of telltale prose, a user on 8Chan will generate all manner of personally identifying data, including e.g. an IP address, which law enforcement can use to try to intervene before a tragedy. If the most extreme users migrate to ZeroNet, as indeed some have, this ability all but disappears.
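To make the “identifying data” point concrete, here is the sort of record an ordinary web server writes for every request. The entry below is entirely made up (it uses a documentation-only IP address), but even a minimal log like this gives law enforcement an IP address and a timestamp to put in front of a provider together with a warrant or emergency disclosure request:

    # A made-up access-log entry in the common "combined" log format, purely
    # illustrative of the kind of record a platform can produce in response
    # to a search warrant or emergency disclosure request.
    sample_entry = (
        '203.0.113.7 - - [03/Aug/2019:10:15:32 -0500] '
        '"POST /post.php HTTP/1.1" 200 512 "-" "Mozilla/5.0"'
    )

    ip_address = sample_entry.split()[0]
    print(ip_address)  # 203.0.113.7 - the starting point for a subscriber-records request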

Saying “we should ban 8Chan” is the digital equivalent of burying your head in the sand. What will happen, will happen. If you ban the site, you will just remain blissfully unaware of it until after the fact.

5. Why do we want Section 230?

I’ll write this story in full another day. Suffice it to say, perfectly legal user generated content can be a pain in the ass, and people and governments who don’t like and complain about perfectly legal user generated content are an even bigger pain in the ass, for any internet business.

Section 230 makes it very easy for a business to focus on its business and tell the hordes of basement-dwelling complainers from around the world to get lost. Countries like the UK, France, and Australia empower the complainers by imposing liability on service providers for failing to remove content which is defamatory, or which the governments of those countries, as informed by “concerned citizens,” find objectionable.

The American answer to these issues is (in the case of defamation) to file a lawsuit against the user who generated the content, or (in the case of objectionable but legal content) do nothing at all, or leave the discretion to moderate in the hands of the company providing the service. If we ditch Section 230, all that changes. The American answer, and the First Amendment, will cease to be relevant. Litigants and foreign actors will have considerably greater power to use lawfare to shut down American venues that host speech that these private or foreign persons disagree with, including political speech.

Foreign governments adopt approaches to speech regulation which are subjective and, if enacted in the U.S., would be both unconstitutional in their aim and unconstitutionally vague.

The U.S. approach, and the Section 230 shield that creates it, is the last defense for freedom online and should not be amended. Make no mistake, the intent of Congress in passing this law was to protect online business from vexatious litigation and government interference of all kinds, foreign and domestic.

We already have myriad legal solutions for truly egregious conduct online. If speech on an open-access publishing platform is criminal, law enforcement needs partners who are able to respond to search warrants, and the Stored Communications Act gives LEAs the legal framework in which to obtain them. If speech is not criminal but privately damaging, if it’s really that damaging, there should be a private cause of action against the user who created the content and that action should be brought against that user, and that user alone.

If it’s neither criminal nor tortious, really, who cares? There are 7+ billion people on the planet. We can’t silence everyone who disagrees with us. Nor should we want to try.
