Not Legal Advice, 13 Feb 2020: UK prepares to nuke its tech industry with “Online Harms” proposals

This is not legal advice, which is why the blog post series is called “Not Legal Advice.” Unless you’re paying me, I’m not your lawyer. Proceed accordingly.

First, it’s 2020. It seems like yesterday it was 2006. Time flies.

Second, here’s the latest from the United Kingdom. As reported by the Independent:

Social media companies will now be regulated by broadcast watchdog Ofcom, giving it the ability to fine and police companies such as Facebook, Twitter and Instagram.

Digital media and culture secretary Nicky Morgan announced that under the new legislation tech companies would now be held accountable for the content on their platforms.

The new legislation means companies such as Facebook and YouTube will be judged on their “duty of care”, and be liable for exposing users to illegal or damaging content. Until now, companies including TikTok, Snapchat and Twitter have for the most part been self-regulating.

tl;dr the UK government is planning to implement the Online Harms White Paper (the “White Paper”) it published in April of last year, despite the fact that the White Paper is dystopian and insane.

I could go chapter and verse through the UK Online Harms White Paper proposals, as some have done. However, I don’t want to do that: much of the document, on review, reveals itself to be a rationalization for implementing draconian government control over digital speech. If you wish to read the document – which runs to 102 pages – you should. I can, however, summarize the practical effects of the document very briefly as follows:

1) The British Government has a list of “harms” that it wants to expunge from the Internet

These “harms” fall into two general buckets.

First, there is a list of “harms with a clear definition,” all of which will be banned. These include CSAM, immigration crime, extreme/revenge porn, harassment, hate crimes, encouraging suicide, inciting violence, and the sale of illegal goods.

By way of comparison, some of this content is illegal in the United States where most of these technology companies are based.

Some of this content is not illegal in the United States.

Using online platforms to advocate for things can be illegal in the UK under several statutes, including, inter alia, the “encouragement” offences under the Terrorism Acts (which can be committed recklessly). Generally speaking, however, advocacy is constitutionally protected in the United States. There are certain limits around material support for designated foreign terrorist organizations, but insofar as the domestic political situation is concerned advocacy falling short of incitement is fair game.

While the U.S. has a concept of “hate crimes,” “hate speech” – punishable in England under Sections 4, 4A, and 5 of the Public Order Act 1986, as amended, and under the Communications Act 2003 for online communications – is not one of them. Simply uttering a hateful idea is squarely within the protection of the First Amendment – “the proudest boast of our free speech jurisprudence is that it protects ‘the thought that we hate,’” wrote Justice Alito in Matal v. Tam (2017) – and the same goes for printing it online. Mind you, the presence of hateful speech online might go to proving motive for some other underlying offense, e.g. where the speech is threatening or interferes with a candidate for elective office, and (in the case of hate crimes) hatred is usually an aggravating factor considered at the sentencing phase.

But hate speech per se is not a crime in these United States. Hate speech per se is, under several different content-based speech statutes, capable of being a crime in England.

Similarly, “incitement” as such is not necessarily illegal to the extent that the incitement is sufficiently remote from the possibility of actual violence being carried out. The applicable rule comes from the “imminent lawless action” test set down by Brandenburg v. Ohio; advocating in favor of violence, or encouraging suicide, can accordingly be constitutionally protected (in the latter case usually, but not always – see, e.g., the manslaughter conviction of Michelle Carter, which – surprisingly – was denied cert by SCOTUS last month). This is down to the fact that the U.S. First Amendment was designed to abolish forever English political crimes like seditious libel and scandalum magnatum (an ancient fake news misdemeanor that was seldom used, as it required the prosecution to prove the publications were false – which seditious libel did not).

U.S. technology companies are not obliged to take down or remove illegal material, subject to narrow and specific statutory exceptions, and are immune from liability for its existence on their servers as long as they do not “materially develop” the content, due to the operation of Section 230 of the Communications Decency Act. That notwithstanding, they are obliged to respond to legal process from law enforcement and/or civil litigants when served with it. Accordingly, tech companies with US operations are already well placed to answer legal process in relation to suspected offenses of these types and routinely correspond with state and federal law enforcement to respond to subpoenas, search warrants, and emergency disclosure requests.

Companies also already have efficient means in place to deal with CSAM. When an interactive computer services provider in the U.S. detects CSAM, it is subject to a mandatory reporting obligation to the National Center for Missing and Exploited Children (“NCMEC”, pron. “Nick-Mick”) and must put in place a legal document hold for 90 days pending receipt of legal process from the FBI or another law enforcement agency.
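
To make the mechanics concrete, here is a minimal sketch of that report-and-preserve workflow in Python. It is purely illustrative: the `submit_to_ncmec` stub, the `ReportRecord` type, and the `store` interface are invented for this example and are not NCMEC’s actual CyberTipline API; only the 90-day preservation period comes from the statutory obligation described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch of the U.S. report-and-preserve workflow for CSAM.
# All names and interfaces here are hypothetical, not NCMEC's real API.

PRESERVATION_PERIOD = timedelta(days=90)  # statutory hold pending legal process

def submit_to_ncmec(content_id: str) -> None:
    """Stub standing in for the mandatory CyberTipline report."""
    print(f"reported {content_id} to NCMEC")

@dataclass
class ReportRecord:
    content_id: str
    reported_at: datetime
    hold_expires_at: datetime
    legal_process_received: bool = False

def handle_detected_csam(content_id: str, store) -> ReportRecord:
    """On detection: report to NCMEC, pull the content from public view,
    and preserve the underlying evidence for 90 days."""
    now = datetime.utcnow()
    submit_to_ncmec(content_id)                 # mandatory report
    store.remove_from_public_view(content_id)   # take down, but do NOT delete
    store.place_legal_hold(content_id, duration=PRESERVATION_PERIOD)
    return ReportRecord(content_id, now, now + PRESERVATION_PERIOD)

def maybe_release_hold(record: ReportRecord, store) -> None:
    """If the 90 days lapse with no subpoena or warrant, the hold may end."""
    if not record.legal_process_received and datetime.utcnow() > record.hold_expires_at:
        store.release_legal_hold(record.content_id)
```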

Second, there is a list of “harms with a less clear definition,” including cyberbullying, trolling, extremist content, “disinformation,” violent content, and advocacy of self-harm.

These categories of speech are, generally speaking, not subject to prior restraint in the United States and in some cases are in fact protected speech under the First Amendment to the U.S. Constitution.

In some cases (bullying, intimidation) the position is slightly muddier: we have to ask when this type of conduct crosses the line from free speech into a common law offense like stalking or threatening, and there may be private causes of action (libel, emotional distress, intrusion upon seclusion) which the victim can wield against the perpetrator in a court of law.


2) The British Government will force tech companies to police these “harms,” even where they are legal in England, and including many “harms” that are totally legal in the United States

As things currently stand, for the most part – aside from the mandatory reporting obligation mentioned above – US tech companies are not required to police user content. They’re required to respond to legal process from US courts. They’re required to respond to subpoenas that have been domesticated in a state where they can be served. Apart from that, they’re basically free to let their users sling whatever shit they want to at one another and are immune from civil liability for doing so under Section 230 of the Communications Decency Act, which I explain here.

The British government has decided:

The government will establish a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.

Compliance with this duty of care will be overseen and enforced by an independent regulator.

All companies in scope of the regulatory framework will need to be able to show that they are fulfilling their duty of care. Relevant terms and conditions will be required to be sufficiently clear and accessible, including to children and other vulnerable users. The regulator will assess how effectively these terms are enforced as part of any regulatory action.

The regulator will have a suite of powers to take effective enforcement action against companies that have breached their statutory duty of care. This may include the powers to issue substantial fines and to impose liability on individual members of senior management.

This proposal is actually very similar to certain provisions in the (likely unconstitutional) proposals being promulgated in the U.S. by Senator Lindsey Graham and Attorney General William Barr, cynically named the “EARN IT” Act.

People and companies complained. The British replied:

The Online Harms White Paper set out the intention to bring in a new duty of care on companies towards their users, with an independent regulator to oversee this framework. The approach will be proportionate and risk-based with the duty of care designed to ensure companies have appropriate systems and processes in place to improve the safety of their users.

The White Paper stated that the regulatory framework will apply to online providers that supply services or tools which allow, enable or facilitate users to share or discover user-generated content, or to interact with each other online. The government will set the parameters for the regulatory framework, including specifying which services are in scope of the regime, the requirements put upon them, user redress mechanisms and the enforcement powers of the regulator.

The consultation responses indicated that some respondents were concerned that the proposals could impact freedom of expression online. We recognise the critical importance of freedom of expression, and an overarching principle of the regulation of online harms is to protect users’ rights online, including the rights of children and freedom of expression. In fact, the new regulatory framework will not require the removal of specific pieces of legal content. Instead, it will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.

To ensure protections for freedom of expression, regulation will establish differentiated expectations on companies for illegal content and activity, versus conduct that may not be illegal but has the potential to cause harm, such as online bullying, intimidation in public life, or self-harm and suicide imagery.

A few things are going on here.

First, the British government claims that it is walking the proposal back because it promises to police only illegal speech and to leave other types of legal speech, e.g. “disinformation” and “trolling,” alone. (TBD pending draft regulations and codes of practice.) The problem, of course, is that British speech codes are so vaguely drafted that any speech which is even mildly offensive can be, and is, caught within the definition of “illegal content.” There are reported cases where reading verbatim from the works of Winston Churchill or the Bible has been enough to result in arrest. We’re not dealing with a free country here.

The Public Order Act 1986, the Malicious Communications Act 1988, the Communications Act 2003, the Terrorism Act 2000, the Terrorism Act 2006, and Part 3 of the Racial and Religious Hatred Act 2006 would all be struck down in the United States for want of content neutrality, for overbreadth, or for vagueness (see e.g. the ratio of Norwood v. DPP, which is the current state of the law, versus the ratio in DPP v. Redmond-Bate, which preceded Norwood, represented the law on offensive speech as it stood in 1999, and was arguably overturned by Norwood). We have seen, time and again and as I expand on more fully here, speech that would be fairly inoffensive or even benign in the U.S. draw a conviction from an English magistrates’ court which is upheld on appeal.

Second, although the framework “does not require the removal of,” i.e. does not create a regime of mandatory takedown orders for, legally compliant content (subject, of course, to the proviso that virtually any offensive speech is capable of being illegal in the UK), the framework does not need a mandatory takedown regime for the British government to be able to force companies to remove legal content of which it disapproves. This has the advantage of being more plausibly deniable: copies of specific orders saying “take down this post” signed by an OFCOM official have a “Ministry of Love” vibe to them and won’t look good in the press, whereas a policy manual saying “this type of post is harmful” won’t offend the nanny staters quite as much.

To the extent that a code of practice adopted by OFCOM penalizes social media companies for hosting speech which is highly offensive but not illegal, social media companies will be obliged to remove the content if they wish to avoid the British penalty.

See, e.g., the UK’s Counter Terrorism Internet Referral Unit, or CTIRU, operated by the Met. CTIRU sends notifications to interactive computer services providers of content the British government considers illegal under antiterrorism laws, generally extreme political content. CTIRU does not, however, issue process (e.g. search warrants) or orders (e.g. RIPA notices) with a view to ascertaining the identity of the sender and enforcing the law in relation to that content.

CTIRU is a censor. The consequence of the notification is that the provider could be held liable in a British court for the content; the e-Commerce Directive provides online service providers coverage similar to the US Section 230, but there is a proviso under Art. 14(1)(a) of the Directive that “actual knowledge” of illegal content removes that immunity.
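
The incentive structure this creates is simple enough to express in code. Below is a minimal sketch of a hypothetical provider’s notice-handling logic; the class and method names are mine, not the Directive’s or any real provider’s, but the decision structure shows why a CTIRU “notification” operates as a de facto takedown order:

```python
from datetime import datetime

class HostingProvider:
    """Hypothetical provider under Art. 14(1)(a) of the e-Commerce Directive:
    hosting immunity lasts only while the provider lacks 'actual knowledge'
    of illegal content (or acts expeditiously upon obtaining it)."""

    def __init__(self):
        self.content = {}    # content_id -> hosted post
        self.knowledge = {}  # content_id -> when notice was received

    def receive_ctiru_notification(self, content_id: str) -> None:
        # The notification itself creates "actual knowledge" -- from this
        # moment, continued hosting risks liability in a British court.
        self.knowledge[content_id] = datetime.utcnow()
        # The rational response is removal, even though no court has
        # ruled the content illegal. That is the censorship lever.
        self.content.pop(content_id, None)

    def is_immune(self, content_id: str) -> bool:
        # Immune if we never had actual knowledge, or if we removed the
        # content after receiving notice.
        return content_id not in self.knowledge or content_id not in self.content
```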

So by conceding that the

new regulatory framework will not require the removal of specific pieces of legal content

…nothing has really been conceded at all. As we said above, it’s not hard for a British prosecutor to argue that speech which offends – no matter the content – is illegal. And a promise that OFCOM won’t be able to compel the removal of specific “offensive but legal” content is not the same thing as a promise by the government that it won’t allow OFCOM to penalize social media companies for hosting it. Although the existence of penalties for failure to comply with the Online Harms regime, plus notification under Article 14(1)(a) (which still applies in Britain during the transitional period), may not formally constitute a “political content takedown order,” it effectively amounts to a strong political takedown request or suggestion, with penalties possible if enough such suggestions are ignored.

Third, it is still unclear what obligations companies will have to actually comply with. The Government says that

regulation will establish differentiated expectations on companies for illegal content and activity, versus conduct that may not be illegal but has the potential to cause harm

but of course, if we look back to the White Paper, we’re not going to know what those obligations are for some time. It seems that any Bill will delegate most of the authority for developing these responsibilities to OFCOM, the British telecommunications regulator, which will then

“[set] out what companies need to do to fulfill the duty of care, including through codes of practice” and take “prompt and effective enforcement action in the event of non-compliance (as set out in Chapter 6)”.

These powers may include, inter alia, the power to levy fines, to compel additional information regarding the breach of the practice code, to compel “third party companies to withdraw any service they provide that directly or indirectly facilitates access to the services of the first company, such as search results, app stores, or links on social media posts,” to mandate ISP blocking, and to create new crimes for failure to obey OFCOM’s diktats.

This is truly Orwellian. The British government is suggesting it should have the power to order companies falling under its jurisdiction to destroy any other company which refuses to obey British content standards but is following the content standards of its home jurisdiction (otherwise it would have been shut down already by domestic authorities).

If enacted, this would be a frontal attack on the First Amendment.

3) Global enforcement will be complicated and likely ineffective

Speaking as one who advises small companies, all of this compliance is going to be extremely burdensome and make the U.S. look like a much more attractive place to open up shop online (which it is already, but will be more so if these proposals are implemented).

Much of this will be hard to enforce. The real worst-of-the-worst baddies will not wind up using services like Facebook; they will likely run their own metal and roll their own cryptosystems. (Baddies using mainstream services give away their IPs, user agent strings, and other identifying data, which makes them easy to find.)
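
To illustrate that parenthetical, here is a hedged sketch of the request metadata an ordinary mainstream web service records as a matter of course. Flask is used purely as a familiar example; the log format is invented:

```python
from flask import Flask, request
import logging

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

@app.before_request
def log_request_metadata():
    # Routine metadata that every mainstream service sees on every request.
    # Any of these fields can identify a user once legal process
    # (subpoena, search warrant) is served on the provider.
    logging.info(
        "ip=%s user_agent=%s path=%s",
        request.remote_addr,                     # source IP address
        request.headers.get("User-Agent", "-"),  # browser/device fingerprint
        request.path,
    )
```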

Since the baddies can migrate off of Facebook as easily as one logs into another service using OAuth, the only real, effective purpose of this proposal is to turn OFCOM into an online morality regulator that makes social media firms enforce British legal conventions on speech and conduct… without the British having to expend police and court time and resources to get the desired result. As more and more decentralized content systems, e.g. ActivityPub or LBRY, crop up, it will become impossible to find a corporate entity to hold to account for web content, which will be sharded and stored overseas.

If enacted, the regime is therefore likely to affect only essentially law-abiding but politically edgy British domiciliaries, like Count Dankula, who use British services or US services big enough to want to maintain a corporate presence in the UK – that is, all of the big players, but by no means most players in numerical terms.

Companies in the United States don’t have to obey British court orders. To the extent a British regulator sought domestication of a British regulatory determination or court order in a U.S. state such that it would become binding on a U.S. person once served, under no circumstances would a British determination or court order survive that process if it were unconstitutional.

Most of the Online Harms regime, as proposed, would be unconstitutional on its face and virtually all of the Online Harms regime, as proposed, is likely to be unconstitutional as-applied. Orders issued thereunder and penalties levied will therefore be unenforceable before U.S. courts (in e.g. an MLAT procedure or where seeking to enforce a money judgment).

My prediction is that many online companies will choose to re-domicile or withdraw from the UK before subjecting themselves to this hugely burdensome regulatory regime. If this regime is enacted, OFCOM will essentially attempt to serve as the world’s morality police; it will not, however, have any power outside of the UK’s territorial boundaries.

If the UK feels like destroying its tech industry with burdensome regulation and extremely labor-heavy (and legal advice-heavy) compliance obligations, go right ahead, knock yourselves out. More billable hours for me.

Three facts belie the stated purpose of the proposal, to “impose a duty of care to protect social media users from online harms.”

  • First, the UK government and its counterparts in the USA already have adequate powers to address serious crime.
  • Second, both governments have inadequate powers to restrain trolling and offensive political rhetoric – in the US because of the First Amendment, and in the UK, because there are not enough prosecutors and police to investigate, try, and convict every Internet troll that violates the provisions of the Communications Act 2003.
  • Third, trolling and offensiveness are what users want. If users didn’t want to encounter trolls and edgy politics on the Internet, they would not be on the Internet or have social media accounts. If they dislike the experience, they are perfectly capable of logging off, using their block buttons, or, in extreme cases, bringing an action against other internet users.

So we see the purpose of the proposal is not to restrict content to protect the people, who are perfectly capable of protecting themselves. It is to protect the state. In particular it serves to protect those programmatic objectives of the state which are most subject to vitriolic criticism on the Internet, as well as adjacent “offensive” content, in relation to which prosecutors will use the broad discretion granted to them under English speech codes to suppress and terrorize anyone who dares possess and express a controversial, irreverent, or iconoclastic thought.
