I was on Galaxy Digital’s podcast, Galaxy Brains, talking to my friend Alex Thorn about legal issues worthy of consideration in relation to the recent arrest of Telegram founder Pavel Durov. Links to various platforms below:
Today we learn that Pavel Durov, founder of the popular messaging app Telegram, has been arrested as his private jet landed in France.
Early indications are that the arrest stems from Telegram’s alleged noncompliance with French requests for content moderation and data disclosure:
BREAKING: #Telegram CEO Pavel Durov arrested by French authorities.
Early official comments to French media suggest this follows from France's displeasure with Telegram's moderation & compliance with official requests(?).
What does this mean for you, my readers (predominantly American tech people)?
A bit of legal background is called for. Most social media companies of global significance that are not Chinese are headquartered in the United States. This is no accident; the United States (wisely) undertook policy moves in the mid-1990s to minimize liability for the operators of online services, most notably the 1996 enactment of Section 230 of the Communications Decency Act, which (essentially) says that operators of social media websites are not liable for the torts or crimes of their users.
There are, of course, some very narrow exceptions to this rule. For example, illegal pornography is subject to a mandatory takedown-and-reporting regime under 18 U.S.C. § 2258A. I hasten to add that complying with this law is table stakes for a user-generated content business, compliance tools like PhotoDNA are widely available and free to use, and I would be shocked if Telegram didn’t comply. There’s also FOSTA-SESTA, which prohibits operators of online platforms from knowingly running services that support sex trafficking or prostitution with the intent to facilitate the same (see United States v. Lacey et al. (Backpage); 47 U.S.C. § 230(e)(5); 18 U.S.C. § 2421A). This law is a major compliance concern for any dating app, and is a reason why sites like Craigslist, which once had “personals” listings that were non-core to the rest of their offering, got rid of those “dating”-specific features. One can still post a prostitution ad in the used boat listings, but Craigslist can’t be said to have intended to facilitate that behavior.
Other than that, though, social media website operators are generally not liable for the torts or crimes of their users. Nor are they liable under aider/abettor theories if they just passively host the content. (See Twitter v. Taamneh, 598 U.S. 471 (2023): civil liability for aiding and abetting, at least on this side of the pond, requires “knowing and substantial assistance,” and federal criminal liability – state criminal law being preempted by Section 230 – requires specific intent to assist in the commission of a crime.)
This means that if, for example, I use Facebook to organize drug deals, Facebook (a) is under no obligation to scan its services for unlawful use and (b) is under no obligation to restrain that use. Facebook will generally be civilly immune for my misuse unless it “materially contributes” to – i.e., specifically encourages – that unlawful use (see e.g. Force v. Facebook, 934 F.3d 53 (2d Cir. 2019), where Facebook was held not civilly liable under JASTA to victims of Hamas, which used Facebook to disseminate propaganda online; see also Taamneh, supra). Nor will Facebook be liable criminally: (a) under state criminal law, by operation of Section 230, and (b) under federal criminal law, to the extent it does not willfully and knowingly aid, abet, counsel, or procure the commission of the offense with specific intent, per 18 U.S.C. § 2.
Most countries do not have such a permissive regime, and France is part of that group. In 2020, for example, France enacted the Loi contre la haine sur Internet (the law against hate speech on the Internet), under which global Internet companies can be fined $1.4 million per instance, and up to 4% of their total worldwide revenue, for failing to remove “hate speech” (which in the United States constitutes “protected speech”) from their websites. Similarly, Germany has the Netzwerkdurchsetzungsgesetz or “Network Enforcement Act” (sometimes referred to as the “Facebook-Gesetz” but more commonly known by its acronym, the NetzDG), under which politically inflammatory content must come down or the government has the power to impose fines of up to EUR 50 million.
Not being a French lawyer, I find it difficult to figure out exactly what legislative provisions are being invoked here. The charging documents or the warrant will tell us more when they’re published. I’m pretty sure we’re not looking at fine proceedings against Telegram Messenger, Inc. under, say, the hate speech law or the EU DSA, because if we were, Durov would not have been dragged off a plane in handcuffs. TF1 Info, the French media outlet which broke the story, suggests that the charges might be something along the lines of an aiding and abetting offense, or possibly conspiracy:
[The Ministry of] Justice considers that the lack of moderation, cooperation with law enforcement and the tools offered by Telegram (disposable number, cryptocurrencies, etc.) make it an accomplice in drug trafficking… and fraud.
More will be revealed when the arrest warrant is made public. If, for example, it is revealed that Durov did in fact actively assist criminal users of the platform – say, a drug dealer wrote to the support channel stating, “I would like to sell drugs on your platform. How do I do this?” and Durov replied with assistance – then his goose would be just as cooked in America as it would be in France.
If, however, the French are simply saying that Durov’s failure to police his users or respond promptly to French document requests is the crime (which I suspect is the case), then this represents a dramatic escalation in the online censorship wars. What it means is that European states are going to try to extraterritorially dictate to foreign companies what content those companies can and cannot host on foreign-based webservers.
If correct, this would represent a major departure from the U.S.-compliant approach most U.S.-headquartered social companies currently take, which has generally governed the global compliance strategies of most non-China social media companies, including any which offer greater or lesser degrees of full encryption on their services (Telegram’s “Secret Chats” feature, WhatsApp, and Signal among them). In brief, platforms thought that if they didn’t specifically intend their platforms to be put to criminal use, they were unlikely to find themselves on the receiving end of criminal charges. That’s not true anymore, apparently.
Telegram is not the only company in the world whose social media platform is used for unlawful purposes. Facebook’s popular encrypted messaging app WhatsApp has famously been used for years by the Taliban, the erstwhile non-state terror organization in, and now rulers of, Afghanistan. This fact was widely known by NATO generals and reported in the press during the Afghan war, and was reported on again in the New York Times as recently as last year:
About a month after Mr. Inqayad, the security officer, was unable to reach his commanders during the night operation, he begrudgingly bought a new SIM card, opened a new WhatsApp account and began the process of recovering lost phone numbers and rejoining WhatsApp groups.
Sitting at his police post, a refurbished shipping container with a hand-held radio, Mr. Inqayad pulled out his phone and began scrolling through his new account. He pointed out all of the groups he is a part of: one for all of the police in his district, another for the former fighters loyal to a single commander, a third he uses to communicate with his superiors at headquarters. In all, he says, he is a part of around 80 WhatsApp groups — more than a dozen of which are used for official government purposes.
Of course, the Taliban is now Afghanistan’s entire government – at all levels – and Afghanistan is an enemy of the United States, Facebook’s home country. If Facebook were serious about keeping guys like this off their services, the most effective way to do so wouldn’t be by playing whack-a-mole with individual government employees, as Facebook does, but rather by banning the entirety of Afghanistan’s IP range and all Afghan phone numbers, and disabling app downloads in-country, which Facebook does not. Facebook chooses the ineffective measures rather than the effective ones.
Yet, Facebook CEO Mark Zuckerberg lives comfortably in an estate in Hawaii, rather than in exile, and presumably doesn’t have a warrant out for his arrest in any country, whereas Durov obviously did. I grant it is possible (even probable, given that Telegram runs on a skeleton crew of 15 engineers and approximately 100 staff worldwide) that Facebook is more responsive to French judicial requests than Telegram. However, when you’re running a globally accessible encrypted platform, it is inevitable – repeat, inevitable, as in an absolute certainty – that criminal activity will take place that is beyond your view or your ability to moderate.
If Telegram stands accused of violating French law because of its failure to moderate, as media reports indicate, an app like Signal – which demonstrably cannot respond to law enforcement requests seeking content data and has similar features to Telegram – is guilty, too, and no U.S. social company that offers end-to-end encryption (or its senior leadership) is safe. Do we really think Meredith Whittaker should wind up in prison if she decides to go to France?
Many questions remain. For now, this is not looking good for the future of interactive web services based in Europe. American tech entrepreneurs who run their services in accordance with American values – free speech and privacy through strong encryption, in particular – should not visit Europe, should not hire in Europe, and should not host infrastructure in Europe until this situation is resolved.
UPDATE, 26 AUGUST 2024
Basically my hunch was correct:
Finally the charges against Pavel Durov has been published:
– Complicity – Administration of an online platform to allow an illegal transaction in an organised band,
There’s a laundry list of crimes there. Most of them relate to the French crime of complicité, which roughly equates to American aider/abettor liability – knowingly facilitating, helping, or assisting the preparation of a criminal offense; procuring an offense (through a gift, promise, threat, order, or abuse of authority or power); or giving instructions to commit an offense (see Articles 121-6 and 121-7 of the French Code pénal).
What’s important here is that in the U.S., aider/abettor liability requires a specific intent to bring about the criminal result – that is, the commission of the crime is the defendant’s object. U.S. social media companies simply failing to police their users doesn’t rise to this level, which is why U.S. social media company CEOs don’t, as a general rule, get arrested for the crimes of their users by the American government. The CSAM allegations, in particular, would only rise to the level of a crime in the USA if Durov failed to comply with the notice-and-reporting regime the U.S. has for such content. The simple existence of the criminal content absent any notice doesn’t give rise to criminal liability.
Here, the French government is accusing Durov of being complicit with – i.e., aiding and abetting – criminal activity, and also of the unlicensed provision of “cryptological” software, certain encryption products being subject to prior government authorization before they may be offered in France. The list of crimes he’s accused of facilitating includes what appears to be a rough approximation of criminal RICO, CSAM, money laundering, narcotics trafficking, hacking, and providing unlicensed cryptography.
It would make zero sense for Durov to do any of these things with specific intent. For example, intentionally engaging in narcotics trafficking is illegal virtually everywhere on Earth; the crime is punishable by death in the United Arab Emirates, where Durov is a citizen and ordinarily resides, and can attract up to a life term in the United States, which is historically very good at extraditing people.
We’ll need to wait for the evidence to come out before reaching any firm conclusions on this point. If I had to guess, in a world where every platform hosts unlawful activity to some extent, this looks like selective enforcement. I would also guess that Durov was not “aiding and abetting” as the U.S. would understand it, and that this French enforcement action is an overbroad application of French law to punish a perceived political enemy – the French security state trying to use local doctrines in a novel way to police a foreign company whose moderation policies it (and likely each of its security cooperation partners in the EU and across the Channel in the UK) regards as too lax.
In the absence of a lot of evidence showing that Durov and Telegram specifically intended to commit these crimes or bring them about, there is no reason why similar charges could not be laid against any other provider of social media services in France whose moderation practices are anything less than perfect, in particular social media services which provide end-to-end encryption.
Summing up: for the time being, if you run a social media company or provide encrypted messaging services accessible in France, and you’re based in the United States, get out of Europe.
I was in England last week and fired some editorial broadsides against the British online censorship apparatus, which I share here for my regular readers.
The first was a piece in Pirate Wires coauthored with my friend Allen Farrington (paywalled, free two-week trial though, worth signing up as the content is great):
The second, a GB News spot explaining my 2020 proposal for a UK Free Speech Act:
American free speech lawyer @prestonjbyrne goes on GB News to explain how the British aren’t free to speak. He explains in clear terms the legal differences between the US & UK that allow the latter’s citizens to be prosecuted & jailed for speech. pic.twitter.com/qCOcMWrqmg
Given current events, this is the first of a lot more I’m going to have to say about UK censorship over the coming months. It’s really refreshing to see the level of interest in reforming free speech law in the UK, which is the highest I have ever seen it.
For context, I’ve been tracking this issue very closely since 2010. The Human Rights Act 1998, which incorporated the European Convention into domestic law and operates as the foundational law on freedom of speech in the United Kingdom today, entered into force in 2000; many of the appellate precedents which enable censorship by first-instance judges today (Norwood, Abdul, and Hammond) were decided in the first decade of the 21st century.
ITV reporters asked the Prime Minister this morning whether online right-wing figures Stephen Lennon (aka Tommy Robinson) and Andrew Tate should be banned from the Internet.
Robinson himself, who has fled the country and is currently living overseas, posted this clip of the Prime Minister’s response to this question:
ITV asks the UK Prime Minister if me and @Cobratate "should be allowed on social media"?.
🤣 What does that even mean?
They hate not having control of the narrative anymore.
None of us have "incited" anything, merely commented on it, showed bits they've hidden. pic.twitter.com/HaSiUMaMND
“The law applies online. But if you’re inciting violence… equally, anyone who has been found to have committed a criminal offence online can expect the same response.”
To understand what, exactly, the PM means by this, it is important to understand what, exactly, constitutes an offence online in England and Wales. This is a free speech question: not because all online speech is free speech, but because online speech is fundamentally incapable of causing a direct consequence in the physical world without additional, causally remote human intervention. The default position we should therefore assume is that the overwhelming majority of online speech will be lawful, and only in very rare exceptions will it be unlawful.
Using the Internet to post unlawful threats, for example, is not permissible anywhere in the world, including countries with the strongest free speech protections of all (to wit, the United States). The same is true of “direct incitement” (more on that below).
Using the Internet to advocate for violence or cheer it on, without engaging in “direct incitement” (so-called “indirect incitement”), is another matter. Indirect incitement is legal in the United States but illegal in much of the rest of the world.
Using the Internet to express support for the political aims of the rioters whilst not directly encouraging violence is also (quite unambiguously) allowed in the U.S., but depending on how the messages are interpreted by the hearers, could constitute a criminal offence in the UK.
The difference between these categories of speech is not widely known, acknowledged, or understood by UK politicians, prosecutors, judges, or voters. This is because the UK legal system, which has never had a legal provision like the First Amendment, lacks the doctrine to draw distinctions between them.
As a result, daring to utter speech that the state disapproves of – and by this, we mean speech which is not aligned with conventional wisdom held by large swathes of the civil service and opinion-makers who influence whether executive action does or does not happen, particularly in law enforcement – can get you imprisoned. There are of course some legal complexities around how a prosecutor gets to that point, but in its essence, that’s basically how “free” speech in the UK works.
In the U.S. the basic operating assumption is that virtually all political speech is allowed
As a general rule, written or spoken political speech in the United States is not censorable or punishable by the state unless it falls within a limited number of categories. These include true threats (see e.g. Watts v. United States), revealing classified information as one who has an obligation to retain its secrecy (but not as a journalist – see e.g. New York Times v. United States), communications regarding a conspiracy to commit some other criminal act, and direct incitement. Speech which the government may not restrain includes indirect incitement to violence (Brandenburg v. Ohio) and discriminatory expression, even when such expression is couched in deliberately offensive terms (National Socialist Party of America v. Skokie; Matal v. Tam).
There is also the (constitutionally questionable) “fighting words” doctrine of Chaplinsky v. New Hampshire, which is sometimes invoked in arrests and prosecutions but is of questionable application online, given that it is expressed to apply to situations where there is a risk of a breach of the peace (i.e., face-to-face encounters).
The basic starting position for criminal liability, then, is that speech which expresses offensive or even hateful thought is not unlawful in the United States. One exception to this rule is “direct incitement,” a specific category of political speech defined in U.S. First Amendment jurisprudence in the case Brandenburg mentioned above.
Speech which constitutes direct incitement to lawbreaking is speech which is (a) directed towards the incitement or production of imminent lawless action and (b) likely to produce or incite such action. An example would be suggesting to a mob that it would be a good idea to beat up a nearby lone counterprotestor, in the physical presence of both the mob and the counterprotestor. Advocacy of illegal action is permitted, however, where the limbs of the Brandenburg test are not satisfied.
So, for example, an online post stating that it was morally right and proper to beat up counterprotestors – or, in another example, providing a moral defense of looting during a period of civil unrest, as the book In Defense of Looting did in 2020 – might get you put on a watch list, but it shouldn’t result in your arrest. Such speech – advocacy without proximity and imminence, including “extreme” political speech advocating revolution, overthrow of the government, or illegality, which falls short of “direct incitement” – is what the U.S. terms “indirect incitement.”
And in the U.S., even this sort of speech is allowed.
In the U.K., the basic operating assumption is that “extreme” political speech is not allowed
The basic operating assumption for the UK is that freedom of speech, as Americans understand the term, does not exist in the United Kingdom.
No U.S. politician would ever be asked, “should the government ban [x] from using the Internet because of their political positions?” Because the answer, every time, at every level of government, would be a resounding “no.”
In the UK, all of the categories of banned speech in America are also banned: threats, leaking intelligence secrets, and conspiracy, for example. These are not freedom of speech problems.
Where the jurisdictions diverge is that in the UK, political speech which would be allowed in the United States is banned or bannable. This applies not just online but across multiple domains: in the streets, spoken or written, whether incitement or not.
It appears that arrests are already being made in relation to the offence under Section 127 of the Communications Act 2003. That section makes it a crime for a person to “[send] by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene, or menacing character,” or, “for the purpose of causing annoyance, inconvenience, or needless anxiety to another,” to send such a message or cause such a message to be sent.
The leading case on what this means is DPP v Collins [2006] UKHL 40. In Collins, the defendant, a man of the age we would now term a “Boomer,” “made a number of telephone calls” to his local MP, leaving recorded messages about immigration policy and referring to various ethnic groups by various ethnic slurs. The lower courts held that while the language was offensive, it was not grossly offensive, and so a conviction could not be sustained.
The House of Lords, then the UK’s highest court, disagreed. While finding the language “grossly offensive,” the court – by its own admission – declined to articulate any objective principle by which speech might be tested and determined to fall within or outside of the range of acceptable conduct except by sticking a finger in the wind and, entirely unscientifically and subjectively, guessing what an indeterminate number of other people, who are not witnesses or parties to the case, and whose views are not in evidence, are likely to think about the speech in question:
“Justices must apply the standards of an open and just multi-racial society, and that the words must be judged taking account of their context and all relevant circumstances. I would agree also. Usages and sensitivities may change over time. Language otherwise insulting may be used in an unpejorative, even affectionate, way, or may be adopted as a badge of honour (“Old Contemptibles”). There can be no yardstick of gross offensiveness otherwise than by the application of reasonably, but not perfectionist, contemporary standards to the particular message sent in its particular context. The test is whether a message is couched in terms liable to cause gross offence to those to whom it relates.”
As we can see, this is a much lower bar than “direct incitement” and arguably even lower than “indirect incitement” – in that the speech Section 127 captures simply requires offence (and intent to offend), and little else. The kind of speech that can be controlled online in the UK is thus a vastly larger superset inclusive of, but stretching far beyond, what is banned in the United States, and includes speech which is intentionally at the center of the First Amendment protection.
For a justice of Lord Bingham’s stature, it is remarkable how shortsighted he was in formulating this test in these terms. In case it is not obvious, such a test installs those who would take offence – rather than those who would give it – as the ultimate arbiters of whether speech is acceptable, or not.
Like so many other UK speech codes, Section 127 of the Communications Act 2003 is structured as a heckler’s veto. Which vetoes wind up having force of law depends on how the permanent bureaucracy interprets Lord Bingham’s “contemporary standards to the particular message sent in its particular context” and decides to employ the enormous discretion the law grants it.
Charging decisions in this area are, naturally, political. We saw this subjectivity in play when Scotland’s sweeping hate crime legislation entered into force in April of this year, only for Police Scotland to be inundated with thousands of complaints about speech and behaviour by then-Scottish First Minister Humza Yousaf – speech which would plainly have violated the facial provisions of that Act had it been uttered while the Act was in force.
The Scottish police responded to the deluge of complaints by ignoring most of them, i.e., exercising law enforcement discretion to interpret behavior as being not captured by the overbroad law even when the plain language of the law, and past practice in relation to similar laws (see e.g. the inclusion of Tory MP Murdo Fraser on the Scottish “Non-Crime Hate Incident” log for social media comments he made) suggests otherwise.
Who, generally speaking, isn’t rescued by the exercise of that discretion? Why, holders of views that offend, of course – provided the level of public outrage over that offence rises high enough for the Director of Public Prosecutions to notice, and no other political interest group with any power is likely to raise an issue about it. In the past, the targets have included a Glaswegian who said “the only good Brit soldier is a deed (dead) one” on the death of national centenarian folk hero Sir Tom Moore; a group of Metropolitan Police officers who sent offensive messages to each other in a private WhatsApp group (prompting the Spectator magazine to ask, “Have we got to the position where we are policing private speech for politeness?” Answer: yes); and Matthew Woods, jailed for 12 weeks for making an offensive joke on Facebook about a missing child while drunk.
Anyone who has deliberately told an off-color joke which is readable or hearable within the UK has likely violated this law – likely a substantial plurality, if not the majority, of the population. But the British state is unlikely to go after all of these statements, as doing so would be political suicide. Instead, it picks on easy targets and relies on those prosecutions to chill speech among the rest of the country.
Nor should I be especially surprised if the UK were to use certain provisions of the Terrorism Act 2006 – specifically the “encouragement” offence – to prosecute a number of the biggest online cheerleaders of the riots. This would represent a substantial escalation in the country’s willingness to use draconian measures to suppress controversial but widely-held opinions.
Many British free speech activists say they are fighting to “preserve free speech in the UK.” They are too optimistic. My position – that free speech doesn’t exist in the UK – is based on the fact that the test of free speech occurs at the margins, and it is at the margins where the UK engages in extreme degrees of censorship.
That the range of permissible opinions in the UK is broader than that in North Korea is not a question of kind, but of magnitude. In both places, you can still go to prison for daring to express wrongthink.
As long as that sanction still exists, pretending that the UK has “free speech” is negotiating over the boundary of acceptable nonviolent expression. What is needed to have a free speech right worthy of the name is the elimination of that boundary.
The UK should decriminalize political speech and use the resources freed up to crack down harder on public disorder
In a multiethnic democracy of nearly 70 million people, discussion of politics is likely to be heated and is likely to cause offence to a degree far greater than occurred in any of the above cases. Roughly a fifth of that population is hard-left and roughly a fifth is hard-right. It is inconceivable that there is any partisan opinion on any issue of consequence which is incapable of being expressed in a manner that causes grave offence to at least some double-digit portion of the UK’s residents.
The United Kingdom’s longstanding and current method for dealing with these politically inconvenient opinions has been to arrest its way out of the problem. This has failed. Possessing broad prosecutorial discretion to punish speech did not prevent these riots or dampen the spread of the viewpoints of those engaged in them; if anything, it may have aggravated them, as the perception of bias arising from the exercise of that discretion – the “two tier policing” accusation circulating in British political discourse – turned into a propaganda tool for the rioters and their apologists. Take away the discretion, by legalizing expression, and the Government would deny its detractors that very compelling rhetorical win.
I anticipate many arrests from this unrest, as the Government promised. Many of these will be for threats, direct incitement, and the coordination of illegal activity such as the burning down of hotel facilities housing asylum seekers. That sort of action is not “free speech” anywhere in the world and is rightly illegal.
We will also see hundreds of online posters arrested and charged for expressing political opinions in a manner that would be lawful to express in the United States.
The policy question is whether this latter category of defendants should be defendants at all. In my opinion the answer to that question is “no.”
Viewpoint suppression doesn’t work. It just pisses people off and increases the potency of the public backlash when efforts at preference falsification inevitably fall apart. The UK should consider treating speech as America does: as an emergency pressure-release valve for political tension, giving residents an incentive to work out their differences in the marketplace of ideas instead of the streets. If throwing a brick and posting a tweet will each result in a criminal record, all things being equal, a lot of angry people will choose the brick.
Such changes would require a radical reimagining of UK free speech law, the decriminalization of online political speech falling short of threats or direct incitement, and the redirection of the substantial police resources currently focused on it.
The UK Labour Party has announced that it is going to “pause” implementation of the Higher Education (Free Speech) Act 2023 (the “Act”). This will, in all probability, be followed by a repeal.
This is a good thing.
The Act, when enacted, was celebrated by British free speech activists such as Baroness Fox and the Free Speech Union. I am not sure why: far from creating a broadly-enforceable right for university students to speak their mind, the law instead created duties on the schools themselves – institutions which serve as temporary babysitters – to protect those freedoms in accordance with rules they themselves set, so-called “codes of practice.”
Solemnly ordering foxes to write rules for the henhouse is a curious approach to reducing predatory behaviour. It is all the more curious when we consider that the Act’s penalty for failing to comply with those duties took the form of a new civil action for “loss” suffered as a result, payable through taxpayer funds (as nearly all UK universities are public bodies). In most cases that loss will amount to travel, lodging, and booking expenses which, added together, likely cost less than the legal fees required to file a lawsuit – meaning that such lawsuits are exceedingly unlikely to be filed.
Nor does the bill address the deeper problems with free speech in British society – on the streets, in print, or online. The Israel-Palestine protests have seen Jews threatened with arrest for wearing a kippah in public, and pro-Palestine protestors arrested for taunting counterprotestors and nothing more.
The picture online – the primary forum for political thought everywhere in the world – is no better, with long-standing censorship laws like the Malicious Communications Act and Section 127 of the Communications Act 2003 still in force, and the new Online Safety Act promising to impose duties on social media companies to act as content police for any material deemed violative of a nebulous “duty of care” imposed upon them by the British state.
In Scotland the position is even worse, where viewpoint discrimination has been expressly baked into substantive law.
I thus celebrate the Act’s untimely death, and the fact that it happened under a Labour government. This is not because censorship isn’t one of the most pressing civil rights issues in the United Kingdom today, if not the most pressing (it is and continues to be), but because perhaps now outfits like the FSU, which have tried to play things safe and down the middle – sending a couple of dozen strongly-worded letters to employers and adhering to strict political neutrality – will come to realize
that free speech is a partisan issue;
as such, protecting free speech requires an organized partisan response through the political system; and
that anything short of profound reform to UK free speech law which entitles those victimized by public bodies to permanently enjoin them from the infringing conduct, and does so across all domains, is a failure.
The Adam Smith Institute’s draft UK Free Speech Act was written with the above in mind. A milquetoast bill which penalizes censorship with a nominal payment from the taxpayer for a hotel bill is not a solution to the wide-ranging problems the UK faces. No progress will be made by crowdfunding piecemeal victories under the Equality Act before employment tribunals. What is needed is a political effort, undertaken in conjunction with political parties, to enact a pervasive new bedrock legal principle, one that smashes existing norms and practices across society, by entitling a claimant to obtain permanent injunctive relief against any public body, from a university to Ofcom, from the police to the courts themselves, daring to choose what opinions people can and cannot express.
As we have seen with the quick work the Labour government made of the Act, anything short of radicalism in this matter is easily brushed aside with even the gentlest change of the winds.