Not Legal Advice, 4/28/20 – With Kik and Telegram Cases, the SEC Tries to Kill the SAFT

Check it out over on CoinDesk.


Not Legal Advice, 11 March 2020 – How to ensure your startup survives the Coronavirus

As seen on CoinDesk, welcome back to another edition of Not Legal Advice.

This week, we take a slight detour from securities regulation and statutory interpretation into the nitty-gritty of running a company in the middle of a global crisis, something which – fundamentally – involves thorny legal problems.

What everyone needs to remember is that the coronavirus outbreak is not the end of the world. It sucks, but when it burns out – as it surely must – life will return to normal and all of the assets will be very, very cheap.

See also: The Markets Were Already Vulnerable, Then Came Coronavirus

This isn’t the world’s first recession and it won’t be the last. It’s not the world’s first pandemic and it won’t be the last. The key for entrepreneurs is to keep a cool head, avoid doing anything stupid (if you have never used firearms, for example, now isn’t the time to acquire one and start carrying it while wearing a gas mask on city streets), and adopt a war footing while you steer your company through choppy waters for the next 12-18 months.

While the crisis persists, your company will have obligations it is expected to perform. When the crisis recedes and the courts reopen, your company will need to provide an accounting of its obligations and answer for any it has fallen short on in the meantime.

Here’s how:

1. Protect your employees.

In my opinion, the first job of early-stage founders isn’t to protect their investors, but rather their employees.

While the formal legal duty of an officer of a company is to promote the success of the company for the benefit of its members, early-stage firms usually fall into one of two buckets – founder-owned, or founder-and-VC-owned – and the identity of the shareholders changes a lot about where a company’s business priorities tend to lie.

In my experience, purely founder-owned companies tend to view their closest staff – who help the company make money – as assets, and regard VCs as a distraction.

Founder-and-VC-owned companies, on the other hand, tend to regard their investors and investor relationships as a major asset of the company, at least until they manage to get the business moving under its own power. Investors’ interests tend to take precedence in such businesses.

There’s nothing wrong with either approach; sometimes the tech you’re building is so early stage that you have no choice but to accept investor funds if you want to spin up a business. However, keep in mind that (a) venture investment accepts a high degree of failure as inevitable and (b) failing to keep your employees safe from an epidemic may result in the sickness or death of the employee, possible onward transmission to third parties and adverse health consequences for you, your business, and society at large.

Put another way, the venture investors can afford to lose a little money. Your employees can’t afford to get sick. Now, not next week, not tomorrow, but today, is the time to write up and plan to implement policies around halting staff travel, staggered off-peak commuting, modified paid sick leave and disability cover, and working from home.

Communicate these policies to your employees. See e.g. Coinbase’s contingency plan as an example of best practice. These things may result in a slight reduction of productivity or less “face time” in the office, but they will save lives and they will protect your workforce – people you will have to work with again, face to face, once the epidemic subsides.

2. Cut your burn rate. Now.

When the Saudis dropped the OPEC equivalent of a nuclear weapon on the markets, tanking the price of a barrel of Brent crude to $30, it became clear that the coronavirus crash was going to have some wider consequences for the U.S. economy – chiefly, the bankruptcies of many middle American shale oil firms.

These companies will be among the casualties of the coming recession. If you don’t want to be a statistic, you absolutely must plan for at least a year of highly adverse business conditions.

Don’t wait for things to turn around or hope that the markets will rebound; previous globe-spanning epidemics have taken 12-18 months to fully shake out and, absent a pharmacological intervention which renders the coronavirus epidemic an unpleasant but nonlethal illness, you should plan for the next year to be a very bumpy ride. Expensive office space, dead weight on the team – all of it needs to go, now. Don’t be afraid to make hard calls.
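The arithmetic here is brutally simple, and worth running on your own numbers. A minimal sketch – the figures below are hypothetical, not advice – of the runway math behind the 12-18 month planning horizon:

```python
# Back-of-the-envelope runway math. All figures are hypothetical.
def runway_months(cash_on_hand: float, net_monthly_burn: float) -> float:
    """Months of operation left at the current net burn rate."""
    return cash_on_hand / net_monthly_burn

cash = 1_200_000   # cash in the bank (hypothetical)
burn = 100_000     # net monthly burn: expenses minus revenue (hypothetical)

print(runway_months(cash, burn))           # 12.0 months -- the low end of the range
print(runway_months(cash, burn * 2 / 3))   # 18.0 months -- after cutting burn by a third
```

If the first number comes out under 18, the cuts described above are not optional.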

3. Whatever the deal is, close it. Now.

To quote Ryan Selkis, “The startup fundraising market just got absolutely f*cking walloped. Sequoia’s ‘Black Swan’ post will spook dealmakers, and lead to recut deals, startup layoffs, and distressed M&A.”

Following the above, if there’s a deal on the table – either an acquisition or a venture financing – on less than optimal but nonetheless acceptable terms, take it. Now is the time to go on offense in terms of closing any commercial transaction that will facilitate your business’ short-term survival or any return of capital for yourself or your investors.

See also: Bitcoiners in Europe Reflect on Economic Shocks as Coronavirus Spreads

The same applies to closing new customers. If you’ve only got 12-18 months of runway, start grinding on revenue – now.

VC investors are herd animals. Right now that herd is living out the conspiratorial prepper fantasy we in the tech crowd have entertained for years: hoarding freeze dried food, buying crossbows (most VCs live in San Francisco or New York, so they can’t own firearms) and preparing to hole up in bunkers or Bay Area apartments for the long haul. Your startup is not at the top of their list.

4. Review your insurance.

If you’re looking to get insurance coverage for the coronavirus and related business interruptions, I have bad news – there probably isn’t a prospective fix here, and it’s possible that a lot of markets that did cover this type of risk might go out of business.

This doesn’t mean that you’re not covered at all or that you shouldn’t put certain types of cover in place. If you’re a very early stage company, you’ll probably want to put in place the basic coverage – e.g. general liability – that some of your contracts and leases will require you to have.

If you already have insurance in place, review your policies. It’s possible to find coverage in surprising places – and the assistance of counsel can help you uncover it. If you manage to uncover a policy which happens to cover a coronavirus-related loss, reach out to counsel before you file the claim to increase the likelihood of its success.

5. Review and restructure your contracts.

As part of your burn rate review, look at your supplier agreements, lease agreements, and other agreements that are costing you a lot of money and which you might do better without. If there’s a force majeure clause that permits you to terminate the agreement, consider whether invoking it makes sense.

If the coronavirus has interfered with the contract such that performing it has been rendered essentially impossible, there may also be common law remedies like frustration or impossibility which you can invoke. There may even be express early termination provisions that are directly on point. If you know the terms of your contracts, you will know which ones you can jettison.

Even if you think you can’t jettison them, it might be worth approaching your counterparties to try to restructure the deal. You won’t get a rent abatement, or a release from a fixed-term agreement, if you don’t ask for it. A mutually agreed restructuring ahead of time is nearly always preferable to acrimonious litigation after the fact.

6. Put in place a succession plan.

In the eyes of a virus, a CEO and an intern are exactly the same; indeed, if the CEO is older, the CEO is likely more vulnerable to the virus than more junior members of staff.

Don’t, like Quadriga, give one person the keys to the entire kingdom. Have disaster recovery plans in place and a chain of command so that if one member of staff is taken ill or dies, the company can continue operating as a going concern. Back up your data in multiple geographic locales.
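On the “multiple geographic locales” point, even a crude script beats nothing. Here is a minimal sketch assuming AWS S3 and the boto3 client – the bucket names and object key are hypothetical placeholders, and a real plan would also test restores:

```python
# Minimal sketch: copy a backup object into buckets in two other AWS
# regions, so that a single data-center outage (or a single absent
# keyholder) doesn't strand the data. Assumes boto3 is installed and
# credentials are configured; all names below are hypothetical.
import boto3

SOURCE_BUCKET = "acme-backups-us-east-1"              # hypothetical
REPLICA_BUCKETS = {
    "eu-west-1": "acme-backups-eu-west-1",            # hypothetical
    "ap-southeast-2": "acme-backups-ap-southeast-2",  # hypothetical
}

def replicate_backup(key: str) -> None:
    """Copy one object from the source bucket into each replica bucket."""
    for region, bucket in REPLICA_BUCKETS.items():
        s3 = boto3.client("s3", region_name=region)
        s3.copy_object(
            Bucket=bucket,
            Key=key,
            CopySource={"Bucket": SOURCE_BUCKET, "Key": key},
        )

replicate_backup("db-snapshots/2020-03-11.sql.gz")
```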

Stay safe out there.

Not Legal Advice, 2 March 2020 – The States Can’t Blockchain

As seen on CoinDesk, welcome back to another edition of Not Legal Advice.

I’ll let everyone reading this column in on a little secret: The definitions of “blockchain tech” used by various state legislatures to look technologically astute are something of a running joke among the hardcore crypto-lawyer set.

One exception to this is the definition used by Vermont and California, the least-bad definition of a chain I’ve read so far. Those laws refer to “a mathematically secured, chronological, and decentralized ledger or database.”

Simple, straight, to the point. I give California and Vermont a solid C-minus: the definition hits the high notes, but it also probably captures an instance of Postgres-XL that stores passwords as MD5 hashes. This is quite obviously not what the definition is supposed to do, but because it’s poorly drafted, that’s what it does.
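To see how little work that statutory wording actually does, here is a toy sketch – purely hypothetical, and emphatically not a blockchain – of the kind of password table the definition arguably captures:

```python
# Toy illustration of the over-breadth: an ordinary append-only table of
# MD5-hashed passwords is arguably "mathematically secured" (hashed),
# "chronological" (timestamped) and, if replicated across a few database
# nodes, "decentralized". All names here are hypothetical.
import hashlib
from datetime import datetime, timezone

rows = []  # stand-in for a replicated SQL table

def store_user(username: str, password: str) -> None:
    rows.append({
        "created_at": datetime.now(timezone.utc).isoformat(),  # "chronological"
        "username": username,
        "password_md5": hashlib.md5(password.encode()).hexdigest(),  # "mathematically secured," barely
    })

store_user("alice", "hunter2")
print(rows[0])
```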

Other states are far, far worse. Take, for example, Arizona’s definition, which says “blockchain technology” is

“a distributed, decentralized, shared and replicated ledger, which may be public or private, permissioned or permissionless, or driven by tokenized crypto economics or tokenless… protected with cryptography, is immutable and auditable and provides an uncensored truth.” 

“Uncensored truth.” What the hell does that even mean? Anyone who has a passing familiarity with blockchains will know that blockchains can’t guarantee an “uncensored truth” as they only show the transactions that validators committed to the chain. If censorship happened, we’re not going to find out about it, because it isn’t going to be there. “Tamper-evident” would be a more accurate description.

Furthermore, not all blockchains are ledgers, just as not all databases are ledgers.

D-minus, Arizona. See me after class.

Then there’s Colorado, which doesn’t define “blockchains” but, in a bill about state records, just refers to them in plain English. Simple, and, if put in front of a judge, it probably works. Colorado also gets points for the zany title of its blockchain-aware legislation: “an Act Concerning the use of Cyber Coding Cryptology.”

Fabulous. A+.


Connecticut — my home state — gets a solid F for its latest effort. The short story here is that someone managed to convince a member of the state house to introduce a bill that would abolish non-compete clauses in employment contracts where a “blockchain” company was one of the counterparties.

If you wish to see my testimony on the bill you may find it in full here. Apart from being very anti-business, the bill also proposes a definition of “blockchain” so broad that it would capture practically any contract with any employee of any company that employs distributed software architecture of any kind.

It defines “Blockchain Technology” as a

“distributed ledger technology that uses a distributed, decentralized, shared and replicated ledger that may be public or private, permissioned or permissionless and that may include the use of electronic currencies or electronic tokens as a medium of electronic exchange”. 

If you recognize this, it’s because you have seen something very close to it before in Arizona (and Rhode Island, New York, Tennessee and Michigan, among others). The fact that this definition is the law in Arizona doesn’t mean it’s correct.

A blockchain, as any informed person will tell you, is a hash-linked chain of blocks. If we wanted to be a little more specific, we might say “a hash-linked chain of blocks that usually uses (a) digital signatures to authenticate transactions, (b) P2P networking protocols to communicate those transactions and (c) Merkle trees to render the transaction log tamper-evident.”
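That definition is short enough to sketch in code. A toy example – illustrative only, with the signatures, P2P layer and Merkle trees omitted – of why a hash-linked chain is tamper-evident rather than a source of “uncensored truth”:

```python
# Toy sketch of a hash-linked chain of blocks -- not a real consensus
# system. Each block commits to its predecessor's hash, so altering any
# historical block breaks every link after it: tamper-EVIDENT, not
# tamper-proof.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Canonical SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    chain.append({
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
        "transactions": transactions,  # real systems digitally sign these
    })

chain: list = []
append_block(chain, ["alice pays bob 1"])
append_block(chain, ["bob pays carol 1"])

chain[0]["transactions"] = ["alice pays mallory 1"]   # tamper with history
print(block_hash(chain[0]) == chain[1]["prev_hash"])  # False: tampering is evident
```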

The Connecticut bill doesn’t do this. It continues by defining “Distributed Ledger Technology” as a critter which

“may include supporting infrastructure, including blockchain technology, that uses a distributed, decentralized, shared and replicated ledger, whether public or private, permissioned or permissionless, and that may include the use of electronic currencies or electronic tokens as a medium of electronic storage.”

This definition is both duplicative and incorrect.

Not all distributed databases are distributed ledgers, despite the fact that this bill treats them as one and the same on a plain English reading. Not all distributed systems are “decentralized,” either, despite the fact that the bill defines a blockchain system as “distributed and decentralized.” Similarly, not all blockchain systems are decentralized.

The term “decentralized” itself lacks a uniform and concrete definition both (a) in industry and (b) under any law in any jurisdiction of these United States or indeed the world. “Decentralized” is an adjective, like “fluffy” or “happy,” and the word has no place in laws deciding what software should or should not be regulated by the government.

“Why should we care?” I hear you ask.

Well, first, the problem with a sloppy and overbroad definition is that it leads to sloppy and overbroad application to businesses that the drafters didn’t intend to capture.

Second, the fact that Connecticut legislators felt the need to copy-paste other states’ terrible definitions reveals only that they and legislators of other states have absolutely no clue what they’re doing. It’s like stealing an answer key to a test, only stealing the wrong key: if everyone makes the same mistakes, everyone’s probably cheating.

Third and finally, banning non-compete clauses in employment contracts for software firms is a great way to ensure that software firms stay out of your state, and Connecticut needs all the jobs it can get.

Summing up, state legislatures have proved only one thing with bills that define “blockchain” incorrectly: that they don’t understand the technology. Accordingly, they shouldn’t be writing laws that regulate it.

Legislators passing “blockchain” laws should keep it simple in the operative text, add necessary context in the preamble, rely on the Golden Rule of statutory interpretation — that is, follow the literal meaning of the words in a statute, except where the result would be absurd — in case of disputes and leave it at that.

If states want to promote the use of blockchain tech, they need to be advised by people who possess a solid technical understanding of what they’re trying to legislate and of the commercial issues involved in deploying that technology, who can speak clearly about both, and who are independent and disinterested.

If the current laws on the books are any indication, the states have a lot of work to do.

Not Legal Advice, 20 Feb 2020 – The three-year token “Safe Harbor” proposal would be hilarious if it weren’t so serious

This is the latest installment of my column, Not Legal Advice, which now runs as a biweekly column on CoinDesk. As the name suggests, this is Not Legal Advice. Nothing I say is legal advice unless you have paid me a hefty retainer and signed an engagement letter. This installment of Not Legal Advice is the first to have run on CoinDesk. Read it here or read it below. Or don’t. It’s your life. Live it.

1) “Crypto Mom” Hester Peirce proposes token safe harbor

Much ink has been spilled over the last six years about the extent to which U.S. securities laws can and should apply to the sales of cryptographic tokens by protocol developers.

The default position that a conservative law firm will follow is that in the U.S. the sale of a token by a protocol developer before a token network is launched is the sale of a security. Current Securities and Exchange Commission (SEC) policy appears to say that, in the life of any cryptocurrency, there will come a point when the token has been distributed to sufficiently many hands and the network’s architecture is sufficiently distributed – or, as SEC corporate finance director Bill Hinman put it in 2018, “sufficiently decentralized – where purchasers would no longer reasonably expect a person or group to carry out essential managerial or entrepreneurial efforts” – and thus the token ceases to be a security.

SEC Commissioner Hester Peirce, aka “Crypto Mom,” thinks the government should facilitate startups that want to have a go at turning their definitely-are-securities-today into maybe-not-securities-tomorrow. She has proposed a safe harbor to achieve this, whereby token startups would be given a three-year head start to take an ICO coin and turn it into a “decentralized” network, i.e. one which

“is not dependent upon a single person or group to carry out the essential managerial or entrepreneurial efforts… (such that) the tokens must be distributed to and freely tradeable by potential users, programmers, and… secondary trading of the tokens typically provides essential liquidity for the development of the network and use of the token.”

The three-year safe harbor period would allow protocol devs time to

“facilitate participation in, and the development of, a functional and/or decentralized network, unrestrained from the registration provisions of the federal securities laws so long as [certain] conditions are met.”

In other words, under the proposal, crypto projects would be able to sell securities to the public and work towards “decentralization” by, among other things, selling still more of these securities and creating a robust market for these securities, in the hope that engaging in the sale and marketing of these securities will turn them into non-securities, despite the fact that they will function in the marketplace exactly as securities do today at all relevant times.

This proposal would be hilarious if it weren’t so serious.

The most significant issue is that the proposal relies on a standard for “decentralization” which isn’t entirely certain today. Although the SEC has “decentralization” guidelines in print, projects that appear technically indistinguishable receive differing regulatory treatment for reasons that, to industry experts, are not immediately apparent.

Take, for example, Eos, Sia, and Telegram. Eos claims to have raised north of $4 billion in a year-long, rolling ICO that kicked off with the purchase of billboard advertising in Times Square, New York, during the Consensus 2017 conference. Sia also did an unregistered ICO, raising roughly $150,000.

Telegram, by contrast, endeavored to sell its tokens to US persons via the Rule 506(c) exemption of Regulation D. At a predetermined future date, Eos’ and Sia’s presale tokens converted to live network tokens. At a predetermined future date, Telegram’s presale tokens were to convert to live network tokens.

Eos was fined $24 million, or about 60 basis points on $4 billion, and walked away, and its once-were-securities-but-I-guess-now-they’re-not coins continue to be listed on major exchanges. Comparatively smaller offender Sia was fined $250,000, or roughly twice what it raised, and walked away. Telegram, by contrast, drew an emergency injunction in the Southern District of New York and the project has ground to a halt.
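For the avoidance of doubt on the basis-point arithmetic (one basis point is 0.01%), a quick check against the figures above:

```python
# Sanity check on the fine-to-raise arithmetic, using the figures above.
def fine_in_bps(fine: float, amount_raised: float) -> float:
    """Express a fine as basis points (1 bp = 0.01%) of the amount raised."""
    return fine / amount_raised * 10_000

print(fine_in_bps(24_000_000, 4_000_000_000))  # Eos: 60.0 bps, i.e. 0.6% of the raise
print(fine_in_bps(250_000, 150_000))           # Sia: ~16,667 bps, i.e. ~1.7x the raise
```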

Of course, there are reasons why the SEC might be friendlier to some startups and less friendly to others. For example, startups that approach the SEC and cooperate will be treated more gently than those that do not. But, fundamentally, the real problem here is that the SEC’s “decentralization” test, as currently used, and as proposed to be used in the future, is unquantifiable to the point of being unconstitutionally vague.

There is no agreed statutory or technical definition of what makes a project more or less “decentralized.” Even highly technically competent (and prominent) developers and industry marketers cannot agree on a uniform definition of the term, which more often appears as marketing-speak than as a definite, measurable quality. I struggle to see why the government should be in a better position to define it.

For this reason I would struggle to advise a client seeking to adhere to the “decentralization” test whether they are decentralized or not.

The only thing that is made clearer by this proposal is that, to paraphrase an industry colleague, “’blockchain technology’ and Mom & Pop investors don’t have lobbyists. Coinbase does.”

This proposal is fantastic for startups who need capital, market venues who need trading volumes to survive, and the lawyers who advise them. For this reason I don’t expect that many U.S. law firms will raise significant objections to this proposal which, if adopted, would almost undoubtedly be the single greatest creator of transactional legal work since the invention of securitization.

It would facilitate a headlong rush of issuers into the lightly-regulated crypto-capital markets as every company in the world sought to obtain American investors’ capital without selling them so much as a single basis point of equity or taking on a single dollar of debt, all without needing to sort out the details for 36 months.

If that’s the rule the SEC wishes to adopt and the result it wishes to bring about, that’s the Commission’s prerogative. I might suggest that a simpler approach would be for the government to approach tokens like it approaches Bitcoin: treat coins sold in an initial coin offering as something sold, a securities sale, and treat a mined coin as something made, a mere commodity. This would still allow a great many experiments in blockchain tech to flourish without creating incentives for every company in America to launch its own token.

2) Crypto scam numbers on the rise

The Wall Street Journal reports on 8 February:

Seo Jin-ho, a travel-agency operator in South Korea, wasn’t interested in exotic investments when a colleague first introduced him to PlusToken, a platform that traded bitcoin and other cryptocurrencies. But the colleague was persistent…
His investment grew at a dazzling rate. He invested more—a lot more. In less than five months, he bought $86,000 of cryptocurrencies, cashing out only $500.

The story ends in a familiar way, with Seo Jin-ho losing all of the money he invested.

Crypto-analytics company Chainalysis estimates that after a fairly busy 2017 in which $1.83 billion was “invested” in crypto scams, 2018 was a quieter year. This is perhaps understandable given the noises that the SEC made from January through November.

In 2019, however, a staggering $3.99 billion – that’s billion with a B – was reportedly lost to crypto-investment scams. This suggests that regulatory intervention in 2018 was not aggressive enough to deter the continuing growth of “scam” activity.

Clamping down on scams is almost universally understood as an important prerequisite to mass adoption and acceptance of cryptocurrencies as a viable payment and financial services technology. When asking why investors seem so uniquely susceptible to crypto scams, it bears mentioning that each of the top ten coins in circulation was issued otherwise than through a regulated channel, with the SEC and Department of Justice, at least as far as the public is aware, declining to take action against ethereum, tether, XRP, litecoin, Binance Coin, bitcoin cash, bitcoin SV, and tezos, and taking a $24 million punt on EOS, despite there being identifiable promoters for each project (usually a notionally non-profit foundation but sometimes a for-profit entity).

The absence of an adequate regulatory regime means that a new “scam” project is virtually indistinguishable from one that has shed that label through accidental success. The marketing material for, say, ethereum and for any “scam” currency are primarily found on informal channels such as internet fora and Twitter promotional posts rather than in the form of an offering circular. The closest thing to “legitimacy” that any particular project can obtain is a listing on Coinbase or Binance, commercial actors with commercial interests that call for them to list and trade more coins in greater volumes, regardless of the gain or loss to investors.

A “safe harbor” that made it more difficult for retail investors to distinguish bona fide projects like Blockstack from known scams like OneCoin for a three-year period would likely undo much of the progress towards mainstreaming crypto adoption that has been made to date, which has seen large institutional players like Bakkt or Fidelity Digital Assets enter the space.

Not Legal Advice, 13 Feb 2020: UK prepares to nuke its tech industry with “Online Harms” proposals

This is not legal advice, which is why the blog post series is called “Not Legal Advice.” Unless you’re paying me, I’m not your lawyer. Proceed accordingly.

First, it’s 2020. It seems like yesterday it was 2006. Time flies.

Second, here’s the latest from the United Kingdom. As reported by the Independent:

Social media companies will now be regulated by broadcast watchdog Ofcom, giving it the ability to fine and police companies such as Facebook, Twitter and Instagram.

Digital media and culture secretary Nicky Morgan announced that under the new legislation tech companies would now be held accountable for the content on their platforms.

The new legislation means companies such as Facebook and YouTube will be judged on their “duty of care”, and be liable for exposing users to illegal or damaging content. Until now, companies including TikTok, Snapchat and Twitter have for the most part been self-regulating.

tl;dr the UK government is planning to implement the Online Harms White Paper (the “White Paper”) it published in April of last year, despite the fact that the White Paper is dystopian and insane.

I could go chapter and verse about the UK Online Harms White Paper proposals, as some have done. However, I don’t want to do that, as much of the document, after a review, revealed itself to be a rationalization for implementing draconian Government control over digital speech. If you wish to read the document – which runs to 102 pages – you should. However, I can summarize the practical effects of the document very briefly as follows:

1) The British Government has a list of “harms” that it wants to expunge from the Internet

These “harms” fall into two general buckets.

First, there is a list of “harms with a clear definition,” all of which will be banned. These include CSAM, immigration crime, extreme/revenge porn, harassment, hate crimes, encouraging suicide, inciting violence, and the sale of illegal goods.

By way of comparison, some of this content is illegal in the United States where most of these technology companies are based.

Some of this content is not illegal in the United States.

Using online platforms to advocate for things can be illegal in the UK under several statutes, including, inter alia, the “encouragement” offences under the Terrorism Acts (which can be committed recklessly). Generally speaking, however, advocacy is constitutionally protected in the United States. There are certain limits around material support for designated foreign terrorist organizations, but insofar as the domestic political situation is concerned advocacy falling short of incitement is fair game.

While the U.S. has a concept of “hate crimes,” “hate speech” – punishable in England by Sections 4, 4A, and 5 of the Public Order Act 1986, as amended, and the Communications Act 2003 for online communications – is not one of them. Simply uttering a hateful idea is squarely within the protection of the First Amendment – “the proudest boast of our free speech jurisprudence is that it protects ‘the thought that we hate,’” wrote Justice Alito in Matal v. Tam (2017) – and the same goes for printing it online. Mind you, the presence of hateful speech online might go to proving motive for some other underlying offense, e.g. if your speech is threatening or interferes with a candidate for elective office, or the like; in the case of hate crimes, motive is usually an aggravating factor considered at the sentencing phase.

But hate speech per se is not a crime in these United States. Hate speech per se is, under several different content-based speech statutes, capable of being a crime in England.

Similarly, “incitement” as such is not necessarily illegal to the extent that the incitement is sufficiently remote from the possibility of actual violence being carried out. The applicable rule comes from the “imminent lawless action” test set down by Brandenburg v. Ohio, for example, and advocating in favor of violence or encouraging suicide can be constitutionally protected (in the latter case, in most instances but not all – see, e.g., the manslaughter conviction of Michelle Carter, which – surprisingly – was denied cert by SCOTUS last month). This is down to the fact that the U.S. First Amendment was designed to abolish forever English political crimes like seditious libel and Scandalum Magnum (an ancient fake news misdemeanor that was seldom used, as it required the prosecution to prove the publications were false – which seditious libel did not require).

U.S. technology companies are not obliged to take down or remove illegal material, subject to narrow and specific statutory exceptions, and are immune for its existence on their servers as long as they do not “materially develop” the content, due to the operation of Section 230 of the Communications Decency Act. That notwithstanding, they are obliged to respond to legal process from law enforcement and/or civil litigants when served with it. Accordingly tech companies with US operations are well placed already to answer legal process in relation to suspected offenses of these types and will have routine correspondence with state and federal law enforcement to respond to subpoenas, search warrants and emergency disclosure requests.

Companies will also have efficient means in place to deal with CSAM. When an interactive computer services provider in the U.S. detects CSAM, they’re already subject to a mandatory reporting obligation to the National Center for Missing and Exploited Children (“NCMEC”, pron. “Nick-Mick”) and must put in place a legal document hold for 90 days pending receipt of legal process from the FBI or other law enforcement agency.

Second, there is a list of “harms with a less clear definition,” including cyberbullying, trolling, extremist content, “disinformation,” violent content and advocacy of self-harm.

These categories of speech are, generally speaking, not subject to prior restraint in the United States and in some cases are in fact protected speech under the First Amendment to the U.S. Constitution.

In some cases (bullying, intimidation) the position is slightly muddier as we have to ask when this type of conduct crosses the line from free speech into a common law offense like stalking or threatening, and there may be private causes of action available to the victim (libel, emotional distress, intrusion upon seclusion) which can also be wielded by the victim in a court of law against the perpetrator.


2) The British Government will force tech companies to police these “harms,” even where they are legal in England, and including many “harms” that are totally legal in the United States

As things currently stand, for the most part – aside from the mandatory reporting obligation mentioned above – US tech companies are not required to police user content. They’re required to respond to legal process from US courts. They’re required to respond to subpoenas that have been domesticated in a state where they can be served. Apart from that, they’re basically free to let their users sling whatever shit they want to at one another and are immune from civil liability for doing so under Section 230 of the Communications Decency Act, which I explain here.

The British government has decided:

The government will establish a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.

Compliance with this duty of care will be overseen and enforced by an independent regulator.

All companies in scope of the regulatory framework will need to be able to show that they are fulfilling their duty of care. Relevant terms and conditions will be required to be sufficiently clear and accessible, including to children and other vulnerable users. The regulator will assess how effectively these terms are enforced as part of any regulatory action.

The regulator will have a suite of powers to take effective enforcement action against companies that have breached their statutory duty of care. This may include the powers to issue substantial fines and to impose liability on individual members of senior management.

This proposal is actually very similar to certain provisions in the (likely unconstitutional) proposals being promulgated in the U.S. by Senator Lindsey Graham and Attorney General William Barr, cynically named the “EARN IT” Act.

People and companies complained. The British replied:

The Online Harms White Paper set out the intention to bring in a new duty of care on companies towards their users, with an independent regulator to oversee this framework. The approach will be proportionate and risk-based with the duty of care designed to ensure companies have appropriate systems and processes in place to improve the safety of their users.

The White Paper stated that the regulatory framework will apply to online providers that supply services or tools which allow, enable or facilitate users to share or discover user-generated content, or to interact with each other online. The government will set the parameters for the regulatory framework, including specifying which services are in scope of the regime, the requirements put upon them, user redress mechanisms and the enforcement powers of the regulator.

The consultation responses indicated that some respondents were concerned that the proposals could impact freedom of expression online. We recognise the critical importance of freedom of expression, and an overarching principle of the regulation of online harms is to protect users’ rights online, including the rights of children and freedom of expression. In fact, the new regulatory framework will not require the removal of specific pieces of legal content. Instead, it will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.

To ensure protections for freedom of expression, regulation will establish differentiated expectations on companies for illegal content and activity, versus conduct that may not be illegal but has the potential to cause harm, such as online bullying, intimidation in public life, or self-harm and suicide imagery.

Couple of things going on here.

First, the British government claims that it is walking the proposal back because it promises to police only illegal speech and to leave other types of legal speech, e.g. “disinformation” and “trolling,” alone. (TBD pending draft regulations and codes of practice.) The problem of course is that British speech codes are so vaguely drafted that any speech which is even mildly offensive can be, and is, caught within the definition of “illegal content.” There are reported cases where reading verbatim from the works of Winston Churchill or from a Bible has been enough to result in arrest. We’re not dealing with a free country here.

The Public Order Act 1986, Malicious Communications Act 1988, Communications Act 2003, Terrorism Act 2000, Terrorism Act 2006, and the Racial and Religious Hatred Act 2006, Part 3 would all be struck down in the United States, whether for not being content-neutral, for overbreadth, or for vagueness (see e.g. the ratio of Norwood v. DPP, which is the current state of the law, versus the ratio in Redmond-Bate v. DPP, which preceded Norwood, represented how the law on offensive speech stood in 1999, and was arguably overturned by Norwood). We have seen, time and again and as I expand on more fully here, what would be fairly inoffensive or even benign speech in the U.S. draw a conviction from an English magistrates’ court which is upheld on appeal.

Second, although the framework “does not require the removal of,” i.e. does not create a regime of mandatory takedown orders for, legally compliant content (subject of course to the proviso that virtually any offensive speech is capable of being illegal in the UK), the framework does not need to create a mandatory takedown regime for the British government to be able to force companies to remove legal content of which it disapproves. This has the advantage of being more plausibly deniable: copies of specific orders saying “take down this post” signed by an Ofcom official have a “Ministry of Love” vibe to them and won’t look good in the press, whereas a policy manual saying “this type of post is harmful” won’t offend the nanny staters quite as much.

To the extent that a code of practice adopted by Ofcom penalizes social media companies for hosting speech which is highly offensive but not illegal, social media companies will be obliged to remove the content if they wish to avoid the penalty.

See e.g. the UK’s Counterterrorism Internet Referral Unit, or CTIRU, operated by the Met. CTIRU sends notifications to interactive computer services providers of content the British government considers illegal under antiterrorism laws, generally extreme political content. CTIRU does not, however, issue process (e.g. search warrants) or orders (e.g. RIPA notices) with a view to ascertaining the identity of the sender and enforcing the law in relation to that content.

CTIRU is a censor. The consequence of the notification is that the provider could be held liable in a British court for the content; the e-commerce directive provides online service providers coverage similar to the US Section 230, but there is a proviso under Art. 14(1)(a) of the e-commerce directive that “actual knowledge” of illegal content removes that immunity.

So by conceding that the

new regulatory framework will not require the removal of specific pieces of legal content

…nothing has really been conceded at all. As we said above, it’s not hard for a British prosecutor to argue that speech which offends – no matter the content – is illegal. And a promise that Ofcom won’t be able to compel the removal of specific “offensive but legal” content is not the same thing as a promise by the government that it won’t allow Ofcom to penalize social media companies for hosting it. Although the existence of penalties for failure to comply with the Online Harms regime, plus notification under Article 14(1)(a) (which still applies in Britain during the transitional period), may not constitute a “political content takedown order,” it effectively amounts to a strong political takedown request or suggestion, with penalties possible if enough such suggestions are ignored.

Third, it is still unclear what obligations companies will have to actually comply with. The Government says that

regulation will establish differentiated expectations on companies for illegal content and activity, versus conduct that may not be illegal but has the potential to cause harm

but of course, if we look back to the White Paper, we’re not going to know what those obligations are for some time. It seems that any Bill will delegate most of the authority for developing these responsibilities to Ofcom, the British telecommunications regulator, which will then

“[set] out what companies need to do to fulfill the duty of care, including through codes of practice” and take “prompt and effective enforcement action in the event of non-compliance (as set out in Chapter 6).”

These powers may include, inter alia, the power to levy fines, compel additional information regarding the breach of the practice code, compel “third party companies to withdraw any service they provide that directly or indirectly facilitates access to the services of the first company, such as search results, app stores, or links on social media posts,” impose mandatory ISP blocking, and create new crimes for failure to obey Ofcom’s diktats.

This is truly Orwellian. The British government is suggesting it should have the power to order companies falling under its jurisdiction to destroy any other company which refuses to obey British content standards but is following the content standards of its home jurisdiction (otherwise it would have been shut down already by domestic authorities).

If enacted, this would be a frontal attack on the First Amendment.

3) Global enforcement will be complicated and likely ineffective

Speaking as one who advises small companies, all of this compliance is going to be extremely burdensome and make the U.S. look like a much more attractive place to open up shop online (which it is already, but will be more so if these proposals are implemented).

Much of this will be hard to enforce. The real worst-of-the-worst baddies will not wind up using services like Facebook but will likely wind up running their own metal and rolling out their own cryptosystems. (Baddies using mainstream services give away their IP, user agent strings, and other identifying data which makes them easy to find.)

Since the baddies can migrate off of Facebook as easily as one logs into another service using OAuth, the only real, effective purpose of this proposal is to turn Ofcom into an online morality regulator that makes social media firms enforce British legal conventions on speech and conduct… without the British having to expend police and court time and resources to get the desired result. As more and more decentralized content providers, e.g. ActivityPub or LBRY, crop up, it will be impossible to find a corporate entity to hold to account for web content, which will be sharded and stored overseas.

If enacted it is therefore likely to only affect essentially law-abiding but politically edgy British domiciliaries, like Count Dankula, using British services or US services that are big enough to want to maintain corporate presence in the UK. So all of the big players, but by no means most of the players in numerical terms.

Companies in the United States don’t have to obey British court orders. To the extent a British regulator sought domestication of a British regulatory determination or court order in a U.S. state such that it would become binding on a U.S. person once served, under no circumstances would a British determination or court order survive that process if it were unconstitutional.

Most of the Online Harms regime, as proposed, would be unconstitutional on its face and virtually all of the Online Harms regime, as proposed, is likely to be unconstitutional as-applied. Orders issued thereunder and penalties levied will therefore be unenforceable before U.S. courts (in e.g. an MLAT procedure or where seeking to enforce a money judgment).

My prediction is that many online companies will choose to re-domicile or withdraw from the UK before subjecting themselves to this hugely burdensome regulatory regime. If this regime is enacted, Ofcom will essentially attempt to serve as the world’s morality police; it will not, however, have any power outside of the UK’s territorial boundaries.

If the UK feels like destroying its tech industry with burdensome regulation and extremely labor-heavy (and legal advice-heavy) compliance obligations, go right ahead, knock yourselves out. More billable hours for me.

Three facts belie the stated purpose of the proposal, to “impose a duty of care to protect social media users from online harms”:

  • First, the UK government and its counterparts in the USA already have adequate powers to address serious crime.
  • Second, both governments have inadequate powers to restrain trolling and offensive political rhetoric – in the US because of the First Amendment, and in the UK, because there are not enough prosecutors and police to investigate, try, and convict every Internet troll that violates the provisions of the Communications Act 2003.
  • Third, trolling and offensiveness are what users want. If users didn’t want to encounter trolls and edgy politics on the Internet, they would not be on the Internet or have social media accounts. If they dislike the experience they are perfectly capable of either logging off or using their block buttons, or in extreme cases, bringing an action against other internet users.

So we see the purpose of the proposal is not to restrict content to protect the people, who are perfectly capable of protecting themselves. It is to protect the state. In particular it serves to protect those programmatic objectives of the state which are most subject to vitriolic criticism on the Internet, as well as adjacent “offensive” content, in relation to which prosecutors will use the broad discretion granted to them under English speech codes to suppress and terrorize anyone who dares possess and express a controversial, irreverent, or iconoclastic thought.