Open access publishing platforms and unlawful threats

Image credit Pixabay. Licensed under the Pixabay Licence.

In the last week or so, a number of would-be and actual criminals have abused open-access social media to disseminate their messages or issue terroristic threats, in particular on the 4Chan and 8Chan imageboards.

Around the world, the response from non-U.S. governments has been to censor sites with a free speech emphasis – 8Chan, 4Chan and ZeroHedge (but, notably, not Facebook) have been banned by New Zealand and Australian ISPs. Canada has joined them in calling for increased regulation. For the better part of the current Parliament, Britain has been considering the Online Forums Bill, which would make platform providers liable for the content posted by their users. What all these solutions share is a common emphasis on making communications platforms responsible for the content shared thereon, and on the creation of automated censorship programs at each of these companies to prevent any possibly offensive content from making it from draft form to publication.

In the U.S., by contrast, the First Amendment to the U.S. Constitution prohibits such prior restraints on speech, and Section 230 of the Communications Decency Act renders social media platform providers more or less absolutely immune from the speech torts or speech crimes of their users. Accordingly, because there can be no legal government response to “hate speech” online, the response from political activists in the U.S. has been to seek to convert ordinary folks to a pro-censorship position.

Usually, this advocacy takes the form of proposing to ban the social media companies whose platforms were used to issue the threatening language. Take, for example, the blisteringly hot takes of New York Times Opinion writer Charlie Warzel proposing exactly that.

These takes are very poor, and show that those who write them don’t actually understand the problems with data protection, data disclosure, and social networks. Under the next few headings, I’m going to explain why.

1. Posting threats is already illegal, and banned on most (if not all) social media platforms

I’m admitted to practice in two countries. In both of them, posting a terroristic threat on the Internet will result in the speaker being arrested. I know of no website which says it wants this material.

2. The wheels of justice grind slowly, but they do grind onward

Western society is not a panopticon, at least not yet, so we do not have the ability to intercept a message midflight, semantically analyze it and immediately have men with guns and black helicopters converge on the user who posted it.

Nor should we wish to have that ability. The emphasis many commentators (and indeed countries) have placed on First Amendment-type issues in the wake of these incidents ignores the fact that, to stop an unlawful post from finding its way to the surface, either (a) all open-access publishing platforms would need to be eliminated and every single post would need to be monitored and assessed for legal compliance before going live, or (b) you would need to effectively brainwash every Internet user in the world so that none of them were ever inclined to post a threat on the Internet.

An early-intervention approach to speech would be incompatible with not just conceptions of free speech, but also the broad social consensus of how our system of laws is meant to work. Our laws serve a deterrent function: if you post violent threats online, then you are liable to be sentenced to a lengthy term of imprisonment. This approach to controlling and punishing speech places the onus of legal compliance on an internet user, rather than the platforms they use, and maximizes the freedom of compliant internet users to say what they like, at least insofar as those users stay within well-defined legal boundaries.

The downside to the deterrent approach is that human beings have a very warped conception of time-preferences, i.e., they lack the discipline to understand that consequences in the future will arise from being naughty in the present. Time preference issues are why, for example, I just ate a bagel for lunch when I know I need to lose ten pounds, or why, in a more extreme example, people commit violent crime to settle a street vendetta without considering that doing so can incur a lengthy term of incarceration.

This is also why, at least to the extent I have considered this issue, people post violent threats online. There’s some itch that these people need to scratch, psychologically, so they scratch it, consequences be damned.

Government responses in the aftermath of such an event will be deeply unsatisfactory to almost everyone involved, starting with the victims. Justice can be slow. But the delay between the issuance of a written threat and the police response, given how our justice system works, is not so much a First Amendment issue as it is a Fourth Amendment and privacy rights issue. After all, as put in Touch of Evil, “a policeman’s job is only easy in a police state.” After a threat is detected, most jurisdictions have procedures for pulling down user data from social media companies – in the case of the UK, for example, a RIPA request, and in the case of the US, a search warrant or subpoena issued pursuant to the federal Stored Communications Act.

Some of these statutes act to affirmatively protect user data; the Stored Communications Act, for example, prohibits companies from handing over user data to government agencies unless they’ve either been served with a court order, or believe in good faith that an emergency threatening serious bodily injury or life exists.

Recent events are not the first time that online imageboards have been used for this purpose. If we’re using 4Chan as an example, see e.g. the arrest and 2008 conviction of Jake Brahm for posting a deranged attempt at “copypasta,” a type of long-form text-based Internet meme, or more recently the 2018 arrest of an Indianapolis man for threatening to attack participants in an “alt-lite” free speech rally in Boston in 2017.

3. “Censorship” is a lazy default answer for writers who don’t know how cross-border data protection issues actually work

Twitter is not the world.

When you ban someone from a social network, they do not disappear. When you censor ideas from a social network, they are not forgotten.

Legislative proposals that put censorship front-and-center ignore the fact that there have always been, and there will always be, high time-preference individuals living among us who are willing to break our social compact irrespective of the long-term personal cost to themselves.

Every day, in every modern country, our fellow citizens rob people at knifepoint/gunpoint in our cities. They steal. They beat their children. Our fellow citizens do horrible things, to each other and even to themselves, with the foreknowledge that doing those horrible things will likely result in legal sanction. Unlawful threats have been part of that equation since time immemorial. Banning publishing platforms for the sins of a few sporadic users would be like banning sidewalks because muggers rob people on sidewalks.

In terms of finding workable solutions, the answer is not to ban the publishing platforms but rather to make our law enforcement systems mutually intelligible. Such an approach will protect honest citizens and not erode civil liberties while also rendering law enforcement a more credible deterrent to violent online threats.

Currently, it is difficult for U.S. companies to share stored communications with law enforcement agencies overseas, because they are prohibited from doing so by federal data privacy law; only in emergency situations can disclosure be made. In non-emergency scenarios, foreign law enforcement agencies seeking U.S.-based user data must go through a months-long Mutual Legal Assistance Treaty, or MLAT, procedure to ensure overseas requests comply with U.S. constitutional requirements around due process and freedom of speech.

Because the U.S.’ free speech rules are more pro-speaker than those of most, if not all, other countries on the planet, overseas governments – particularly New Zealand, Australia, Britain, and Canada – are the loudest in calling for censorship of American-run web businesses. These calls for censorship strike me as a transparent attempt to do an end-run around U.S. privacy and free speech protections and to exert influence over the content and tone of the Internet, a global system that no country can, on its own, control.

How, then, do we empower law enforcement globally to combat online threats without using tech companies as de-facto content censors? The answer is not to take away people’s rights to speak. Rather, it’s to give people more free speech rights in places like Australia and New Zealand. Doing so would alleviate the U.S. political concerns that are likely to prevent entry into executive agreements for data sharing with third countries under the CLOUD Act, such as the proposed agreement with the UK, since the CLOUD Act requires data sharing arrangements between countries to “afford [as] robust substantive and procedural protections for privacy and civil liberties in light of the data collection and activities of the foreign government” as there would be domestically in the United States.

In my professional opinion as a lawyer admitted in both England and the United States, the UK and the Commonwealth countries – which follow the UK’s pattern of constitutional monarchy rather than the U.S.’ pattern of etching Enlightenment values in stone – do not adequately provide those protections.

By raising the baseline for free speech rights overseas, foreign governments would make it considerably easier to obtain information from U.S. companies to pursue the worst offenders. What they would give up, in exchange, is the ability to prosecute those who espouse uncomfortable but non-threatening wrongthink at home that is legal in the United States, such as the speech of Catholic journalist Caroline Farrow, who was placed under investigation by the British police last week.

Entrusting our citizens with liberty, and making them face the consequences of violating that trust, is consistent with the English-speaking world’s most cherished political traditions. Granting Commonwealth citizens more rights is the prerequisite to enacting effective data sharing arrangements with the United States that will not face objections from U.S. companies and will help to address modern online threats. Taking people’s rights away will result in further incompatibility between the U.S.’ and other countries’ legal systems, and deny law enforcement the ability to mount effective and rapid responses to cross-border criminality.

Such a solution doesn’t enjoy the best “woke” optics. But it benefits from the fact that it might actually work.

Facebook’s new 10-digit security hole

On Friday, we learned that Facebook now allows anyone to look up a user’s profile by the cell phone number that user provided for two-factor authentication.

In so doing, Facebook has just created a massive security hole which exposes every single one of its users to life-alteringly shitty hacks. I’m frankly astonished nobody internally at that company thought about this before pushing this feature.

“What’s the issue?” I hear you ask. The issue here is that your average workaday user who is even a little security minded will not only use their cell phone to do two-factor authentication for their Facebook login, but will also use the same cell phone for every other two-factor login or password recovery system they have, including, for example, their e-mail account or their bank. This is not an intelligent approach to security, as using cell phones for two-factor authentication is, to put it mildly, not even remotely secure.

“How so?” You inquire. Well, the answer is because cell phone companies are run by idiots when it comes to security, so even if you leave specific instructions with your provider to not port your SIM without a PIN and photo ID, smooth-talking criminals can still convince telco employees to do it anyway, with the result that the crook obtains control of your phone number – and can receive any communications sent to it.

This is not a theoretical problem. Cast your mind back to mid-2017, coming off the back of the Bitcoin boom. One day, I get a really weird Twitter message from my friend @twobitidiot, aka Ryan Selkis, asking me if I can lend him some Bitcoin.

Now, as Ryan knows, I am probably the filthiest nocoiner – i.e., non-Bitcoin investor – in existence, in large part because (a) when I got into crypto I was poor and young and (b) I was 100% behind permissioned blockchain implementations, which the startup I co-founded invented. Investing in shitcoins would have been uncouth, a betrayal of my most deeply-held values and of my firm belief that global, systemically-important financial institutions love us and want us to prosper.

I was naturally suspicious of his inquiry. I had good reason to be: Ryan’s phone number had been ported and his accounts hijacked.

This story was repeated over and over again last year. People got their phone numbers ported. The hackers logged in to all of their accounts. The hackers took all of their stuff. Lather, rinse, repeat.

Nobody has really gotten to the bottom of how these phone numbers were ported with such laser-like efficiency. Personally, I think Facebook’s service played a part. At the time, I remember that I and others were getting bombarded with friend requests from slick-looking fake CEOs with good hair claiming to helm fake startups in SE Asia. As a general rule, I don’t add people on Facebook who I haven’t met. Other people do, and a slick CEO of an edgy tech startup is a great person to make friends with, especially for folks in crypto looking to expand their networks. As these friend requests rolled in, they began to look increasingly credible as more and more crypto people I know appeared to be “friends” with these accounts.

This meant that if crypto people had posted their cell phone numbers as “friends-only” or “friends of friends” on their accounts, the fraudsters had their numbers, too, and could start creeping their way towards the bit/shitcoin hoards these people were thought to hold on crypto exchanges and the like. This is some serious business.


Which brings us to the problem of Facebook making cell phone numbers searchable by default – even when search is restricted to a user’s friends or “friends of friends,” and even when the user wants to keep their phone number private (the “only me” setting). (Edit: the cell phone lookup is in fact set to be shared with “everyone” by default, which is crazy; and not even the most restrictive option, friends-only search, is protective enough, since fraudsters can and do find their way onto “friend” lists.)

Due to this, to be blunt, Facebook’s new search feature will allow fraudsters to use Facebook to verify the identities of cell phone subscribers, even where Facebook users have locked down their cell phone numbers on their profiles to avoid this very outcome. In permitting anyone to search cell phone numbers, Facebook has compromised the security of every individual user of its service in the name of convenience.

All someone needs to do, conceivably, to exploit this new “feature” from Facebook is to punch in random cell phone numbers until they hit paydirt and discover a corresponding identity. If the user isn’t particularly security-minded, they’ll have birthdates and addresses publicly viewable, too. After the target is identified, the hacker simply calls up the user’s cell service provider, and social engineers a SIM port. Boom. All SMS-based 2FA that person used with that number, on any service, is now compromised. Including the 2FA for the user’s Facebook account.
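Just how bad is “punch in random cell phone numbers until they hit paydirt”? A back-of-the-envelope sketch makes the point – the numbers-per-area-code figure and the lookup rate below are my own illustrative assumptions, not Facebook’s published limits or measured figures:

```python
# Rough enumeration math for a phone-number lookup sweep.
# Assumptions (mine): roughly 8 million assignable numbers per US
# area code, and a scraper sustaining 100 lookups per second across
# a pool of throwaway accounts and proxy IPs.
numbers_per_area_code = 8 * 10**6
lookups_per_second = 100
seconds = numbers_per_area_code / lookups_per_second
days = seconds / 86400  # 86,400 seconds in a day
print(f"~{days:.1f} days to sweep one entire area code")
```

On those assumptions, an attacker who targets a single area code – say, one covering a city full of crypto people – maps phone numbers to identities in under a day. Per-account rate limiting only helps if it is actually enforced across the whole scraper pool.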

There are a couple of solutions a Facebook user can adopt, in the meantime, to help ameliorate this issue. One option is to remove your phone number and not use SMS 2FA, or to switch to a number from a service like Google Voice that is not susceptible to SIM-port social engineering. Another is to lock down the settings to the extent you can (searchable to friends only) and hope that (a) your friends don’t get hacked and (b) you haven’t accidentally friended anyone who is a hacker or a fake – which, at least for some of my buddies in crypto, is a day late and a dollar short.
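On the first option: “authenticator app” codes are just TOTP (RFC 6238) – a shared secret plus the current time, with no phone number involved, so there is nothing for a fraudster to SIM-port. A minimal sketch using only the Python standard library (the secret below is the RFC’s published test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """RFC 6238 time-based one-time password (HMAC-SHA1 flavor)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second windows since the epoch.
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# time T=59 seconds, 8 digits -> "94287082".
RFC_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(RFC_SECRET, digits=8, now=59))  # prints "94287082"
```

Because the code is derived locally from the secret and the clock, the telco never sits in the loop – which is exactly the property SMS-based 2FA lacks.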

What these solutions share is a weakness: most of Facebook’s userbase is blissfully unaware of the risks of SMS-based 2FA, so they won’t take these measures, or won’t implement them effectively.

I’m pretty sure I’m not wrong about this, but if I am, I’ll be happy to discuss it on Dissenter. It strikes me that the engineering boffins over at FB – not being cryptogeeks – are almost totally blind to the risk they’ve just created for hundreds of millions of users as a result of SIM porting. It also strikes me that the best way to address that risk is to kill the feature.

After they do, we all need to seriously re-evaluate our relationship with any interactive service that asks us for our mobile phone numbers before we can use it. A company of Facebook’s size has just made an error so elementary that a lawyer who can barely program “hello world!” in Python picked up on it, while all of its engineers and security professionals didn’t.

Ethereum is (arguably) doomed to be centralized

I will preface this blog post by saying that my aim here is to set out and list some suggestive, not definitive, evidence I see of increased centralization in the Ethereum ecosystem. It sets out a hunch, not a mathematical proof. If you disagree with my opinion, that’s your prerogative. If you want to convince me otherwise, go dig up the hard data and prove me wrong.

So. Is Ethereum centralized? Isn’t it? The answer, I suspect, depends on who you ask. In the spirit of generating debate, I have recently tweeted about the increasing centralization seen in the Ethereum cryptocurrency ecosystem – a hard-hitting and dynamic, yet tender and, somehow, ineffably heartwarming contribution of mine to the corpus of Twitter crypto literature.

Ethereum people were none too pleased.

For those of you who are annoyed with me for this series of tweets, please understand two things.

First, when I start referring to the legendary feats of exploration undertaken by the Marmot Star Empire, that’s generally a good sign that I am pulling your leg.

Second, the blanket assertion that Ethereum is a decentralized system, accepted as gospel by most of the Ethereum ecosystem, is, at the very least, arguable. There might have been a time in the past, say 2015-16, when the network could have tolerated the loss of a large number of rank-and-file nodes, selected at random, without much of an impact on the network’s overall functioning. Today that is no longer true.

I would eventually have sat down to write a blog post on the subject but, very fortunately, Twitter user @PaulApivat took the time to read my tweets and summarized them for me in his very considered reply, which we should all read. Paul more or less boils the “Eth is centralized” argument down into five pillars:

  1. Ethereum is reliant on a handful of private companies to survive.
  2. Block reward cuts can be agreed seemingly without objection.
  3. Tokens likely remain in few hands, and so, accordingly, does ecosystem influence.
  4. Three entities can collude to reduce mining rewards.
  5. Infura dominates the market for node infrastructure available to developers.

Which I would, if starting from scratch, condense down to four:

  1. Tokens. The pre-mine looks suspicious as hell. Concentration of large amounts of Ether wealth grants the holder of that wealth outsize influence over the supply of the coins that can be brought to market, including the ability to crash the currency. As put by Muad’dib, “the power to destroy a thing is absolute control over it.”
  2. Nodes. The fact that Ethereum has not solved scaling means that centralized service providers, currently Infura, exercise outsize influence over node infrastructure. This is because an archive node now pushes 2 TB in size. The fact that everyone relies on Infura for the system to work, combined with the inability of core devs to find credible scaling solutions, means node counts are falling quickly – and the result is effective centralization in Infura’s hands (which at the end of the day is really just repackaged Azure). (Note that failure to solve scaling is in the interests of the centralizers, as it favors them, so they have little incentive to find a solution. Perhaps this is an accident, perhaps not, but it’s difficult to say for sure from outside.)
  3. Clients. There are 13 (or more), but the vast majority of nodes run one of two (Geth or Parity).
  4. Too-easy alignment of interests and too-rapid decisionmaking. Major changes like adjustments to mining rewards – changes which would be anathema in other, more longstanding competitors like Bitcoin – are quickly agreed with no objections on the part of major ecosystem players. A lack of public disagreement for changes on that scale makes it likely that those changes are informally agreed before they are formally proposed.
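At least two of these pillars – client share and token concentration – can be put into numbers rather than argued about. One standard yardstick, borrowed from my old antitrust days, is the Herfindahl-Hirschman Index (HHI): the sum of squared market shares. The node counts below are placeholders I made up for illustration, not measured figures:

```python
def hhi(counts):
    """Herfindahl-Hirschman Index: sum of squared shares.

    Ranges from 1/N (an even split among N players) up to 1.0 (monopoly).
    """
    total = sum(counts.values())
    return sum((n / total) ** 2 for n in counts.values())

# Hypothetical client distribution -- substitute real crawler data
# (e.g., a node-count survey) before drawing any conclusions.
clients = {"geth": 6000, "parity": 3000, "others": 1000}
score = hhi(clients)
# Antitrust practice treats an HHI above 0.25 as highly concentrated.
print(f"HHI = {score:.2f}")
```

The same function applied to token balances per address (or per identifiable holder) would quantify pillar 3. The point is that “decentralized” is a measurable claim, and the burden of producing the measurements sits with the people making it.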

So is Ethereum centralized or decentralized?

I don’t know. But then again, neither do the folks who vociferously assert that Ethereum is the great, decentralized World Computer.

Setting definitional problems to one side (what does “decentralized” actually mean?) I think it is still possible to have a productive discussion about this system based on the commonly-held understandings of “centralized” and “decentralized” among cryptocurrency users and observers.

Earlier in my career, I did a stint in anti-trust litigation. During that time I learned that collusion, where it occurs, is not always apparent to the end-consumer, and is, every single time, informal and unwritten.

The evidence, as I see it, raises red flags that there may indeed be a lot more centralization in the Ethereum ecosystem than anyone realizes.

This will be an unpopular view, especially among Ethereum people, many of whom are my friends. I do not care. Nobody owes the Ethereum ecosystem an obligation to take Ethereum cheerleader-marketers at their word when they tell us that Ethereum is decentralized, or when they say that Ethereum is capable of delivering on promises which assume that Ethereum’s approach to decentralization both works today and is capable of scaling up in the future (see, e.g., garbage claims like those made for Plasma, a layer 2 solution endorsed by Vitalik and oft-touted by boosters, which is claimed to enable “billions of transactions per second”).


Truly outrageous claims have been made for Ethereum over the years. The claims are so numerous and diverse that a complete exposition of them does not bear repeating here. But extraordinary claims require extraordinary evidence. And at the moment, even Ethereum’s most basic claim – that it is “decentralized” – should be considered at least somewhat in doubt. Only

  • hard-hitting analysis aimed at determining whether collusion has occurred or is occurring in relation to major proposed protocol changes,
  • transparency over the extremely mathematically sketchy pre-sale process,
  • an honest discussion about the fact that Ethereum can’t handle anything approaching normal daily user traffic for a mediocre web app, and
  • more honest discussions about Ethereum’s continuing failure to scale and the likely centralization that is required for Ethereum to continue operating normally under these conditions

will help us get to the truth.

In increasingly greater numbers, reasonable people aren’t buying Ethereum’s lofty pitch. If Ethereum doesn’t like that and is looking for someone to blame, it need only look in the mirror.