This is a follow-up to a post from earlier this year discussing the likelihood of encountering two identical packs of Skittles, that is, two packs having exactly the same number of candies of each flavor. Under some reasonable assumptions, it was estimated that we should expect to have to inspect “only about 400-500 packs” on average until encountering a first duplicate.
So, on 12 January of this year, I started buying boxes of packs of Skittles. This past week, “only” 82 days, 13 boxes, 468 packs, and 27,740 individual Skittles later, I found the following identical 2.17-ounce packs:
I purchased all of the 2.17-ounce packs of Skittles for this experiment from Amazon in boxes of 36 packs each. From 12 January through 4 April, I worked my way through 13 boxes, for a total of 468 packs, at the approximate rate of six packs per day. This…
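For readers who want to sanity-check the “400-500 packs” figure, a small birthday-problem simulation gets you into the right ballpark. The distributions below are my own illustrative assumptions (five equally likely flavors, pack sizes spread uniformly around the observed mean of roughly 59 candies), not the empirical distributions from the original analysis:

```python
import random
from collections import Counter
from statistics import mean

FLAVORS = 5                  # assumption: five flavors, equally likely
PACK_SIZES = range(54, 65)   # assumption: sizes spread around the ~59 mean

def packs_until_duplicate(rng: random.Random) -> int:
    """Open random packs until two share exactly the same flavor counts."""
    seen = set()
    opened = 0
    while True:
        opened += 1
        size = rng.choice(PACK_SIZES)
        counts = Counter(rng.randrange(FLAVORS) for _ in range(size))
        key = tuple(counts.get(f, 0) for f in range(FLAVORS))
        if key in seen:
            return opened
        seen.add(key)

rng = random.Random(42)
trials = [packs_until_duplicate(rng) for _ in range(100)]
print(f"mean packs until first duplicate: {mean(trials):.0f}")
```

Under these assumptions the simulated mean lands in the few-hundred-pack range; plugging in the empirical flavor frequencies and the real pack-size histogram would tighten the estimate toward the original post’s figure.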
In the last week or so, a number of would-be and actual criminals have abused open-access social media to disseminate their messages or issue terroristic threats, in particular on the 4Chan and 8Chan imageboards.
Around the world, the response from non-U.S. governments has been to censor sites with a free speech emphasis – 8Chan, 4Chan and ZeroHedge (but notably, not Facebook) have been banned by New Zealand and Australian ISPs. Canada has joined them in calling for increased regulation. For the better part of the current Parliament, Britain has been considering the Online Forums Bill, which would make platform providers liable for the content posted by their users. What all these solutions share is an emphasis on making communications platforms responsible for the content shared on them, and on the creation of automated censorship programs at each of these companies to prevent any possibly offensive content from making it from draft form to publication.
In the U.S., by contrast, the First Amendment to the U.S. Constitution prohibits such prior restraints on speech, and Section 230 of the Communications Decency Act renders social media platform providers more or less absolutely immune from the speech torts or speech crimes of their users. Accordingly, because there can be no legal government response to “hate speech” online, the response from political activists in the U.S. has been to seek to convert ordinary folks to a pro-censorship position.
Usually, this advocacy takes the form of proposing to ban the social media companies whose platforms were used to issue the threatening language. Take, for example, this blisteringly hot take from New York Times Opinion writer Charlie Warzel:
we’ve long viewed and tolerated these spaces as ungovernable and toxic experiments of maximalist free speech. i’m just curious if the calculus changes for ppl when the community has outsize violent or threatening impact on ppl out in the world. like, say keeping kids from school?
These takes are very poor, and they show that the people writing them don't actually understand the problems of data protection, data disclosure, and social networks. In the next few headings, I'm going to explain why.
1. Posting threats is already illegal, and banned on most (if not all) social media platforms
I’m admitted to practice in two countries. In both of them, posting a terroristic threat on the Internet will result in the speaker being arrested. I know of no website which says it wants this material.
2. The wheels of justice grind slowly, but they do grind onward
Western society is not a panopticon, at least not yet, so we do not have the ability to intercept a message midflight, semantically analyze it and immediately have men with guns and black helicopters converge on the user who posted it.
Nor should we wish to have that ability. Framing this as a First Amendment-type issue, as many commentators (and indeed countries) have in the wake of these incidents, ignores the fact that to stop an unlawful post from finding its way to the surface, either (a) all open-access publishing platforms would need to be eliminated and every single post would need to be monitored and assessed for legal compliance before going live, or (b) you would need to effectively brainwash every Internet user in the world so that none of them were ever inclined to post a threat on the Internet.
An early-intervention approach to speech would be incompatible with not just conceptions of free speech, but also the broad social consensus of how our system of laws is meant to work. Our laws serve a deterrent function: if you post violent threats online, then you are liable to be sentenced to a lengthy term of imprisonment. This approach to controlling and punishing speech places the onus of legal compliance on an internet user, rather than the platforms they use, and maximizes the freedom of compliant internet users to say what they like, at least insofar as those users stay within well-defined legal boundaries.
The downside to the deterrent approach is that human beings have a very warped conception of time-preferences, i.e., they lack the discipline to understand that consequences in the future will arise from being naughty in the present. Time preference issues are why, for example, I just ate a bagel for lunch when I know I need to lose ten pounds, or why, in a more extreme example, people commit violent crime to settle a street vendetta without considering that doing so can incur a lengthy term of incarceration.
This is also why, at least to the extent I have considered this issue, people post violent threats online. There’s some itch that these people need to scratch, psychologically, so they scratch it, consequences be damned.
Government responses in the aftermath of such an event will be deeply unsatisfactory to almost everyone involved, starting with the victims. Justice can be slow. But the delay between the issuance of a written threat and police response, given how our justice system works, is not so much a First Amendment issue as it is a Fourth Amendment and privacy rights issue. After all, as put in Touch of Evil, “a policeman’s job is only easy in a police state.” After a threat is detected, most jurisdictions have procedures for pulling down user data from social media companies – in the case of the UK, for example, a RIPA request – and in the case of the US, a search warrant or subpoena issued pursuant to the federal Stored Communications Act.
Some of these statutes act to affirmatively protect user data; the Stored Communications Act, for example, prohibits companies from handing over user data to government agencies unless they’ve either been served with a court order, or believe in good faith that an emergency threatening serious bodily injury or life exists.
Recent events are not the first time that online imageboards have been used for this purpose. If we’re using 4Chan as an example, see e.g. the arrest and 2008 conviction of Jake Brahm for posting a deranged attempt at “copypasta,” a type of long-form text-based Internet meme, or more recently the 2018 arrest of an Indianapolis man for threatening to attack participants in an “alt-lite” free speech rally in Boston in 2017.
3. “Censorship” is a lazy default answer for writers who don’t know how cross-border data protection issues actually work
Twitter is not the world.
When you ban someone from a social network, they do not disappear. When you censor ideas from a social network, they are not forgotten.
Legislative proposals that put censorship front-and-center ignore the fact that there have always been, and there will always be, high time-preference individuals living among us who are willing to break our social compact irrespective of the long-term personal cost to themselves.
Every day, in every modern country, our fellow citizens rob people at knifepoint/gunpoint in our cities. They steal. They beat their children. Our fellow citizens do horrible things, to each other and even to themselves, with the foreknowledge that doing those horrible things will likely result in legal sanction. Unlawful threats have been part of that equation since time immemorial. Banning publishing platforms for the sins of a few sporadic users would be like banning sidewalks because muggers rob people on sidewalks.
In terms of finding workable solutions, the answer is not to ban the publishing platforms but rather to make our law enforcement systems mutually intelligible. Such an approach will protect honest citizens and not erode civil liberties while also rendering law enforcement a more credible deterrent to violent online threats.
Currently, it is difficult for U.S. companies to share stored communications with law enforcement agencies overseas, because they are prohibited from doing so by federal data privacy law. Only in emergency situations can disclosure be made. In non-emergency scenarios, foreign law enforcement agencies seeking U.S.-based user data must go through a months-long Mutual Legal Assistance Treaty, or MLAT, procedure to ensure overseas requests comply with U.S. constitutional requirements around due process and freedom of speech.
Because the U.S.’ free speech rules are more pro-speaker than those of most, if not all, other countries on the planet, overseas governments – particularly New Zealand, Australia, Britain, and Canada – are the loudest in calling for censorship of American-run web businesses. These calls for censorship strike me as a transparent attempt to do an end-run around U.S. privacy and free speech protections in order to exert influence over the content and tone of the Internet, a global system that no country can, on its own, control.
How, then, do we empower law enforcement globally to combat online threats without using tech companies as de facto content censors? The answer is not to take away people’s rights to speak. Rather, it is to give people more free speech rights in places like Australia and New Zealand. Doing so would alleviate the U.S. political concerns that are likely to prevent, e.g., the entry into an executive agreement for data sharing with third countries under the CLOUD Act, such as the proposed agreement with the UK, as the CLOUD Act requires data sharing arrangements between countries to “afford [as] robust substantive and procedural protections for privacy and civil liberties in light of the data collection and activities of the foreign government” as exist domestically in the United States.
In my professional opinion as a lawyer admitted in both England and the United States, the UK and the Commonwealth countries – which follow the British pattern of constitutional monarchy rather than the U.S.’ pattern of etching Enlightenment values in stone – do not adequately provide those protections.
By raising the baseline for free speech rights overseas, foreign governments would make it considerably easier to obtain information from U.S. companies to pursue the worst offenders. What they would give up, in exchange, is the ability to prosecute those who espouse uncomfortable but non-threatening wrongthink at home that is legal in the United States, such as the speech of Catholic journalist Caroline Farrow who was placed under investigation by the British police last week.
Entrusting our citizens with liberty, and making them face the consequences of violating that trust, is consistent with the English-speaking world’s most cherished political traditions. Granting Commonwealth citizens more rights is the prerequisite to enacting effective data sharing arrangements with the United States that will not face objections from U.S. companies and will help to address modern online threats. Taking people’s rights away will result in further incompatibility between the U.S.’ and other countries’ legal systems, and deny law enforcement the ability to mount effective and rapid responses to cross-border criminality.
Such a solution doesn’t enjoy the best “woke” optics. But it benefits from the fact that it might actually work.
In so doing, Facebook has just created a massive security hole which exposes every single one of its users to life-alteringly shitty hacks. I’m frankly astonished nobody internally at that company thought about this before pushing this feature.
“What’s the issue?” I hear you ask. The issue here is that your average workaday user who is even a little security minded will not only use their cell phone to do two-factor authentication for their Facebook login, but will also use the same cell phone for every other two-factor login or password recovery system they have, including, for example, their e-mail account or their bank. This is not an intelligent approach to security, as using cell phones for two-factor authentication is, to put it mildly, not even remotely secure.
“How so?” You inquire. Well, the answer is because cell phone companies are run by idiots when it comes to security, so even if you leave specific instructions with your provider to not port your SIM without a PIN and photo ID, smooth-talking criminals can still convince telco employees to do it anyway, with the result that the crook obtains control of your phone number – and can receive any communications sent to it.
This is not a theoretical problem. Cast your mind back to mid-2017, coming off the back of the Bitcoin boom. One day, I get a really weird Twitter message from my friend @twobitidiot, aka Ryan Selkis, asking me if I can lend him some Bitcoin.
Now, as Ryan knows, I am probably the filthiest nocoiner – i.e. non-Bitcoin investor – in existence, in large part because (a) when I got into crypto I was poor and young and (b) I was 100% behind permissioned blockchain implementations, which the startup I co-founded invented. Investing in shitcoins would have been uncouth, a betrayal of my most deeply-held values and of my firm belief that global, systemically-important financial institutions love us and want us to prosper.
I was naturally suspicious of his inquiry. I had good reason to be:
This story was repeated over and over again last year. People got their phone numbers ported. The hackers logged in to all of their accounts. The hackers took all of their stuff. Lather, rinse, repeat.
Nobody has really gotten to the bottom of how these phone numbers were ported with such laser-like efficiency. Personally, I think Facebook’s service played a part. At the time, I remember that I and others were getting bombarded with friend requests from slick-looking fake CEOs with good hair claiming to helm fake startups in SE Asia. As a general rule, I don’t add people on Facebook who I haven’t met. Other people do, and a slick CEO of an edgy tech startup is a great person to make friends with, especially for folks in crypto looking to expand their networks. As these friend requests rolled in, they began to look increasingly credible as more and more crypto people I know appeared to be “friends” with these accounts.
Meaning that if crypto people had posted their cell phone numbers as “friends-only” or “friends of friends” on their accounts, the fraudsters had their numbers, too, and could start creeping their way towards the bit/shitcoin hoards these people were thought to hold on crypto exchanges and the like. This is some serious business.
Which brings us to the problem of Facebook making cell phone numbers searchable by default, even when those numbers are visible only to a user’s friends or “friends of friends,” and even when the user wants to keep their phone number private (the “only me” setting). (Edit: the cell phone lookup is in fact set to be shared with “everyone” by default, which is crazy; and not even the most restrictive, friends-only, search setting is protective enough, since fraudsters can and do find their way onto “friend” lists.)
Due to this, to be blunt, Facebook’s new search feature will allow fraudsters to use Facebook to verify the identities of cell phone subscribers, even where Facebook users have locked down their cell phone numbers on their profiles to avoid this very outcome. In permitting anyone to search cell phone numbers, Facebook has compromised the security of every individual user of its service in the name of convenience.
All someone needs to do, conceivably, to exploit this new “feature” from Facebook is to punch in random cell phone numbers until they hit paydirt and discover a corresponding identity. If the user isn’t particularly security-minded, they’ll have birthdates and addresses publicly viewable, too. After the target is identified, the hacker simply calls up the user’s cell service provider, and social engineers a SIM port. Boom. All SMS-based 2FA that person used with that number, on any service, is now compromised. Including the 2FA for the user’s Facebook account.
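To see why “punch in random numbers until you hit paydirt” is not far-fetched, a back-of-the-envelope geometric-distribution estimate is enough. Every figure below is an illustrative assumption, not a real Facebook or carrier statistic:

```python
# How many random guesses until a lookup matches a real account?
# If a fraction p of the number space resolves to an account, the
# number of guesses to the first hit is geometric with mean 1/p.

NUMBER_SPACE = 10**10        # assumption: naive 10-digit dialing space
ASSIGNED = 3 * 10**8         # assumption: numbers in active use
MATCH_RATE = 0.5             # assumption: share tied to a searchable account

p = ASSIGNED * MATCH_RATE / NUMBER_SPACE
expected_guesses = 1 / p
print(f"hit probability per guess: {p:.1%}")
print(f"expected guesses to first hit: {expected_guesses:.0f}")
```

Even with these deliberately rough inputs, the expected number of guesses comes out in the dozens, not the millions, which is why an unthrottled lookup endpoint is an enumeration tool.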
There are a couple of solutions a Facebook user can adopt, in the meantime, to help ameliorate this issue. One option is to remove your phone number and stop using SMS 2FA, or to switch to a number from a service like Google Voice, which cannot be ported away at a carrier store. Another is to lock down the settings to the extent you can (searchable to friends only) and hope that (a) your friends don’t get hacked and (b) you haven’t accidentally friended anyone who is a hacker or a fake, which – at least for some of my buddies in crypto – is a day late and a dollar short.
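As a sketch of what the safer alternative looks like: app-based authenticators implement TOTP (RFC 6238), which derives codes from a shared secret stored on the device rather than from anything a carrier controls, so a SIM port gains the attacker nothing. A minimal implementation, using only the standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    return hotp(secret, int(time.time()) // step, digits)

# Sanity check against the published RFC 4226 test vector.
print(hotp(b"12345678901234567890", 0))  # prints "755224"
```

The point is not that anyone should roll their own authenticator, but that the code depends on nothing a telco employee can be sweet-talked into handing over.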
The problem these solutions share is that most of Facebook’s userbase is blissfully unaware of the risks of SMS-based 2FA, so they either won’t take these measures or won’t implement them effectively.
I’m pretty sure I’m not wrong about this, but if I am, I’ll be happy to discuss it on Dissenter. It strikes me that the engineering boffins over at FB are – not being cryptogeeks – almost totally blind to the risk they’ve just created for hundreds of millions of users as a result of SIM porting. It also strikes me that the best way to address that risk is to kill the feature.
After they do, we all need to seriously re-evaluate our relationship with any interactive service that asks us for our mobile phone numbers before we can use it. After all, a company of Facebook’s size just made an error so elementary that a lawyer who can barely program “hello world!” in Python picked up on it, while all its engineers and security professionals didn’t.