XRP: the more things change, the more they stay the same

This week, I’ve written a guest post over at The Block in the Crypto Caselaw Weekly column:

Entrepreneurs in the crypto space, or in any space for that matter, are risk-takers, and sometimes quite cavalier ones. What matters for the legal practitioner advising entrepreneurs on the cutting edge is to be sufficiently confident in one’s handle on the technology or the marketplace that one can ensure that whatever folie à plusieurs has seized the imagination of the market does not also have a firm hold on you.

Read the whole thing.

While you’re at it, also read my blog post from last year, For the last time, Ripple Labs created XRP.

What Section 230 of the Communications Decency Act actually says

I’m not your lawyer and the below is not legal advice. Note disclaimer.

In the wake of the El Paso and Dayton shootings and the subsequent deplatforming of the 8Chan imageboard, Section 230 of the Communications Decency Act (47 U.S.C. § 230) has become one of the most-discussed, most-misinterpreted, and, in terms of its practical effects on day-to-day American life, most poorly understood legislative provisions in American public discourse.

If you ever hear a conservative politician such as Ted Cruz or Josh Hawley talking about Twitter as a “publisher and not a platform,” you’re listening to a viewpoint that doesn’t reflect Section 230 at all. If you read in the New York Times that the importance of Section 230 is that “sites can moderate content — set their own rules for what is and what is not allowed — without being liable for everything posted by visitors,” and the piece leaves it at that, you’re reading a viewpoint that understands what Section 230 says, but doesn’t understand what it does.

If you read still other commentary, such as this blog post by Twitter user Ben Thompson, who writes:

Section 230 doesn’t shield platforms from the responsibility to moderate; it in fact makes moderation possible in the first place. Nor does Section 230 require neutrality: the entire reason it exists was because true neutrality — that is, zero moderation beyond what is illegal — was undesirable to Congress.

…which leads Mr. Thompson to the conclusion that:

This is how we have arrived at the uneasy space that Cloudflare and others occupy: it is the will of the democratically elected Congress that companies moderate content above-and-beyond what is illegal, but Congress can not tell them exactly what content should be moderated.

…you may safely conclude that what you are reading is flat wrong. This viewpoint belongs to someone who wishes to make a trendy or clever point, having just read the Wikipedia article on the subject. In fact, Section 230 does shield platforms from the responsibility to moderate content which is legal in the United States. It was expressly intended to do so.

As with many things, the problem with Section 230 as an object of public discussion – as with gun control – is that the vast majority of people on either “side” of the discussion have absolutely no idea what they’re talking about. Most people clearly haven’t bothered to read it. Of those who have, a minority have a legal education. Most people have never worked with it in a professional setting, let alone invoked it on behalf of a client.

I have. I understand that Section 230 is one of the most powerful pro-freedom, pro-free markets, pro-American, anti-government overreach laws in existence. If Section 230 is neutered, American online life will change beyond recognition.

Section 230 protects American companies, and by extension all American citizens who benefit from the services those American companies provide, by conferring broad immunity from frivolous lawsuits and government interference.

It does this simply, deliberately, and effectively. It’s not rocket science, and we’re going to walk through what the key provisions say and how they work, line by line, below.

What Section 230 of the Communications Decency Act actually says

1. 230(c)(1): “Platforms” are more or less absolutely immune from liability arising out of user-generated content…

Contrary to what you read in the news, Section 230(c)(1) has absolutely nothing to do with content moderation. It has nothing to do with “platforms acting as publishers.” It has to do with how we treat user-generated content on a “platform” (properly, an “interactive computer service”).

It reads as follows:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

To understand what this means we need to turn to the definitions section, Section 230(f), which says:

  • “interactive computer service” means “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” In other words, for most people, a web app or other kind of internet communications platform.
  • “information content provider” means “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” In other words, a user who generates content.

Thus, for most people’s everyday browsing experience, in plain English this section means:

Platforms and users are not liable for content created by someone else. 

This makes a lot of sense, if you think about it. “Not my circus, not my monkeys” is a bedrock principle of liability in the English common law tradition. Section 230 extends this principle to internet-based communications.

The law is simple, and clear. 

…and the politicians are deliberately screwing up their explanations of it

That has to be it, because I can’t believe that guys as clever as Ted Cruz or Josh Hawley, both attorneys, both really good attorneys, are that misinformed. The idea that, per Senator Hawley,

With Section 230, tech companies get a sweetheart deal that no other industry enjoys: complete exemption from traditional publisher liability in exchange for providing a forum free of political censorship.

…doesn’t appear anywhere in Section 230(c)(1). Or anywhere in the entire Communications Decency Act, for that matter. This is because the Communications Decency Act was designed to promote the development of the internet “unfettered by Federal or State regulation” (see s. 230(b) – more on this below), and was not a qualified, transactional kind of law where tech companies received an immunity in exchange for providing unbiased political forums.

It is a testament to how truly awful politics is that politicians have managed to so thoroughly confuse the public about a legal command which is only slightly more complicated than a stop sign. Apart from the fact that an obligation of impartiality is not mentioned anywhere in Section 230, Section 230(c)(1) deals with the allocation of liability for statements made on the Internet at a specific moment: the moment they are made. 230(c)(1) focuses on a point in time at which it is impossible for content moderation to have occurred.

Even if a given platform’s moderation were openly biased against one particular viewpoint, even if the entire world used that platform, even if that platform advertised itself as “the world’s public square for free speech,” and even if that claim weren’t true, all of those questions are conditions subsequent that depend on the statement having been made in the first place. Whatever platforms might do or refrain from doing after that point, when moderating statements that have already been made, is, from a public policy standpoint, irrelevant to the question of how liability should be apportioned at the moment of genesis.

Section 230(c)(1) is therefore engaged any time any user creates any content on any platform that is subject to the jurisdiction of the United States. It sets out the simple principle that user-generated content is the problem of the user who created it, and no one else’s.

There are, of course, certain very limited carve-outs specific to federal criminal law, e.g. illegal pornography or sex trafficking (and note, the sex trafficking provision, known as FOSTA/SESTA, is currently being very credibly challenged on First Amendment grounds in the courts), which do create an obligation to remove certain types of unlawful speech after it has been posted. But this isn’t moderation so much as it is prohibition, and these carve-outs aren’t what the politicians are talking about when they talk about Section 230 reform.

2. 230(c)(2): “Platforms” are immune with regard to good-faith moderation calls

Section 230(c)(2) is an entirely separate provision. It has no effect whatsoever on the rights granted by 230(c)(1). Section 230(c)(2) confers immunity for moderation activity for platforms that choose to moderate. It reads as follows:

No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in [sub-]paragraph ([A]).

In plain English:

If a web app moderates any content off of its platform, and you sue it for doing so, you’re going to lose.

This statutory provision isn’t actually necessary for platforms to cover their asses vis-à-vis their users. This is because, each and every day, users consent to tech platforms’ contractual terms of service, and in so doing the users contract out of any remedies they may have for discretionary content moderation activity undertaken by those platforms. In plain English, when you use a platform like Twitter or Facebook, you agree to let them moderate your content. With this in mind, why do we need Section 230(c)(2) in the first place? Why not just let the market sort it out and let platforms compete on terms?

Well, this provision, and indeed all of Section 230, is expressed to supersede any contrary state law (see s. 230(e)(3)). This section is therefore best understood as being about preventing state regulators from creating state-level rules that could impose liability for content moderation, e.g. if Texas created a cause of action for moderating climate change skepticism and California created a cause of action for moderating climate change activism.

That such a provision can also be raised as an affirmative defense to any action challenging good-faith content moderation is a bonus, a nice-to-have. But it isn’t really why it’s there.

Accordingly, it would be less than correct to say that this rule is about encouraging companies to moderate “hate speech” (a term that was barely used in public discourse in the mid-1990s, when the statute was drafted) or any other type of objectionable content. Contract law could have handled (and does handle) that just fine. It would be more correct to say that this rule acts as a backstop that prohibits local interference with an interstate system.

More on that below.

3. The “will of Congress”

When we read, from Mr. Thompson and numerous other commentators, that

it is the will of the democratically elected Congress that companies moderate content above-and-beyond what is illegal

…that’s just not true. The statute says nothing of the sort.

The Federal Government is constitutionally barred by the First Amendment from regulating protected speech (see: “Congress shall make no law… abridging the freedom of speech,” U.S. Const., Amend. I). Accordingly, the regulation of “hate speech” and other forms of highly objectionable content, such as that complained of by the Times or Mr. Thompson, has been outside the province of Congress for 228 years. It cannot have been intended to be regulated by the Communications Decency Act or any other statute.

The intent of Congress is that companies who operate legal businesses are immune from liability arising from user-generated content, subject to the few aforementioned carve-outs under federal law. Period. They’re also immune from the (civil) consequences of good-faith moderation of objectionable content. Period.

Section 230 is a one-way street: “these are the rules, because the unrestricted development of the Internet is good for America.” Congress goes out of its way to tell us that this is what it intended. To see this we need to navigate away from Wikipedia, look up the statute and scroll up one sub-section to the (legally non-binding) preamble, Section 230(b), in which Congress says:

It is the policy of the United States (1) to promote the continued development of the Internet and other interactive computer services and other interactive media; (2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation; [and] (3) to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services.

In plain English:

  1. Internet good.
  2. Free markets good. Government regulation bad.
  3. States creating local rules that result in American domestic “splinternet” very bad.
  4. Users should be maximally in control of what they see.

Note that “providing an unbiased forum free of political censorship” is not among the United States’ stated reasons for enacting Section 230.

4. Proper avenues for content enforcement

In the wake of a tragedy like El Paso or Christchurch there is a temptation to recommend regulating online speech. I wrote about this at length in a blog post I published after the Christchurch shooting, Open Access Publishing Platforms and Unlawful Threats. Summarizing that post in a line, calls to ban online speech in response to a tragedy are feel-good solutions that don’t actually make anyone safer.

First, banning a user from a website doesn’t ban the person from existence. They’re still very much there, and there are many darknet avenues they can use to communicate with like-minded people.

Second is the lesson that I was taught as a four-year-old, but which most journalists these days seem to have a hard time grasping:

Sticks and stones will break my bones, but words will never hurt me.

A post on 8Chan isn’t going to kill anyone. It can’t even scratch anyone. It can’t do anything to anyone else but offend them. What it does, however, every single time, is identify the speaker. And if that speech is threatening and/or illegal, it gives law enforcement probable cause to serve a platform such as 8Chan with a search warrant or emergency disclosure request. Just as the publication of Unabomber Ted Kaczynski’s manifesto in the New York Times led to his identification by his own brother thanks to the presence of telltale prose, a user on 8Chan will generate all manner of personally identifying data, including e.g. an IP address, which law enforcement can use to try to intervene before a tragedy occurs. If the most extreme users migrate to ZeroNet, as indeed some have, this ability all but disappears.

Saying “we should ban 8Chan” is the digital equivalent of burying your head in the sand. What will happen, will happen. If you ban the site, you will just remain blissfully unaware of it until after the fact.

5. Why do we want Section 230?

I’ll write this story in full another day. Suffice it to say, perfectly legal user-generated content can be a pain in the ass for any internet business, and the people and governments who dislike and complain about perfectly legal user-generated content are an even bigger one.

Section 230 makes it very easy for a business to focus on its business and tell the hordes of basement-dwelling complainers from around the world to get lost. Countries like the UK, France, and Australia empower the complainers by imposing liability on service providers for failing to remove content which is defamatory, or which the governments of those countries, as informed by “concerned citizens,” find objectionable.

The American answer to these issues is (in the case of defamation) to file a lawsuit against the user who generated the content, or (in the case of objectionable but legal content) to do nothing at all, or to leave the discretion to moderate in the hands of the company providing the service. If we ditch Section 230, all that changes. The American answer, and the First Amendment, will cease to be relevant. Litigants and foreign actors will have considerably greater power to use lawfare to shut down American venues that host speech that these private or foreign persons disagree with, including political speech.

Foreign governments adopt approaches to speech regulation which are subjective and, if enacted in the U.S., would be both unconstitutional in their aim and unconstitutionally vague.

The U.S. approach, and the Section 230 shield that creates it, is the last defense for freedom online and should not be amended. Make no mistake, the intent of Congress in passing this law was to protect online business from vexatious litigation and government interference of all kinds, foreign and domestic.

We already have myriad legal solutions for truly egregious conduct online. If speech on an open-access publishing platform is criminal, law enforcement needs partners who are able to respond to search warrants, and the Stored Communications Act gives law enforcement agencies the legal framework in which to obtain that evidence. If speech is not criminal but privately damaging, and if it’s really that damaging, there should be a private cause of action against the user who created the content, and that action should be brought against that user, and that user alone.

If it’s neither criminal nor tortious, really, who cares? There are 7+ billion people on the planet. We can’t silence everyone who disagrees with us. Nor should we want to try.


Thoughts on Facebook’s Libra Coin

Permissioned blockchains: where it all began

Facebook’s Libra coin is to be built on a thing called a “permissioned blockchain.”

Permissioned blockchains – i.e., blockchains that are designed not to be decentralized cryptocurrencies but rather are designed to be databases with approved validators, administrators and granular write permissions – have long been some of the more misunderstood critters in blockchain-land. They have been derided as slow databases, cheap knock-offs of Bitcoin or easy innovation wins for lazy bank tech people with budgets to spend.

The first permissioned chain prototype, however, had nothing to do with money. I should know; my team built it.** It was designed to be a distributed decision-making platform, with a UI like Reddit’s. We called this product Eris. And at the time we described it as follows:

Current free-to-use internet services, from search to e-mail to social networking, are dependent on advertising revenue to fund their operations. As a result, companies offering these services must – to paraphrase Satoshi Nakamoto – ‘hassle their users for considerably more information than they would otherwise need.’ This necessity has skewed the internet toward a more centralized infrastructure and usability system than it was intended….

Where Bitcoin was designed to solve this problem in relation to point-of-sale and banking transactions, [we are] working on solving this issue for internet-based communications, social networking and community governance — bearing in mind that for free internet services such as e-mail, social networking, search and “open data,” intrusion into users’ private lives and the accumulation and centralisation of vast quantities of personal information in centralised silos is not some minor and ancillary nuisance — this is a design imperative for everything that [we are] engaged in. As such, Eris is not another web service; Eris is significantly different because it has been designed and implemented specifically to not use servers.

[Image: Eris, the prototype]

Put another way, back in 2014, the “permissioned blockchain” was conceived as a tool of liberation from centralized architecture that would allow disparate groups of people in different parts of the world to spin up distributed cryptosystems for discrete processes, arrive at consensus as to the outcome of those processes, and have a cryptographically verifiable record of the means through which those outcomes were reached.

[Image: Andreas Olofsson’s People’s Republic of Doug, another early prototype of blockchain permissioning in a non-monetary use case]

Building a big, clunky, global, one-chain-to-rule-them-all permissioned blockchain system is dumb. Such a system would be unscalable and wouldn’t accommodate the kind of granular read permissions that you need to implement when you’re trying to maintain even a semblance of user privacy. Where private chains shine is in obviating the need to use third-party services like AWS to run a program for users in different locations. With a chain, a group of users can run a program simultaneously and easily, and can verify proper execution, on their own consumer-grade hardware even if they’re far apart. But in exchange for not using AWS, the people running the program among themselves have to be comfortable showing each other all of their cards.

Put another way, in exchange for getting privacy vis-à-vis the outside world, you’re required to have radical transparency among your counterparties. If that trade-off doesn’t make sense, chances are it wasn’t (and isn’t) a good use case for a permissioned blockchain.
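To make that architecture concrete, here is a minimal sketch, in Python, of the kind of system described above: a fixed set of approved validators, granular write permissions, and full replication of the log by every participant. It is a toy under stated assumptions, not Eris, and every name in it is invented.

import hashlib
import json

# Hypothetical permission tables; the names and granularity are invented
# for illustration and do not reflect any real product's schema.
VALIDATORS = {"alice", "bob"}
WRITE_PERMISSIONS = {"alice": {"proposals"}, "bob": {"proposals", "votes"}}

class PermissionedChain:
    """A toy permissioned chain: an append-only, hash-linked log of
    permission-checked writes, replicated in full by every participant."""

    def __init__(self):
        self.blocks = []   # every node keeps the entire history ("all cards shown")
        self.state = {}    # the current key -> value view derived from the log

    def append_block(self, proposer, writes):
        # Only approved validators may extend the chain...
        if proposer not in VALIDATORS:
            raise PermissionError(f"{proposer} is not an approved validator")
        # ...and each write is checked against its author's permissions.
        for author, key, value in writes:
            if key not in WRITE_PERMISSIONS.get(author, set()):
                raise PermissionError(f"{author} may not write to {key!r}")
        prev = self.blocks[-1]["hash"] if self.blocks else "genesis"
        block = {"prev": prev, "proposer": proposer, "writes": writes}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks.append(block)
        for _, key, value in writes:
            self.state[key] = value

    def state_hash(self):
        # Two honest nodes that applied the same blocks produce the same
        # digest; far-apart peers compare these to verify proper execution.
        return hashlib.sha256(
            json.dumps(self.state, sort_keys=True).encode()).hexdigest()

# Two nodes, far apart, apply the same block and agree on the result.
node_a, node_b = PermissionedChain(), PermissionedChain()
for node in (node_a, node_b):
    node.append_block("alice", [("bob", "votes", "yes")])
assert node_a.state_hash() == node_b.state_hash()

Note what the sketch makes plain: every node’s copy of blocks contains everyone’s writes. The write permissions are granular; the reads are not.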

After we released the code for the first permissioned blockchain client in late 2014, I spent the better part of Q4 2014/Q1 2015 (the pre-Blythe Masters era) pounding pavement in London – literally traipsing around on foot from one bank to the next – to explain what, exactly, this newfangled blockchain thing was and why it was going to be relevant for the financial services business going forward. We managed to pick up one design and build contract here, another one there, but at the time scalable business models in enterprise chains were hard to come by and, to a certain extent, they still are.

Enter Libra

That was a long time and another career ago. Now I practice law in the countryside and occupy my free time shooing marmots away from my plants.

When I heard Facebook was going to launch its own blockchain product, I was initially really excited. It had the potential to be good validation for the permissioned chain thesis. The use of a permissioned chain would permit, for example, seamless value transfer between accounts on Facebook’s various social media properties, such as Instagram and WhatsApp, which would settle out to USD or local currency. Privacy wouldn’t be an issue as Facebook would run all the nodes. Other companies would presumably follow suit and use similar systems to pass data between their own different consumer-facing applications.
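Functionally, that boring-but-useful version of Libra would be little more than a shared balance table that several apps can write to. A minimal, hypothetical sketch (invented names and numbers, and nothing to do with Facebook’s actual code) might look like this:

# A hypothetical internal ledger for cross-app payments: balances in USD
# cents, one authoritative operator, no blockchain required.
class InternalLedger:
    def __init__(self):
        self.balances = {}   # account id -> balance in USD cents

    def credit(self, account, cents):
        self.balances[account] = self.balances.get(account, 0) + cents

    def transfer(self, src, dst, cents):
        # A real system would wrap this in a database transaction and an
        # audit log; the operator's database is the single source of truth.
        if self.balances.get(src, 0) < cents:
            raise ValueError("insufficient funds")
        self.balances[src] -= cents
        self.credit(dst, cents)

ledger = InternalLedger()
ledger.credit("whatsapp:alice", 10_000)                     # $100.00 deposited
ledger.transfer("whatsapp:alice", "instagram:bob", 2_500)   # $25.00 moves across apps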

But this is not what Facebook built.

I know blockchains well, and there is no reason whatsoever to use a blockchain for Libra. Furthermore, the promises Facebook has made in relation to Libra make no sense if a blockchain is used.

For example, Facebook promises that it will “allow users to hold one or more addresses not linked to their real-world identity” but will also be regulatory compliant. Facebook also tells us Libra will be “a single data structure” that allows validators to “read any data from any time and verify the integrity of that data using a unified framework” but also, in public statements, that the company cares about privacy and that features to “enhance privacy” will be considered over time.

This is nonsense. Design choices have consequences, and when a specification gives, it also takes away. If you permit anonymous transactions you will fail KYC/AML/CFT requirements. If you hold all data forever and allow validator nodes to read it, user privacy simply does not exist.

Keeping in mind the need for global regulatory compliance, an immutable public ledger and data privacy of any sort are incompatible. On the blockchain, every validator has read permissions, ipso facto. Libra is proposing to have 100 corporate validators. With distribution that broad, it might as well publish the transactions in the New York Times.
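The point about read permissions is structural, and easy to demonstrate. In a fully replicated ledger, a validator’s ability to read is not a setting; it is a consequence of the validator holding a copy of the data. A toy illustration, with invented addresses:

# Full replication: every validator holds an identical copy of the ledger.
transactions = [
    {"from": "addr1", "to": "addr2", "amount": 5},
    {"from": "addr2", "to": "addr3", "amount": 3},
]
validators = {f"validator_{i}": list(transactions) for i in range(100)}

# Any single validator can reconstruct any address's entire history; no
# permission check is possible, because the data is already on its disk.
history = [tx for tx in validators["validator_7"]
           if "addr2" in (tx["from"], tx["to"])]
print(history)   # both transactions touching addr2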

Not even the most silver-tongued Frenchman can smooth over a pile of bullshit this large.

So here’s what I think

If Facebook had built an app that just allowed in-app, user-friendly payments in USD, I could have gotten behind that.

What has been proposed instead is a combination of a licensed version of Liberty Reserve with an ETF, which furthermore proposes to share transactional data with a veritable rogues’ gallery of privacy-abusing tech companies and the VC firms that funded their rise. The marketing plan uses the word “blockchain” to frighten away legislative scrutiny, under circumstances where the scheme clearly has as its aim the destruction of government money and the enrichment of Facebook. It is clear enough from David Marcus’ comments today what Facebook wants to do: the company wants to own payments, from the moment someone is paid, to the payments for the advertisements that person is served, to tracking where the payments go and how effective the advertising was, and right back to where the merchant pays its employees and the process begins anew.

Facebook has assembled a stunning array of partners, from Andreessen Horowitz to Uber, whose mantra for the last two decades has been venture-funded loss-leading expansion to establish a monopoly, followed by network effect-reinforced rent extraction once the monopoly position has been secured.

[Image: Libra’s “Circle of Trust”]

Facebook’s problem, of course, is that it cannot grow any larger – its products are used by more than a third of all sentient life – so it must grow deeper. See, for example, internet.org, a scheme whereby Facebook seeks to become the only means of access to telecommunications available to the global lumpenproletariat. Among those societies which can afford telecommunications, the growth plan is Libra: turn Facebook into the only means of access to commerce.

Just as Uber eviscerated municipal taxi guilds with little regard for regulations and livelihoods, so Facebook will eviscerate small countries, community banks, credit card issuers and small payment processors, in collaboration with industry giants, until it – through its Libra cabal – has the power to determine the terms on which ordinary Americans, and indeed anyone on Earth, can buy goods and services, if our governments let Facebook get away with it.

This will be no ordinary monopoly; it will be neo-feudalism. Presumably the plan is for goods and services sold by the hundred-odd eventual Libra members to be heavily discounted to incentivize people to toss their Amex cards and close their bank accounts. Libra will be hooked into Stripe, Uber, PayPal, and other companies run by the same bunch of Allbirds-wearing hipsters in a 50-square-mile area of California, with the acquiescence of card giants like Mastercard and Visa, all with the aim of making Libra more attractive than money.

Once money is dead, then the trap will snap shut. Look up the term “Embrace, extend, and extinguish.” This is an old West Coast tech tactic. They’re doing it here just as they have before.

And it won’t stop with dominating commerce. The Valley has long shown a willingness to deplatform businesses, people and ideas of whom or which they disapprove. See Milo Yiannopoulos (banned by Facebook, Twitter, and Coinbase – the latter of which is also a Libra consortium member), Laura Loomer (banned by Facebook and Uber), Alex Jones (banned by everyone), or Lindsay Shepherd (banned by Twitter).

The cherry on top – an insult to our intelligence – is that Facebook and its for-profit commercial partners claim that this is a non-commercial public good to bank the unbanked. They point us to the “Libra Association,” notionally a non-profit in Switzerland despite the fact that operating an income-generative money transmission system is by any reasonable measure a for-profit enterprise.

If Facebook raised an army, this would be only slightly more hostile to the people of the United States than what is currently proposed. Big Tech doesn’t share American values and doesn’t care about American users. It doesn’t care about the unbanked. It cares about money. It cares about building defensive moats, i.e., monopolies. And Libra – the tech industry monopolization of global finance – is a phenomenal way to get both free money (the token represents, after all, an interest-free loan from Libra’s users) and a very deep, wide moat, not just for Facebook, but also for every other major category leader/tech monopolist on the planet.
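The “free money” point is just float arithmetic: users hand over interest-bearing currency for tokens that bear no interest, and the issuer keeps the spread. A stylized back-of-the-envelope, with invented numbers:

# Stylized, invented numbers: the float income on a fully reserved token.
circulating_libra_usd = 10_000_000_000   # users swap $10B of real money for tokens
reserve_yield = 0.02                     # reserves parked in short-term paper at 2%

# Token holders earn nothing; the issuer keeps the interest on the reserve.
annual_float_income = circulating_libra_usd * reserve_yield
print(f"${annual_float_income:,.0f} per year")   # $200,000,000 per year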

I would have a hard time arguing with anyone who suggested that the Libra scheme, if operational, could possibly meet all of the criteria for a cartel. As proposed, it should not be permitted to exist. Of course it will be, just as all manner of funky commercial activity becomes passable as long as the word “blockchain” is included.

In closing

There is no need for a permissioned blockchain here. Facebook should back off the cypherpunk act and just use PostgreSQL. Permissioned blockchains, like all distributed systems, crypto or otherwise, are best used to circumvent Big Tech.

I am hopeful for the future. Today we see seeds of rebellion across the Internet. New payment processors. New social networks. Bitcoin. In the name of profits, Facebook is infuriating the very people who would otherwise be its power users. Their arrogance inspires competitors. I hope they keep doing it. And I predict Libra – the mother of all corporate overreaches – will be the company’s Stalingrad.

I grow increasingly tired of hearing what Big Tech thinks the world should look like. I trust millions of others are starting to get sick of them too. I embrace decentralized solutions as soon as they cross my desk.

So should you.


**Postscript:

A little-known fact is that, just as Satoshi Nakamoto is the mystery-shrouded father of Bitcoin, the father of permissioned blockchains is a quasi-mythical character known only as “Marmotoshi Nakaburrow.” After extensive investigation I have narrowed down his identity to two possibilities:

  • The first possibility is that he is a groundhog named “Doug the Smart Contract Marmot” who lives in Preston Byrne’s back yard.
  • The second is that he is not one man but a group of people. The title “father of the permissioned blockchain” (sorry ladies) can probably be divided up evenly among Monax’s Dr. Tyler Jackson and Casey Kuhlman, Cosmos’ Ethan Buchman and Sweden’s Andreas Olofsson. Preston wrote the copy and signed the checks, so I suppose he played a role too.

Marmotoshi’s identity must remain secret, so the question of whether this blog post is written by Preston Byrne or Doug the Smart Contract Marmot will go unanswered.

All you need to know is that this post was written by Marmotoshi himself.

On the looming Bitcoin bubble

I haven’t made any predictions for a while (seeing as my Bear Case for Crypto is playing out more or less exactly as described), but I will sound a warning today.

Dramatic run-ups in the price of Bitcoin strangely seem to coincide with large exchanges having banking, withdrawal, and possibly solvency problems. This was the case with, e.g., Mt. Gox in 2013, and some have argued was also the case with long-suffering crypto exchange Bitfinex in 2017.

Gox is an old story, from Bitcoin’s ancient history, where a mere $460 million was at stake. Fortunately, the scribes at Wired etched the tale into granite, a copy of which may now be read online here.

For those of you who are really new around here, the 2017 bull run coincided, almost to the day, with two events at the beginning of April 2017:

  1. Bitfinex getting cut off from the U.S. banking system by Wells Fargo.
  2. The Tether shadow dollar hawala system, which was nominally independent but appears to have been managed by the same individuals who run Bitfinex, kicking into overdrive and beginning a ten-month run in which Tether would print several billion dollars’ worth of Tether tokens.
[Chart. Source: CoinDesk]
[Chart: Bitcoin’s bull run, starting in April 2017. Source: coinmarketcap.com]
[Chart: Tether was created in 2015, but began aggressively issuing its new USDT “stablecoin” tokens in April 2017 and did so throughout the 2017 bull run.]

Prominent critics have delved into possible explanations for this coincidence in more detail than I care to expand on here, save to say that I do not dismiss those explanations out of hand.

Today, Bitfinex – the largest cryptocurrency exchange in the world – appears to be in a spot of trouble once again. And the price of Bitcoin is rising quickly once again.

I will not dwell at length on Bitfinex’s current drama, save to say that individuals who have allegedly done business with Bitfinex are under federal indictment, assets managed by those individuals have been seized, and Bitfinex itself is known to be under investigation for alleged fraud by the Attorney General of New York.

If you’re a trader or investor, tread carefully. It is possible that the current price of a Bitcoin bears some relation to, and is uniquely vulnerable to, regulatory developments.

If the looming bubble should spin wildly out of control, here’s a timely and healthy reminder to investors: keep your wits about you, don’t proclaim that a new paradigm is upon us, and be mindful of gravity.