The “AI Misinformation” problem can be completely solved by cryptocurrency-based “proof of human”

I was on the Tweets Xeets this morning when I stumbled across this video of our stunning, brave, and illustrious Vice-President calling out “misinformation” enabled by artificial intelligence:

This comes on the heels of an AI-based executive order essentially requiring big compute clusters to register with the government, which my friend and colleague Matthew Richardson wrote about on my law firm’s website so I shall not add to it here.

“Misinformation” and “disinformation” are terms thrown around often in political discourse, usually by the political left. In practice, the terms have become roughly equivalent in meaning to the term “propaganda.” Even then, “propaganda” has itself morphed in meaning in recent years; earlier treatments of the term implied a pervasiveness which meant that misinfo, disinfo, and propaganda were necessarily the exclusive preserve of the state (see, e.g., Jacques Ellul’s Propaganda from 1965: “[p]ropaganda must be continuous and lasting – continuous in that it must not leave any gaps, but must fill the citizen’s whole day and all his days; lasting in that it must function over a long period of time.”)

What is different in the current age from the 1960s is that publishing tools which were then only available to commercial enterprises and the state are now available to anyone. The busiest day this blog ever had, for example, was in 2021, when a blog post I wrote inquiring about the United States’ strategic blindness to the Taliban’s use of WhatsApp got picked up by some right-wing internet influencers and drew more than 100,000 uniques in the course of a day. As recently as the 1990s that kind of reach for a political pamphlet would have been ridiculous, unheard of, impossible unless presented front-and-center on the opinion pages of the Gray Lady or, in the case of dissident literature, in a truly exceptional case such as that of the Communist Manifesto or “Industrial Society and Its Future” – complete outliers from what most people would consider normal political discourse. Similarly, Instagram influencers with an iPhone and a wardrobe budget routinely do numbers today that would have put entire 1960s ad agencies to shame, achieving so much with so little.

The Internet allows anyone to become a propagandist, and the increasingly decentralized nature of political movements means that propaganda in service of a cause can be incredibly difficult for its enemies to squash. Even in the 2016–2022 period, when practically every major social media site served as an extension of the center-left U.S. security state, right-wing thought still managed to break out. That right-wing thought is now starting to proliferate in society can, I think, be attributed almost entirely to the fact that one of the major social media sites has decided to take its thumb off the scale and let nature run its course.

Generative AI is the “splitting the atom” moment for political propaganda. Images, words, and arguments can be created and posted ad infinitum by unceasing machines – superhuman shitposters who never tire and feel no shame. The question for us is not whether the machines will be better at shitposting than we are – that question is settled; the machines won. The question is how we determine whose opinions we will listen to and how we filter out the rest.

The answer, of course, is “proof of human.” The problem we are trying to solve is an ancient one, known to computer scientists as the Sybil attack: a single adversary masquerading as many independent identities. The solution is not to regulate AI but to improve attribution and impose costs on communication. The answer here is cryptocurrency.

Why cryptocurrency? Easy. Cryptocurrency imposes cryptographically verifiable digital scarcity on a world where content generation is infinite and free. Hashes of images can be signed with digital signatures that resolve to known identities, like .ETH or .XCH public key addresses, so we can verify that the images we see were posted by entities we trust. We can use blockchains like certificate authorities to filter out unknown entities. We can use crypto, too, as a gateway to block access to our inboxes – imposing financial penalties on people who wish to contact us by metering our e-mail inboxes, for example, or whitelisting the addresses of people and services we want to let through. We can create real-world incentives, like paying with wallets at shops, that prove we are actual, flesh-and-blood individuals who were at a certain place at a certain time, and build up proof-of-human reputation scores. Introducing a fake image into the stream of commerce under your identity, and getting caught, will be a mark of shame that is reputationally attributable and inescapable. The list goes on.

Imposing these gates, roadblocks, and attributions in a sense not only fixes the AI problem – at least for now and for the foreseeable future, the AI cannot forge a digital signature – but also brings us back to a 1960s-era propaganda environment, with some modifications. Every statement, every image, will be expected to be paired with an identity – pseudonymous or not. State actors won’t be able to fake being grassroots and grassroots actors won’t be able to fake being states. Bot networks can be rendered prohibitively expensive to run.

Once again, rather than asking about the motivations of any entity, and rather than debating definitions of “misinfo,” “disinfo,” and “propaganda” that rely on meanings so watered-down as to be nearly meaningless, the only question about these images will become: “are they true, false, or misleading?” And the only question about identities online will be: “who is this, really?”

In a world where the amount of computing power – and silicon-based superintelligence running on that computing power – available to the everyday person with a laptop is exponentiating, it might not be the worst thing in the world to lash at least some portion of the Internet down to reality. Cryptocurrency is the technology that can do it, and it can do it without passing a single new regulation on AI or restriction on the freedom of speech – as long as U.S. regulators are willing to get out of the way.


William Allen writes:

Generally, yes, but I think you also need an economic piece (e.g. gating email with a micropayment) which needs a cryptographic state machine with money – i.e. a cryptocurrency – for such a system to be workable at scale with minimal human overhead. A cryptocurrency protocol conveniently bundles together the global state you need to know who is who and what has happened in the past, money, and all the other cryptographic primitives you’d need which are normally usable without blockchains. Without the money piece, you’re going to need to intermediate PayPal into everything, which hasn’t yet happened (so it probably won’t); without the state transition piece, you’ll have difficulty disintermediating a central counterparty as your certificate authority.
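The micropayment gate William describes can be sketched as a simple accept/reject rule: whitelisted senders pass free, while strangers must attach a deposit. Everything here – the type names, the fee unit, senders like “alice.eth” – is illustrative; a real system would verify the attached payment on-chain rather than trust a field on the message:

```go
package main

import "fmt"

// Inbox is a hypothetical payment-gated mailbox: senders not on the
// whitelist must attach a deposit before a message is accepted.
type Inbox struct {
	whitelist map[string]bool
	minFee    uint64 // required deposit (in some token's smallest unit)
}

// Message carries a claimed sender identity and an attached micropayment.
type Message struct {
	Sender string
	Fee    uint64
	Body   string
}

// Accept returns true if the message should reach the inbox:
// known correspondents pass free, strangers pay to get through.
func (in *Inbox) Accept(m Message) bool {
	if in.whitelist[m.Sender] {
		return true
	}
	return m.Fee >= in.minFee
}

func main() {
	inbox := &Inbox{
		whitelist: map[string]bool{"alice.eth": true},
		minFee:    100,
	}
	fmt.Println(inbox.Accept(Message{Sender: "alice.eth", Fee: 0}))      // true
	fmt.Println(inbox.Accept(Message{Sender: "botfarm.eth", Fee: 0}))    // false
	fmt.Println(inbox.Accept(Message{Sender: "stranger.eth", Fee: 100})) // true
}
```

The point of the design is economic: a single legitimate stranger pays a trivial fee once, but a bot network sending millions of messages pays it millions of times.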

Image licensed under the Pixabay License.