AI Breaks the World, Crypto Fixes It, Part VI: Hacking the 2024 Election With Biden Deepfakes (A Thought Experiment)

This is the sixth post in a six-part series, the five previous ones being:

  1. AI Breaks the World, Crypto Fixes It;
  2. AI Breaks the World, Crypto Fixes It, Part II: The “AI Misinformation” Problem Can Be Completely Solved by Cryptocurrency-Based “Proof of Human”;
  3. AI Breaks the World, Crypto Fixes It, Part III: How Crypto Can Solve the Joe Biden AI Deepfake Problem;
  4. AI Breaks the World, Crypto Fixes It, Part IV: The Great Zoom Robbery; and
  5. AI Breaks the World, Crypto Fixes It, Part V: OnlyFakes Requires New Forms of “Proof of Human.”

I had a very pleasant chat with one of my counterparts in the antifraud division of a bank this morning, who shall for present purposes remain nameless.

It was a very productive conversation. We agreed on a great many things. First, we agreed that biometrics work well for authenticating to a device, but poorly for proving one’s identity across multiple platforms. Second, we agreed that there is no “magic bullet” for identity, and that whenever a new gimmick other than strong cryptography is introduced into the mix – cell-phone-based 2FA or biometrics, say – determined actors can circumvent it without a huge amount of difficulty.

We also agreed that, given the pace at which AI is advancing, the usual ways we verify identity – hearing someone’s voice, for example, or relying on scanned copies of their identity documents – are going the way of the dinosaur, and that the only thing that will fix this is strong cryptography.

Most importantly, we agreed that, given the pace of A.I. development and the adoption of these tools by open-source actors, although there is a dim awareness of an “identity problem” brewing among the highest echelons of the American state, policymakers and business leaders are not meeting the problem with the urgency it demands.

We know, for example, that the White House understands the problem and at least partially understands what is needed to start providing a solution. See, e.g., this quote they supplied to the WSJ about what I jokingly referred to as a “Proof-of-Biden” application for “watermarking content” cryptographically…

…although the fact that they’re using the term “watermark” to describe the solution betrays a degree of ignorance of the true scale of the problem. If the only issue our society faced with AI were that we need to determine which “Dark Brandon” meme emanated from POTUS vs those which emanated from people making fun of POTUS, then yes, a cryptographic watermark with a “Vote for Biden” browser extension might do the trick.
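For what it’s worth, the verification step behind such a watermark is not complicated. Here is a minimal sketch in Python, using a shared-secret HMAC as a stand-in for the asymmetric signature scheme (e.g. Ed25519) a real deployment would use so that anyone could verify with a public key; the key and content here are, of course, hypothetical:

```python
import hmac
import hashlib

# Illustrative only: a real watermark would use an asymmetric signature so
# that verification needs only a public key. An HMAC stands in for the
# signing step here, and this key is a made-up placeholder.
SIGNING_KEY = b"hypothetical-white-house-signing-key"

def sign_content(content: bytes, key: bytes) -> str:
    """Produce a hex 'watermark' tag binding the key to the content."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

meme = b"Dark Brandon says: vote early."
tag = sign_content(meme, SIGNING_KEY)

assert verify_content(meme, tag, SIGNING_KEY)             # authentic content passes
assert not verify_content(meme + b"!", tag, SIGNING_KEY)  # any tampering fails
```

A browser extension doing this check could distinguish the official meme from the parody – which is precisely, and only, the problem watermarking solves.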

The problem with AI, however, is not that people will post unauthorized Internet memes. It is that AI drives the cost of creating a personalized message in Joe Biden’s voice, for every voter in the United States, down to near-zero. A foreign threat actor could conceivably deliver campaign messaging that sounds like the President to every single household in the United States at almost no cost, messaging which voters could not distinguish from the real thing.

Hacking the 2024 election with AI: how it might be done

It is the Friday before election day, 2024. Donald J. Trump is ahead by four points nationally and in key battleground states. (This is a thought experiment, not an expression of hope or a prediction, so don’t shoot the messenger.)

In China, PLA Unit 61398 has spent the previous eight months cataloguing Americans’ political preferences and pairing those preferences with phone numbers. With the assistance of a secret large language model developed by the Chinese government, the PLA furiously devises propaganda, tailored to each individual voter, which it plans to deliver in Joe Biden’s voice to every American household and which paints the President in a maximally negative light. It does so on the strength of a successful penetration test earlier in the year, in which the PLA called thousands of New Hampshire voters with a simple robocall (this actually happened, although it is not presently known who the perpetrators were).

That evening, at dinnertime, the phone rings. Every voter in America hears Joe Biden’s voice deliver a message which is tailored to terrify that individual voter and paint the President in the least flattering possible light.

Over the weekend, the campaigns duel with one another over social media. Biden’s campaign blames Donald Trump for the calls; Donald Trump’s campaign says that Biden was responsible and that, in any case, Biden is someone voters should fear. Campaign surrogates on social media accuse the “Deep State” of election interference. Voters – who, not being technologists, are encountering AI deepfakery for the first time – go to the polls the following Tuesday, afraid and confused.

Regardless of who wins, America would lose. The losing side would accuse the winning side of election interference; both sides would understand that the election had actually been interfered with by somebody, although they would not know who had done the interfering. On balance, trust in American democracy would be eroded significantly, more so than it has been already.

From a foreign adversary’s perspective, it doesn’t really matter who wins the election. The confusion and the acrimony are the win. It is this threat, not the obviously fake social media post from an openly partisan influencer on Twitter, that we need to be preparing for; mere watermarking is insufficient to address it, and only one technology – cryptocurrency-as-distributed-PKI – can be integrated with our communications systems quickly enough to ensure the integrity of all communications across our society.
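To make the contrast with mere watermarking concrete, here is a sketch of what distributed-PKI verification of an incoming call might look like. A plain dictionary stands in for the on-chain key registry, an HMAC stands in for a real digital signature, and every identifier below is hypothetical; the point is the flow, not the cryptographic details:

```python
import hmac
import hashlib

# A blockchain-based PKI would publish each identity's *public* key
# on-chain; a dict stands in for that registry here, and an HMAC
# stands in for an asymmetric signature. All names are made up.
ONCHAIN_REGISTRY = {"potus.eth": b"key-published-on-chain"}

def sign(message: bytes, key: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_caller(claimed_id: str, message: bytes, signature: str) -> bool:
    """Look up the claimed identity's registered key and check the
    message's signature; unknown identities fail closed."""
    key = ONCHAIN_REGISTRY.get(claimed_id)
    if key is None:
        return False
    return hmac.compare_digest(sign(message, key), signature)

msg = b"This is Joe Biden with an important message."
good_sig = sign(msg, ONCHAIN_REGISTRY["potus.eth"])

assert verify_caller("potus.eth", msg, good_sig)         # legitimate call verifies
assert not verify_caller("potus.eth", msg, "deadbeef")   # deepfake robocall fails
assert not verify_caller("pla61398.eth", msg, good_sig)  # unregistered identity fails
```

Note what the registry lookup buys you that a watermark alone cannot: the deepfake robocall fails not because its audio looks wrong, but because its sender cannot produce a signature matching any key the claimed identity has published.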

In the AI age, we are hugely vulnerable to foreigners and criminals using these tools to impersonate us. At every level, with every transaction.

Integrating cryptocurrency PKI with our communications systems to defeat malicious AI would be the greatest national security win since the atom bomb.

We should start a new Manhattan Project to fortify our communications from these threats.