The Back of the Envelope (a blog)

AI Breaks the World, Crypto Fixes It, Part VI: Hacking the 2024 Election With Biden Deepfakes (A Thought Experiment)

This is the sixth post in a six-part series, the five previous ones being:

  1. AI Breaks the World, Crypto Fixes It;
  2. AI Breaks the World, Crypto Fixes It, Part II: The “AI Misinformation” Problem Can Be Completely Solved by Cryptocurrency-Based “Proof of Human”;
  3. AI Breaks the World, Crypto Fixes It, Part III: How Crypto Can Solve the Joe Biden AI Deepfake Problem;
  4. AI Breaks the World, Crypto Fixes It, Part IV: The Great Zoom Robbery; and
  5. AI Breaks the World, Crypto Fixes It, Part V: OnlyFakes Requires New Forms of “Proof of Human.”

I had a very pleasant chat this morning with one of my counterparts in the antifraud division of a bank, who shall for present purposes remain nameless.

It was a very productive conversation. We agreed on a great many things. First, we agreed that biometrics is a workable authentication solution for devices but not a great authentication solution for one’s identity across multiple platforms. Second, we agreed that there is no “magic bullet” solution for identity, and that whenever a new gimmick other than strong cryptography has been introduced into the mix – cell-phone-based 2FA, say, or biometrics – it has been circumvented by determined actors without a huge amount of difficulty.

We also agreed that, given the pace at which AI is advancing, the usual ways we verify identity – hearing someone’s voice, for example, or relying on scanned copies of their identity documents – are going the way of the dinosaur, and that the only thing that will fix it is strong cryptography.

Most importantly, we agreed that, given the pace of AI development and the adoption of these tools by open-source actors, although there is a dim awareness of an “identity problem” brewing among the highest echelons of the American state, policymakers and business leaders are not meeting the problem with the urgency it demands.

We know, for example, that the White House understands the problem and at least partially understands what is needed to start providing a solution. See, e.g., the quote they supplied to the WSJ about what I jokingly referred to as a “Proof-of-Biden” application for “watermarking content” cryptographically…

…although the fact that they’re using the term “watermark” to describe the solution betrays a degree of ignorance of the true scale of the problem. If the only issue our society faced with AI were the need to determine which “Dark Brandon” memes emanated from POTUS and which emanated from people making fun of POTUS, then yes, a cryptographic watermark with a “Vote for Biden” browser extension might do the trick.
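For what it’s worth, that narrow scheme is easy enough to sketch. Here is a minimal Python illustration of a “watermark” understood as an ordinary detached signature checked by a browser extension, using the pyca/cryptography library; the key names and the notion of a published POTUS registry key are illustrative assumptions on my part, not any real White House design:

```python
# Minimal sketch: a cryptographic "watermark" as a detached ECDSA signature.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

potus_key = ec.generate_private_key(ec.SECP256K1())
POTUS_PUBKEY = potus_key.public_key()  # imagine this published in a public registry

meme = b"dark-brandon.png bytes"
watermark = potus_key.sign(meme, ec.ECDSA(hashes.SHA256()))

def extension_check(content: bytes, sig: bytes) -> bool:
    """What a 'Vote for Biden' browser extension would do before labeling content official."""
    try:
        POTUS_PUBKEY.verify(sig, content, ec.ECDSA(hashes.SHA256()))
        return True   # signed by the POTUS key: official
    except InvalidSignature:
        return False  # unsigned or altered: parody, fake, or both

print(extension_check(meme, watermark))         # True
print(extension_check(meme + b"!", watermark))  # False: any alteration breaks it
```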

The problem with AI, however, is not that people will post unauthorized Internet memes. It is that AI drives the cost of creating a personalized message, capable of being delivered in Joe Biden’s voice to every voter in the United States, down to near-zero. A foreign threat actor could conceivably hand-deliver campaign messaging that sounds like the President to every single household in the United States with almost no effort – messaging which voters could not distinguish from the real thing.

Hacking the 2024 election with AI: how it might be done

It is the Friday before election day, 2024. Donald J. Trump is ahead by four points nationally and in key battleground states. (This is a thought experiment, not an expression of hope or a prediction, so don’t shoot the messenger.)

In China, PLA Unit 61398 has spent the previous eight months cataloguing Americans’ political preferences and pairing those preferences with phone numbers. With the assistance of a secret large language model developed by the Chinese government, the PLA furiously devises propaganda, tailored to each individual voter, which it plans to deliver in Joe Biden’s voice to every American household and which paints the President in a maximally negative light. It does so following a successful penetration test earlier in the year, in which the PLA called thousands of New Hampshire voters with a simple robocall (the call actually happened, although it is not presently known who the perpetrators were).

That evening, at dinnertime, the phone rings. Every voter in America hears Joe Biden’s voice deliver a message which is tailored to terrify that individual voter and paint the President in the least flattering possible light.

Over the weekend, the campaigns duel with one another over social media. Biden’s campaign blames Donald Trump for the calls; Donald Trump’s campaign says that Biden was responsible and that, in any case, Biden is someone voters should fear. Campaign surrogates on social media accuse the “Deep State” of election interference. Voters, most of whom are not technologists and have just encountered AI deepfakery for the first time, go to the polls three days later, afraid and confused.

Regardless of who wins, America would lose. The losing side would accuse the winning side of election interference; both sides would understand that the election had actually been interfered with by somebody, although they would not know who had done so. On balance, trust in American democracy would be eroded significantly, more so than it has been already.

It doesn’t really matter who wins the election from a foreign adversary’s perspective. The confusion and the acrimony are the win. It is this threat – not the obviously fake social media post from an openly partisan influencer on Twitter – that we need to be preparing for, that mere watermarking is insufficient to address, and that only one technology, cryptocurrency-as-distributed-PKI, can be integrated into our communications systems quickly enough to ensure the integrity of all communications across our society.

In the AI age, we are hugely vulnerable to foreigners and criminals using these tools to impersonate us. At every level, with every transaction.

Integrating cryptocurrency PKI with our communications systems to defeat malicious AI would be the greatest national security win since the atom bomb.

We should start a new Manhattan Project to fortify our communications against these threats.

AI Breaks the World, Crypto Fixes it, Part V: OnlyFakes Requires New Forms of “Proof of Human”

I write this post for you from 30,000 feet above France, on my way back to America from another wonderful Satoshi Roundtable event in Dubai. We are about to descend into London, so I must keep my remarks short and put my laptop away in anticipation of the usual disembarkation scrum one encounters after a long flight.

This is the fifth post in a five-part series, the four previous ones being:

  1. AI Breaks the World, Crypto Fixes It;
  2. AI Breaks the World, Crypto Fixes It, Part II: The “AI Misinformation” Problem Can Be Completely Solved by Cryptocurrency-Based “Proof of Human”;
  3. AI Breaks the World, Crypto Fixes It, Part III: How Crypto Can Solve the Joe Biden AI Deepfake Problem; and
  4. AI Breaks the World, Crypto Fixes It, Part IV: The Great Zoom Robbery.

We learn today of a website called OnlyFakes, which exists for the sole purpose of producing fake ID cards to defeat the remote identity checks that cryptocurrency exchanges and other businesses perform for regulatory compliance purposes.

The use-case this will likely address first is hacking the KYC function of offshore crypto exchanges. Crypto exchanges are not bricks-and-mortar businesses. As such, when performing KYC checks they require users to present proof of identity and locality, most often in the form of government-issued identification cards and less often in the form of proof of address such as a bank statement or a utility bill.

See, e.g., this fake California ID generated by the app:

It looks a lot like the real thing, and is likely impossible for an exchange to distinguish from a genuine one.

The problem with conducting KYC verification in this way is not one which can be fixed by video calling, either. As we learned in the last entry in this series, which I wrote *checks watch* two days ago, scammers can and will also deepfake live over platforms like Zoom.

As the previous posts in this series have made clear, the problem of faking one’s identity on the web is not one which can be fixed from within the web. No unsigned communication made over the Internet can be believed anymore. Over the Internet the AI is too good – more human than human – and cannot offer “proof of human” as such. It is not only reasonable to expect that this will soon leak out of the web-world into the real world; it is inevitable. An AI which can convincingly fake a driver’s license can also convincingly fake a utility statement, or a bank check, or anything else. And since most utility statements are delivered by e-mail these days, a real statement and a false one will be printed on the same household printer and will likely be indistinguishable.

For now, there are only two things AI can’t fake: actual physical presence, and a digital signature made with a robust digital signature algorithm like ECDSA. We must combine these two things with physical hardware to create a multi-factor, multi-signature proof-of-human.
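To make the signature half of that combination concrete, here is a minimal sketch of a challenge-response proof of key possession in Python, using the pyca/cryptography library. The key handling and the challenge format are illustrative assumptions on my part, not a finished proof-of-human protocol; in practice the private key would live in tamper-resistant hardware rather than in program memory:

```python
# Minimal sketch: challenge-response proof that a counterparty controls a key.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# The human's long-lived keypair (in a real deployment, a hardware wallet).
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

# The verifier issues a fresh random challenge so recorded answers can't be replayed.
challenge = os.urandom(32)

# The prover signs the challenge with the key only they control.
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The verifier checks the signature against the public key it has on file.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("valid: counterparty controls the registered key")
except InvalidSignature:
    print("invalid: possible impersonation")
```

No AI, however capable, can produce that signature without holding the key; that is the entire trick.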

Our governments must embrace cryptocurrency technology because cryptocurrency, for all its faults, is the best – and only – PKI which can even begin to serve the “proof-of-human” function our societies need in a world turned upside down by AI.

AI Breaks the World, Crypto Fixes It, Part IV: The Great Zoom Robbery

This is a follow up to three earlier posts:

  1. AI Breaks the World, Crypto Fixes It;
  2. AI Breaks the World, Crypto Fixes It, Part II: The “AI Misinformation” Problem Can Be Completely Solved by Cryptocurrency-Based “Proof of Human”; and
  3. AI Breaks the World, Crypto Fixes It, Part III: How Crypto Can Solve the Joe Biden AI Deepfake Problem.

And now for our fourth installment in the series, we encounter a story about an audacious heist in which an AI was used to steal $25 million from a Hong Kong company by faking the entire senior management team on a Zoom call:

A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.

The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday.

“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” senior superintendent Baron Chan Shun-ching told the city’s public broadcaster RTHK.

I have rammed this point home in my three prior posts, but will do so again: the advent of AI means we cannot trust anything we see online.

See, e.g., this (unverified) report in which an engineer allegedly used ChatGPT to convince desperate men to buy Soho House memberships in hopes of scoring a date with an entirely AI-generated “8/10 brunette marketing specialist who went to Penn State.” She would supposedly say “you know, my favorite date is espresso martinis at Soho House” (cost of membership: $2,500-$4,800), the men would buy the memberships, and she, not being real, would be a no-show.

It sounds unbelievable, sure. But considering we’ve got a verified report from CNN that a bunch of AI scammers managed to trick a company in Hong Kong into sending them $25 million, it’s a lot more believable today than it would have been a year ago. Except, in this case, the alleged fraud was designed to juice the value of call options the engineer claimed to have bought on Soho House Inc.’s shares.

Even if untrue, the prior report would in principle be very easy and nearly free to replicate.

Somewhat more believable is this report, in which an engineer apparently used ChatGPT to aggressively filter prospective dates and even to court the woman he eventually proposed to.

Bots, of course, and ChatGPT are nothing new on dating apps. Also, on Tinder/Bumble/Hinge/whatever, there really isn’t a solution to this problem that cryptography can provide – if you have a single user willing to lie about who they are in order to scale up their own personal communications capacity, then you’re going to have a hard time fixing that with crypto. If you lend your identity to a bot, you’re deliberately tarnishing the integrity of your own identity. The only way you’re going to be able to tell who your counterparty is, if they’re doing that, is by meeting them in person. I’m just glad I got off of dating apps before they turned into a totally useless dystopian hellscape.

The question is how we increase the threshold of verifiability in unencrypted (telephone) and other mostly non-cryptographic (Zoom) communications protocols where there aren’t incentives to cheat in this way, and where there are incentives for people to be jealous guardians of their private keys, such as in business. Already in the last two weeks, AI has been used to interfere with the New Hampshire Democratic primary election and to steal $25 million from this Hong Kong company.

The only solution to this problem will involve baking crypto-protocols into everything.

Why crypto, you ask? Because, at least for now, a private key is the only thing in the world an AI can’t fake. And, for better or for worse, cryptocurrency is currently the only mass-market product in the world that puts self-managed PKI directly in the hands of millions of ordinary consumers and Internet users.

This will probably involve a combination of existing consumer crypto infrastructure, like Metamask or Casa, with hardware components, integrated into consumer Internet apps that require users, on login or when challenged, to sign their communications with digital signatures. It will also require (a) blocking anyone we don’t trust and (b) engaging in Keybase-style whitelisting of the people we do want to trust, such as our business contacts. That way, when we get four of our colleagues on a Zoom call and they are all presenting valid digital signatures against public keys we have pre-loaded, we know that either we are talking to the real deal or someone managed to get hold of all four private keys – a task which, while not impossible, can be made extraordinarily difficult with proper key management. A sketch of what that whitelist check might look like follows.
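Here, for instance, is a minimal Python sketch of such a whitelist check, using the pyca/cryptography library. The Participant shape, the trusted-keys dictionary, and the per-call challenge are illustrative assumptions on my part, not any real Zoom or Keybase API:

```python
# Minimal sketch: admit a video call only if every participant proves,
# by signing a fresh challenge, control of a pre-loaded whitelisted key.
import os
from dataclasses import dataclass
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

@dataclass
class Participant:
    name: str
    signature: bytes  # signature over this call's fresh challenge

def verify_call(participants, trusted_keys, challenge: bytes) -> bool:
    """Return True only if every participant proves control of a whitelisted key."""
    for p in participants:
        key = trusted_keys.get(p.name)
        if key is None:
            print(f"{p.name}: not on the whitelist, blocking.")
            return False
        try:
            key.verify(p.signature, challenge, ec.ECDSA(hashes.SHA256()))
        except InvalidSignature:
            print(f"{p.name}: invalid signature, possible deepfake.")
            return False
    return True

# Pre-load colleagues' public keys (the whitelist), then challenge them live.
keys = {name: ec.generate_private_key(ec.SECP256K1()) for name in ("CFO", "COO")}
trusted = {name: k.public_key() for name, k in keys.items()}
challenge = os.urandom(32)  # fresh per call, so old signatures can't be replayed
roster = [Participant(n, keys[n].sign(challenge, ec.ECDSA(hashes.SHA256())))
          for n in ("CFO", "COO")]
print(verify_call(roster, trusted, challenge))  # True only for the real colleagues
```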

In the future we will likely wind up needing to carry at least some private keys everywhere with us as we move through the world, much like Imperial officers in the Star Wars universe carry cryptographic “code cylinders” on their uniforms.

Regardless of what form the cryptographic UI for this eventually takes, of this I am quite sure: to be safe from AI we have to make all of our communications cryptographically secure. And by all I mean all. Phones, videoconferencing, fax, e-mail, RSS, news and photographic wire services. Everything. Given the pace at which AI is accelerating, we have to do so very, very quickly.

DAOs are not useful, and will not be useful, for online extremism

Earlier this week, WIRED magazine published an article alleging that DAOs are potentially the next major hub for coordinated extremism online.

The article leads with:

The year 2024 might be the one in which neo-Nazis, jihadists, and conspiracy theorists turn their utopian visions of creating their own self-governed states into reality—not offline, but in the form of Decentralized Autonomous Organizations (DAOs).

Pravda WIRED

The author of the article, Julia Ebner, is an academic “extremism researcher” who writes books on European political movements and has apparently “infiltrated” (read: “attended publicly advertised meetups and Discord audio chats of”) a few of them. These include very controversial, and very public, organizations like Les Identitaires and Reconquista Germanica.

Academic research of extremist groups of this kind is comparatively straightforward because, for the most part, participants in such groups are a bunch of LARPing dorks who post edgy content for public consumption with no opsec while trying their best to stay out of jail. One indication that an “extremist” group is possibly not as serious an enterprise as, say, Hamas or Hezbollah is that the servers the group utilizes are based in the United States. In these cases, the FBI can get a grand jury subpoena doxxing a user of those servers in the space of an afternoon, if it even needs one at all (many companies will voluntarily disclose these records in emergency situations posing a threat to life).

Reconquista Germanica would have been particularly vulnerable to this attack vector, as the organization ran itself from a Discord group, and Discord, Inc. is a San Francisco-headquartered social media company whose eponymous application displays and stores all user communications in the clear (i.e., unencrypted); these communications are thus freely disclosable to law enforcement, and often are disclosed. DAOs, too, overwhelmingly use Discord for community management and outreach, including the allegedly right-coded “Redacted Club DAO” named in the Wired article.

I should be more impressed with Ebner’s assertions about DAOs if she had (1) mentioned a “DAO” other than the ones which publicly advertise their Discord presence on Twitter, another U.S.-based platform. More impressive still would be (2) evidence, any evidence whatsoever, that any of the DAOs mentioned in the article employed cryptoprotocols, instead of Discord, to communicate. Most impressive would be (3) direct evidence that DAOs in particular were contemplated or being used effectively for nefarious purposes by such organizations. An example of a group that meets two of these three criteria would be the Taliban, which (1) doesn’t use Discord and (2) is known to have used cryptoprotocols, mainly WhatsApp, to coordinate its lightning strikes against Kabul and other major Afghan cities during the U.S. withdrawal from that country. As to (3), to my knowledge the Taliban, which enjoys total autonomy within the sovereign borders of Afghanistan and presumably is free to use any software tool it wants, does not use DAOs.

Ebner, writing in Wired, continues:

What are the stakes if trolling armies start cooperating via DAOs to launch election interference campaigns? The activities of extremist DAOs could challenge the rule of law, pose a threat to minority groups, and disrupt institutions that are currently considered fundamental pillars of democratic systems. Another risk is that DAOs can serve as safe havens for extremist movements by enabling users to circumvent government regulation and security services monitoring activities.

This is absurd.

Members of extremist groups of the type Ebner studies live and work freely in Western societies. They also happen to hold opinions that some members of polite society find repellent. Most of the time, at least in the U.S., holding extremist beliefs and expressing them is not a crime. If anything, having extremists post in Discord communities is useful as an early warning system for law enforcement, who monitor these forums; the only people who consistently argue that these communities’ very existence, even where legal, is dangerous to society come from academic/journalistic extremism and “misinformation studies” circles, ideological opponents of freedom of expression, and their political allies.

The reality of the situation is that, in the real world, if you are dumb enough to plan a serious crime or pose a serious challenge to rule of law on a public Discord, chances are good that law enforcement is all over it and that you will go to prison.

When we see largely peace-loving, crypto-nerd, not-racist “DAOs” use virtually identical communications facilities, we should not conclude that this makes crypto people extremists, or that it makes DAOs friendly to extremists, or even that DAOs are appropriate for extremists. It means that DAOs – like the many other online communities, including political movements, which use Discord and make it one of the most popular social media applications in the world – emphasize participation over secrecy. Adding a DAO into the mix does not create a “safe haven” from anything, and certainly doesn’t “circumvent government regulation and security services (sic) monitoring activities.” Quite the opposite, in fact.

What DAOs actually do

I have some experience with DAOs, having helped design the first Ethereum prototype of one in 2014 and having advised a number of others since. Their principal role is not communication. It is to manage on-chain smart contracts and to decide when certain administrator-level permissions on those contracts – setting interest rates, say, or changing the feature set – should be exercised, amended, added, or removed.

DAOs are not “self-governed states.” They are self-governed software applications. Most of the time, DAOs are half-baked. The DAO part of the puzzle is often simply bolted onto an application to justify the sale of a cryptotoken to pre-fund the DAO founders so that they can get some runway to sling new code and figure out product-market fit.

Rarely, as in the case of projects like MakerDAO, the project has tight product-market fit on the first attempt or very close to it, and token holders will periodically swing in to vote on a proposal. Even in those cases, the “governance portals” where relevant communications on these votes take place exist in the open and can be observed even by token holders who do not want to “dox” themselves by creating a user account in order to participate – although many large token holders who are in a position to dictate the outcome of proposals choose to dox themselves anyway.

As a general rule, by the time a proposal for such a change actually gets agreed on and implemented, considerable discussion about the proposal has already occurred. These debates are, overwhelmingly, conducted on the surface web, in the clear, where they can be monitored by law enforcement agencies with very little effort on the agency’s part, if desired.

The social media piece of the puzzle is no different from current social media communications. The DAO part is even more poorly suited to criminality and concealment, given that (a) smart contracts are all publicly examinable onchain, (b) blockchain transaction data on the most popular EVM chains, where the overwhelming majority of DAOs live, is unencrypted and is ingested by the massive machine-learning analytics engines of companies like Chainalysis, which work directly with law enforcement on a daily basis, and (c) for the most part the only thing DAOs do is coordinate on smart contract state changes.

These state changes are communicated to the chain only after a rough consensus is reached among the voting DAO participants, which often involves a lengthy, drawn-out debate about boring financial, cryptoeconomic, and computer science issues. By contrast, the dissemination of “extremist” thought on the Web usually relies on transmitting edgy memes and propaganda at maximum volume and velocity with minimum interference. That is not economically practicable onchain – filling up a block with a gif would be prohibitively expensive – nor is propaganda the kind of content that requires consensus before pushing an update transaction to a globally distributed finite-state machine with a money-token. Even e-mail would be more effective for this use-case.

If “extremists” want a tool to spread their propaganda, a DAO is not something they should want to use. It’s the wrong tool for spreading propaganda. It’s the right tool for reaching consensus on whether to move a smart contract’s interest rate 50 bps, and for confirming that consensus by furnishing a cryptographically secure proof of voting power that will be executed automatically by the underlying L1 blockchain once a certain threshold of votes has been attained. A toy sketch of that flow appears below.
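To make the point concrete, here is a minimal Python sketch of the whole of what a typical governance DAO coordinates – a token-weighted vote that, once a quorum is met, executes a single parameter change. Every name and number here is an illustrative assumption, and on a real chain the vote and the execution would be signed transactions processed by the L1, not local method calls:

```python
# Toy model of DAO governance: token-weighted voting over one parameter change.
TOTAL_SUPPLY = 1_000_000  # governance tokens outstanding
QUORUM = 0.5              # more than 50% of supply must vote yes

class LendingContract:
    """Stands in for an on-chain smart contract with an admin-set rate."""
    def __init__(self, rate_bps: int):
        self.rate_bps = rate_bps

class Proposal:
    """Stands in for an on-chain governance proposal."""
    def __init__(self, target: LendingContract, new_rate_bps: int):
        self.target = target
        self.new_rate_bps = new_rate_bps
        self.yes_votes = 0

    def vote_yes(self, token_balance: int) -> None:
        # On-chain, each vote is a signed transaction: the signature is the
        # cryptographically secure proof of voting power mentioned above.
        self.yes_votes += token_balance

    def execute(self) -> bool:
        # The chain executes the change automatically once quorum is met.
        if self.yes_votes / TOTAL_SUPPLY > QUORUM:
            self.target.rate_bps = self.new_rate_bps
            return True
        return False

pool = LendingContract(rate_bps=300)
prop = Proposal(pool, new_rate_bps=350)  # i.e., move the rate 50 bps
prop.vote_yes(600_000)                   # a large token holder votes yes
assert prop.execute() and pool.rate_bps == 350
```

Nothing in that flow is useful for pumping out propaganda; it is bookkeeping with signatures.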

When extremist groups like the Taliban – rather than a bunch of loser schizoposters on Discord – start using DAOs instead of WhatsApp for their communications (something which, for the reasons given above, will likely never happen), we can have this conversation. For now, anyone who knows anything about DAOs knows they are neither used by nor useful to terrorists or extremists in any way, shape or form. Real journalism of the type practiced by our fathers, and our fathers’ fathers before them, is not the same thing as making up random, defamatory, fact-free conjecture about an industry of brilliant hackers who are trying to make the world a better place, as Wired has done in this instance.

AI Breaks the World, Crypto Fixes It, Part III: How Crypto Can Solve the Joe Biden AI Deepfake Problem

This is a direct follow-up to two previous posts: first, The “AI Misinformation” problem can be completely solved by cryptocurrency-based “proof of human”, and second, AI is digital abundance, crypto is digital scarcity, and the world needs both.

Yesterday, U.S. Senator Elizabeth Warren posted the following on Twitter:

Senator Warren can be forgiven for thinking that crypto is money. Those of us who have worked in the space for a while know that it is much more than that. “Crypto” as such is not merely money; it is the union of a finite state machine with a unit of account that humans wishing to access that state machine can treat as money – or, as Ian Grigg put it more simply and elegantly ten years ago, “a state machine with money.”

This access, and indeed all aspects of the permissioning and functioning of such systems, can be linked to the payment of fees, and the payment of fees is secured not by accounting and billing departments but by public-private key authentication.

Which brings us to today’s story about another prominent Democrat, Joe Biden. Or rather, a story about an AI apparition of Joe Biden:

New Hampshire is having a primary election this week. The Democratic iteration has been particularly controversial, as the national party successfully blocked RFK Jr. from appearing on the ballot and then attempted to shut the primary down. As a result, Joe Biden does not appear on the primary ballot and is instead running a write-in campaign to secure the state’s votes for his reelection.

Someone who was clearly unhappy about this state of affairs launched an extremely illegal voter-suppression campaign in which an AI-generated, robocalled voice pretended to be Joe Biden and instructed voters to stay at home on Tuesday.

How does this relate to crypto, you ask? First, read the two blog posts I linked to above. Second, understand from those posts that the problem of AI abundance is not going to go away – it is going to get worse, and its output is going to become considerably more difficult to distinguish from the real thing.

There are several possible responses we, as a society, can mount to AI deepfakery. One of them is to ban AI, which is unlikely to do much good in the long run, considering that many countries will not ban it and that it is, generally speaking, impossible to put this technological genie back in its lamp unless our civilization collapses and the tech is forgotten.

We must therefore find a way to filter the AI out of our lives. The only way to do this is to use technology which an AI cannot, presently or for the foreseeable future, fake. One such technology is a public-private key.

Here’s how such an app might work. Users could set up a crypto wallet address at a .BTC or .ETH domain. That address could be integrated into the phone system and into a social phone book where users add their pubkey addresses, communicate them to their contacts on their iPhones, and accept their contacts’ pubkey addresses in return. Every “call” made to a phone line would come with corresponding cryptographic proof that the call had been signed by the caller’s private key over the date/time or some other timestamped data which could not be replayed. Users could add their favorite politicians’ public key addresses (such as Elizabeth Warren’s or Joe Biden’s, if that’s your thing) to their whitelisted addresses, and public chains could act as easily verifiable registries of those addresses. Thus any phone number which called and held itself out as Joe Biden, but wasn’t Joe Biden, would be instantly flagged and/or blocked. A toy sketch of how such call-screening might work appears below.
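Here is a minimal Python sketch of that call-screening flow, leaning on the pyca/cryptography library. The whitelist dictionary, the payload format, and the thirty-second freshness window are all illustrative assumptions on my part, not a real telephony API:

```python
# Minimal sketch: signed caller ID checked against a whitelist of public keys,
# with a timestamp in the signed payload so proofs cannot be replayed later.
import time
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

MAX_AGE_SECONDS = 30  # reject stale proofs to prevent replay attacks
whitelist = {}        # display name -> public key, pre-loaded from a public-chain registry

def make_call_proof(private_key, caller_id: str):
    # The caller signs their ID plus the current time with their private key.
    payload = f"{caller_id}|{int(time.time())}".encode()
    return payload, private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

def screen_call(caller_id: str, payload: bytes, signature: bytes) -> str:
    key = whitelist.get(caller_id)
    if key is None:
        return "BLOCK: caller not on the whitelist"
    claimed_id, ts = payload.decode().rsplit("|", 1)
    if claimed_id != caller_id or time.time() - int(ts) > MAX_AGE_SECONDS:
        return "BLOCK: stale or mismatched proof (possible replay)"
    try:
        key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return "RING: verified caller"
    except InvalidSignature:
        return "BLOCK: bad signature (possible deepfake)"

# Example: pre-load "Joe Biden"'s public key, then screen an incoming call.
biden_key = ec.generate_private_key(ec.SECP256K1())
whitelist["Joe Biden"] = biden_key.public_key()
payload, sig = make_call_proof(biden_key, "Joe Biden")
print(screen_call("Joe Biden", payload, sig))  # RING: verified caller
```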

Yes, cryptocurrency is a money technology. It is used as a substitute for money because its cryptographic attributes make it an outstanding, nearly-unbeatable truth technology, and since money is one of those fields where people lie whenever they can get away with it, money was naturally the first place the technology found widespread use. But truth is a problem in many domains, and with the spread of human-equivalent AI fakery, truth will soon be a problem in every domain. We can use cryptocurrency to prove truths other than the answer to the double-spending problem, the first thing that crypto fixed.

As a truth technology, cryptocurrency has one key advantage over any new cryptosystem designed from scratch: cryptocurrency is already in the hands of millions of Americans, waiting for legacy applications and infrastructure to incorporate it into their services and thereby improve them. Onboarding is not required, because it has already happened.

AI and crypto are mirror image technologies; crypto fixes the problems that AI creates. If we want to secure the 2024 election, the United States needs to adopt and integrate public blockchains into 20th century technologies and do so with dispatch, instead of trying to ban them.