AI Breaks the World, Crypto Fixes It, Part IV: The Great Zoom Robbery

This is a follow up to three earlier posts:

  1. AI Breaks the World, Crypto Fixes It;
  2. AI Breaks the World, Crypto Fixes It, Part II: The “AI Misinformation” Problem Can Be Completely Solved by Cryptocurrency-Based “Proof of Human”; and
  3. AI Breaks the World, Crypto Fixes It, Part III: How Crypto Can Solve the Joe Biden AI Deepfake Problem.

And now for our fourth installment in the series, we encounter a story about an audacious heist in which an AI was used to steal $25 million from a Hong Kong company by faking the entire senior management team on a Zoom call:

A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.

The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday.

“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” senior superintendent Baron Chan Shun-ching told the city’s public broadcaster RTHK.

I have rammed this point home in my three prior posts, but will do so again: the advent of AI means we cannot trust anything we see online.

See e.g. this (unverified) report where an engineer allegedly used ChatGPT to convince desperate men to buy Soho House memberships to score a date, with an entirely AI-generated “8/10 brunette marketing specialist who went to Penn State.” She would supposedly say “you know, my favorite date is espresso martinis at Soho House” (cost of membership: $2,500-$4,800), the men would buy the memberships and she, not being real, would be a no-show.

It sounds unbelievable, sure. But considering we’ve got a verified report from CNN that a bunch of AI scammers managed to trick a company in Hong Kong into sending them $25 million, it’s a lot more believable today than it would have been a year ago. In this case, though, the alleged fraud was designed to juice the value of call options the engineer claimed to have bought on Soho House Inc.’s shares.

Even if untrue, the prior report would in principle be very easy and nearly free to replicate.

Somewhat more believable is this report, where an engineer apparently used ChatGPT to aggressively filter out dates and even to court the woman he eventually proposed to.

Bots, and now ChatGPT, are of course nothing new on dating apps. And on Tinder/Bumble/Hinge/whatever, there really isn’t a solution to this problem that cryptography can provide: if a single user is willing to lie about who they are in order to scale up their own personal communications capacity, you’re going to have a hard time fixing that with crypto. If you lend your identity to a bot, you’re deliberately tarnishing the integrity of your own identity, and the only way your counterparty can tell who they’re really talking to is by meeting you in person. I’m just glad I got off dating apps before they turned into a totally useless dystopian hellscape.

The question is how we raise the threshold of verifiability in unencrypted (telephone) and other mostly non-cryptographic (Zoom) communications protocols in settings, such as business, where there aren’t incentives to cheat in this way and where there are incentives for people to be jealous guardians of their private keys. In the last two weeks alone, AI has been used to interfere with the New Hampshire Democratic primary election and to steal $25 million from this Hong Kong company.

The only solution to this problem will involve baking crypto-protocols into everything.

Why crypto, you ask? Because, at least for now, a private key is the only thing in the world an AI can’t fake. And, for better or for worse, cryptocurrency is currently the only mass-market product in the world that puts self-managed PKI directly in the hands of millions of ordinary consumers and Internet users.

This will probably involve combining existing consumer crypto infrastructure like Metamask or Casa with hardware components, and integrating these into consumer Internet apps, requiring users, on login or when challenged, to sign all of their communications with digital signatures. It will also require (a) blocking anyone we don’t trust and (b) Keybase-style whitelisting of the people we do want to trust, such as our business contacts. That way, when we get four of our colleagues on a Zoom call and they’re all presenting valid digital signatures against public keys we have pre-loaded, we know that either we’re talking to the real deal or someone managed to get hold of all four private keys, a task which, while not impossible, can certainly be made extraordinarily difficult with proper key management.

In the future we will likely wind up needing to carry at least some private keys everywhere with us as we move through the world, much like Imperial officers in the Star Wars universe carry cryptographic “code cylinders” on their uniforms.

Regardless of what form the cryptographic UI for this eventually takes, of this I am quite sure: to be safe from AI we have to make all of our communications cryptographically secure. And by all I mean all. Phones, videoconferencing, fax, e-mail, RSS, news and photographic wire services. Everything. Given the pace at which AI is accelerating, we have to do so very, very quickly.
