Is AI dangerous? If you’ve been listening to the likes of Elon Musk or Sam Altman, you might think that, left to its own devices, AI is a Skynet-level threat to humanity. Thankfully, their fears aren’t matched by the broader consensus among AI researchers. We still don’t have an AI that can drive a car well enough to be road-legal, let alone take on humanity. So, while we do need to keep an eye on rogue AI, we should take these public declarations of fear from AI CEOs for what they really are: PR stunts. But that doesn’t mean AI isn’t dangerous. It has the potential to be one of the greatest tools of human depravity we have ever had the misfortune to invent, as one unfortunate CFO recently found out when his company lost $25,600,000 in an AI-powered heist. But is there a way to fight back against AI crimes like this?
So, how did an AI steal tens of millions of dollars? Well, it started with an email.
An unnamed Hong Kong-based CFO received a suspicious email, supposedly from his UK counterpart, asking him to conduct a secret transaction. He was no dummy and immediately suspected a phishing attempt. But the sender wanted to arrange a group Zoom call, and when the Hong Kong CFO joined it, he saw and heard several colleagues he recognised. With his concerns put to bed, they got to work organising the transfers, and he sent $25,600,000 in 15 separate transactions to the accounts specified by his UK counterpart.
No one had the faintest idea what had happened until another employee checked with head office and discovered that the UK office had made no such transaction request and had received no money. Imagine the nauseating feeling of devastation and guilt the CFO must have felt when he found out.
So what happened? Well, it turns out the only real person on that video call was the Hong Kong CFO. The rest were AI deepfakes (don’t know what deepfakes are? Click here to find out) built from publicly available footage of the UK CFO and the other employees on the call. With nothing more than that data, a basic AI program and some dastardly ingenuity, the scammers swiped enough cash to buy a brand-spanking-new megayacht!
Now, I can already hear some of you scoffing at this story and loudly declaring that this CFO must be an idiot, since deepfakes are easy to detect because they always look a little off. But you might want to hold your horses, because the statistics don’t agree with you. A recent study found that people cannot reliably detect deepfakes, meaning most of us are just guessing. Even more worrying, the same study found that people mistake deepfakes for authentic videos more often than the other way around. In other words, deepfakes are now extremely good at fooling you.
Considering that the video and audio quality of a supposedly intercontinental Zoom call would have been low enough to hide any giveaway artefacts, you can see how the CFO was deceived.
So, if deepfakes are already fooling us at such monumental scales, how can we fight back?
Well, there are two ways.
Firstly, you can poison the AI. You need a lot of data to make a convincing deepfake: as many photos as you can find, reams of video of how the person moves their face, and plenty of clear audio of them speaking. Most deepfakes get this data from publicly available sources, like social media posts uploaded by the person or the company they work for. However, there are programs out there, like Nightshade and PhotoGuard, that modify these files in ways we can’t detect but that scramble what the AI learns from them, rendering the deepfake useless. For example, Nightshade tricks the AI into thinking it is seeing something other than a face in a photo, and this misidentification can throw off the machine-learning models behind deepfakes.
Using these programs on every photo and video that you, or your employer, post of yourself online can help protect you from being cloned by a deepfake. This is far from foolproof, though: these programs are caught in a game of cat and mouse. The AIs are getting better at seeing through the modified files, and the poisoning tools must keep finding novel ways to trip them up.
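To give a feel for how this poisoning works under the hood, here is a deliberately toy sketch of the core trick: nudge every pixel by an imperceptibly small amount in exactly the direction that most damages the model’s judgement. This is a simplified, hypothetical illustration using a linear stand-in for a classifier (real tools like Nightshade attack deep networks and are far more sophisticated), but the gradient-sign idea is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": an 8x8 grayscale picture, flattened to 64 pixels in [0, 1].
image = rng.random(64)

# Toy stand-in for a face detector: score > 0 means "this is a face".
# Real deepfake pipelines use deep networks, but the attack idea carries over.
weights = rng.standard_normal(64)

def face_score(x):
    return float(weights @ x)

# FGSM-style poisoning: move each pixel a tiny step (epsilon) in the
# direction that lowers the "face" score. For this linear model, the
# gradient of the score with respect to the image is just `weights`,
# so we step against its sign and clip back to valid pixel values.
epsilon = 0.02
poisoned = np.clip(image - epsilon * np.sign(weights), 0.0, 1.0)

print(face_score(image), face_score(poisoned))
print(float(np.max(np.abs(poisoned - image))))  # no pixel moved more than epsilon
```

The point of the sketch is the asymmetry: to a human, a per-pixel change of 0.02 is invisible, but because every pixel conspires in the worst possible direction for the model, the model’s score drops sharply.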
The second, and arguably more robust, way to protect against deepfake scams is not to rely on a single, vulnerable form of identity verification. The Hong Kong CFO took the video call as sure-fire proof of identity and never called head office or anyone else at the UK branch for a second, independent confirmation. There are even programs that use public-key cryptography, where only the genuine person holds the private key, to verify someone’s identity online. Having multiple independent authentication steps like these makes pulling off such a scam near impossible, and it is something every corporation should be implementing immediately.
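The key-based verification idea can be sketched as a challenge-response check. The version below is a minimal, hypothetical illustration using a shared secret and Python’s standard `hmac` module; real deployments typically use private-key signatures issued through a PKI rather than a shared key, but the logic is the same: a deepfake on a video call cannot answer a fresh cryptographic challenge without the secret.

```python
import hmac
import hashlib
import secrets

# Assumed setup: the two offices exchanged this key securely in advance.
SHARED_KEY = secrets.token_bytes(32)

def sign_challenge(key: bytes, challenge: bytes) -> str:
    """The party proving their identity MACs the verifier's fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def verify_response(key: bytes, challenge: bytes, response: str) -> bool:
    """The verifier recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# The Hong Kong office issues a fresh random challenge for this transaction.
challenge = secrets.token_bytes(16)

# A genuine colleague, holding the key, can answer it.
response = sign_challenge(SHARED_KEY, challenge)
print(verify_response(SHARED_KEY, challenge, response))

# An impostor with a convincing face but the wrong key cannot.
fake_response = sign_challenge(secrets.token_bytes(32), challenge)
print(verify_response(SHARED_KEY, challenge, fake_response))
```

Because the challenge is random and fresh each time, a scammer cannot replay an old answer, no matter how convincing their deepfaked face and voice are on the call.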
So, next time you are on a Zoom call, or you answer a phone call from a colleague, family member or friend, remember that the person on the other end might not be who they seem to be. Especially if they ask you to secretly transfer $25,600,000 across 15 bank accounts you have never heard of.
Thanks for reading! Content like this doesn’t happen without your support. So, if you want to see more like this, don’t forget to Subscribe and follow me on BlueSky or X and help get the word out by hitting the share button below.
Sources: CNN, ArsTechnica, Web 3 Universe, Bloomberg, WE Forum, Norton, The Guardian, iScience, MIT Technology Review, 1Kosmos, IEEE Spectrum