It was last spring when Paddric Fitzgerald received a telephone call at work. He had been playing music via his phone, so when he picked up, the voice of his daughter screaming that she had been kidnapped erupted over the speakers.
“Everyone has those points in their lives like ‘Oh, that moment I almost drowned as a kid’,” he says. “It was one of the most emotionally scarring days of my life.”
Declining an offer of a firearm from a colleague, Fitzgerald, a shop manager based in the western US, raced to get cash out from a bank, while staying on the phone.
“[His daughter] was screaming in the background, saying they’d cut her while I was waiting in line,” he says. “I was going to give everything that I have financially.”
It was only a chance text from his daughter that revealed that the voice on the phone didn’t belong to her. It was a remarkably cruel and elaborate scam generated with artificial intelligence.
Fitzgerald’s story is a terrifying example of how AI has become a powerful new weapon for scammers, forcing banks and fintechs to invest in the technology to keep pace in a high-tech arms race.
“I had no protection over my child in that moment — a year later, I’d love to find that person and just make them realise how evil what they did was, and they did it with keystrokes,” says Fitzgerald. “Are we really that advanced as a society if we can do that?”
The continued evolution and uptake of the technology means scammers do not just pose a threat to the unaware or vulnerable. Even cautious consumers are at risk of huge financial losses from AI-powered fraud. FT Money explores the latest developments.
Increasing sophistication
Identifying the scale of AI use by scammers is a difficult task, says Alex West, banking and payments fraud specialist at consultant PwC. He was one of the authors of a report into the impact of AI on fraud and scams, published last December in collaboration with cross-industry coalition Stop Scams UK. It identified the kind of “voice cloning” that targeted Fitzgerald as one of the biggest ways in which criminals are expected to use AI.
“Scammers are already very successful, and it could be that they just don’t need to use this type of tech, or it could be that they are using AI and we just aren’t able to distinguish when it has been used,” he says. “[But] it’s clearly going to drive an increase in more sophisticated scam attempts.”
Steve Cornwell, head of fraud risk at high street lender TSB, says the rising sophistication of the technology is a major worry for banks.
“If you think of the way Generative AI is coming along, how long [is it] before that AI solution could have a real-time conversation with you [using] a synthetic voice?” he says.
Figures from banking industry trade body UK Finance show a welcome trend, with fraud losses falling by 8 per cent year on year in 2022.
But one senior politician who did not wish to be named says that increased adoption of AI — OpenAI’s ChatGPT reached around 100mn monthly users in two months — could reverse that trend.
“Scammers are very well financed and entrepreneurial,” the person says. “That’s the thing I’m concerned about.”
Data from Cifas, a not-for-profit fraud prevention service in the UK, also gives cause for concern. Its figures for 2022 show identity fraud rose by nearly a quarter, while reports of AI tools being used to try to fool banks’ systems increased by 84 per cent.
“We’re seeing an increased use of deepfake images, videos and audio being used during application processes, along with synthetic identities being identified as a result of ‘liveness’ checks that are now being carried out at the application stage,” warns Stephen Dalton, director of intelligence at Cifas.
Speaking at Davos on Wednesday, Mary Callahan Erdoes, JPMorgan’s head of asset and wealth management, said the use of AI by cyber criminals was a big concern. The bank has spent $15bn a year on technology in recent years and employs 62,000 technologists, many of them focused solely on combating the rise in cyber crime.
“The fraudsters get smarter, savvier, quicker, more devious, more mischievous,” she added.
PwC and Stop Scams also identified artificially generated videos, better known as deepfakes, as a major risk. The technology, which first emerged in 2017, has rapidly advanced, says Henry Ajder, an expert on AI-generated media, who has advised companies including Meta, Adobe and EY.
“What’s happened in the last 18 months is the equivalent of like two decades of progress compared to the previous four years,” he says. “The barrier to entry is much lower than it was.”
The quality of these videos has improved remarkably, says Andrew Bud, chief executive and founder of online identity verification provider iProov. He points to a recent study which found that more than three-quarters of participants were unable to identify deepfakes.
“Good quality deepfakes cost about $150 on the dark web,” he continues. “You have a whole supply chain developing for AI-supported fraud, with R&D departments who build sophisticated tools and monetise them on the dark web.”
Natalie Kelly, chief risk officer for Visa Europe, warns there is a constellation of criminal-focused systems, such as WormGPT, FraudGPT and DarkBART. She says: “It can be hard to tell the authentic from the artificial these days.”
Using those tools, available via the dark web, and communicating via dedicated hacking forums and messaging apps, criminals are able to offer malware-writing services or advanced phishing emails.
How the scourge spreads
Financial institutions have long criticised social media platforms as vectors for fraud. Last summer, a deepfake of money saving expert Martin Lewis and X owner Elon Musk spread across social media, promoting a product it referred to as “Quantum AI”.
Lewis himself took to the X platform, formerly called Twitter, in July to warn about the scam. Some of the videos, aimed at a British audience, featured apparent BBC broadcasts, which were deepfakes of prime minister Rishi Sunak extolling the benefits of Quantum AI.
While a number of the videos have been removed or are inactive, other accounts simply copy and paste the same material.
The AI is not perfect. In one video which has now been removed, the purported Sunak stumbles over the pronunciation of words like “provided”.
And despite the high-tech name, the operation is surprisingly manual. Links from the deepfakes lead people to hand over their telephone numbers. Call centre operatives then take over, persuading people to hand over money.
Nevertheless, West emphasises that for criminals, scams are a volume game, and AI only needs to tip the balance in a small share of cases to make the effort worthwhile.
“Making content more believable — and convincing just a small percentage more people — can have a big pay-off for the fraudster,” he says.
One such victim was a former medical assistant in the US, who fell for an investment scammer on X using AI to impersonate Elon Musk.
“This started back in March, and we only became aware in August, after she had already taken very large sums out of her retirement account to try to pay [the investment] account,” says one family member.
The approach began with direct messages, but the criminal also used a filter to take on Musk’s appearance in video calls, convincing the victim to hand over almost $400,000 that she believed was being invested in X.
Meanwhile, on Alphabet’s YouTube, a spate of fake bitcoin giveaways featuring an AI-generated Michael Saylor led the former MicroStrategy chief executive to release a warning on X. “Be careful out there, and remember there is no such thing as a free lunch.” The deepfakes, posted by a host of accounts which have since been banned, were labelled as “live” videos to make them more believable.
Ajder says platforms have taken steps to fight back against the increasing flow of AI-generated content. In September, TikTok announced users would be required to label AI-generated content, while Google’s DeepMind announced a watermark for AI images in August.
But Ajder is also wary of the record of social media companies, which have often implemented apparently clear policies in a piecemeal fashion. A lack of resources leads to ineffective enforcement, he says.
The UK government’s stance on AI and fraud has been mixed. In a July speech on regulating new technologies, Financial Conduct Authority chief executive Nikhil Rathi highlighted the potential impact of AI on “cyber fraud, cyber attacks and identity fraud”. At a Treasury select committee hearing in December, he also warned that criminals were “making unfettered use of AI” to manipulate markets.
The FCA says that “as AI is further adopted, investment in fraud prevention and operational and cyber resilience will have to speed up”.
But the government did not explicitly mention the technology in its anti-fraud strategy last May or in a voluntary “online fraud charter” for Big Tech platforms revealed in November.
The Home Office says that it is “working with our partners across government, law enforcement and the private sector to further drive the use of AI to tackle crime and protect the public.”
Meta says that deepfake ads are not allowed on its platforms, and it removes such content when it is brought to its attention, adding that the company is “constantly working to improve our systems.”
YouTube’s misinformation policy bans doctored or manipulated videos, and the platform announced last year that it would begin requiring creators to disclose realistic altered or AI-generated material.
Fighting back with AI
The situation is not entirely bleak. Although scammers’ use of AI is on the rise, so is its adoption by institutions, ranging from banks, Visa and Mastercard to dedicated technology companies.
“If the human eye can’t tell if it’s real, it’s not game over,” says Bud at iProov. “It’s a cat-and-mouse game, and we’re evolving as fast as the bad guys to stay ahead.”
He says that his company’s technology, which is designed to help combat deepfakes and other forgeries, has been used by clients including the California Department of Motor Vehicles, the UK Government Digital Service and major banks in South America.
Another start-up using AI to counter fraud is US-based Catch, which aims to help vulnerable adults by detecting email scams, explaining the red flags and recommending next steps.
“The cash that [older adults] tend to lose is a lot more valuable to them — on average the cheque size they lose is higher and if they’re retired, they don’t have the time to make that money back,” says co-founder Uri Pearl.
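To make the idea concrete, here is a minimal, hypothetical sketch of what rule-based red-flag screening of a suspicious email might look like. It is not Catch’s actual system: the keyword patterns, the `screen_email` function and the sample message are illustrative assumptions only, and a real product would combine far richer signals, from sender history to language models.

```python
# Hypothetical sketch of rule-based red-flag screening for a suspicious email.
# Not any vendor's real system; the patterns below are toy heuristics.
import re

RED_FLAGS = {
    "urgency": re.compile(r"\b(urgent|immediately|act now|within 24 hours)\b", re.I),
    "payment_request": re.compile(r"\b(gift card|wire transfer|bitcoin|crypto|bank transfer)\b", re.I),
    "credential_request": re.compile(r"\b(password|one-time code|social security|PIN)\b", re.I),
}

def screen_email(sender: str, body: str) -> dict:
    """Return the red flags found in an email plus a suggested next step."""
    flags = [name for name, pattern in RED_FLAGS.items() if pattern.search(body)]
    # A message invoking a bank or tax authority but sent from a free mailbox is another classic tell.
    if re.search(r"\b(bank|hmrc|irs)\b", body, re.I) and sender.lower().endswith(("@gmail.com", "@outlook.com")):
        flags.append("impersonation_from_free_mailbox")
    advice = ("Do not reply or pay; contact the organisation on a number you already trust."
              if flags else "No obvious red flags, but stay cautious.")
    return {"flags": flags, "advice": advice}

print(screen_email(
    sender="security@gmail.com",
    body="Your bank account is locked. Act now and confirm your password and PIN.",
))
```

Even a toy screen like this illustrates the principle Pearl describes: surface the red flags, explain them, and steer the recipient towards a safer next step.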
AI is also being used by banks to support an army of staff assessing potential breaches of anti-money laundering regulations.
“There are lots of very cool small companies coming up in the area of ‘know your customer’,” says Gudmundur Kristjansson, founder and chief executive of Icelandic fintech Lucinity. “We’re doing a lot of development with generative AI.”
One of Lucinity’s products, nicknamed Lucy, takes data and crunches it into an easily readable format, speeding up what has traditionally been a highly manual process of monitoring transactions.
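As a rough illustration of what “crunching data into an easily readable format” can mean in practice, the sketch below condenses a handful of outbound wires into a one-line narrative per account. It is a hypothetical toy, not Lucinity’s product: the `summarise` function, the sample transactions and the 10,000 reporting threshold are assumptions for illustration, and the real system reportedly layers generative AI on top of far larger monitoring datasets.

```python
# Hypothetical sketch: turn raw transaction records into a short, reviewer-friendly summary.
from collections import defaultdict

# Illustrative data only: three outbound wires from one account, each just under 10,000.
transactions = [
    {"account": "A-100", "amount": 9500, "country": "IS", "type": "wire_out"},
    {"account": "A-100", "amount": 9700, "country": "IS", "type": "wire_out"},
    {"account": "A-100", "amount": 9800, "country": "LT", "type": "wire_out"},
]

def summarise(txns: list) -> str:
    """Condense a batch of transactions into a narrative a reviewer can read at a glance."""
    by_account = defaultdict(list)
    for t in txns:
        by_account[t["account"]].append(t)
    lines = []
    for account, items in by_account.items():
        total = sum(t["amount"] for t in items)
        countries = ", ".join(sorted({t["country"] for t in items}))
        near_threshold = sum(1 for t in items if 9000 <= t["amount"] < 10000)
        lines.append(
            f"Account {account}: {len(items)} outbound wires totalling {total:,} "
            f"to {countries}; {near_threshold} sit just under the 10,000 reporting threshold."
        )
    return "\n".join(lines)

print(summarise(transactions))
```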
Loss of trust
But even these advances may not be able to defend against some areas of attack.
“For voice, it seems to be game over — it’s increasingly clear there is no defence,” says Bud, as the small amount of data in audio files makes it hard for tech companies to distinguish the real from the fake.
And the impact on victims of AI-driven scams goes beyond financial losses, potential or real. Fitzgerald says that his experience has soured his view of technology. He avoids ecommerce and most social media platforms, and says he is more comfortable withdrawing money at the bank and spending it than using a card.
“That phone call made me realise how vulnerable I am and how vulnerable our kids are,” he says. “I didn’t understand it was a possibility that could have happened.”
Human error
AI has provided fraudsters with powerful new technology. But one of the factors that enables scams remains the same as ever.
“Humans are always the weakest link,” says Natalie Kelly, chief risk officer for Europe at payments group Visa. “You can’t always protect yourself — the most common misconception is that you’ll never fall for a scam.”
Scammers like to target our emotions, she adds, with the vulnerable especially at risk.
Uri Pearl, co-founder of fraud detection software Catch, says that romance and relationship scams are particularly devastating.
“They can last for weeks and they are the most dangerous. Our system does catch them but a lot of the time, the victim themselves doesn’t believe it even if their family members tell them.”
The use of systems like OpenAI’s ChatGPT has improved the quality of language in the emails sent in these scams, he adds, although a large number are still written in broken English.
“I imagine those are going to start to decline faster as the well designed ones rise more quickly,” he warns.
Despite these developments, Andrew Bud, chief executive and founder of online identity verification provider iProov, says that consumers could still play a role in protecting themselves.
“Authenticate the other person as you would expect to be authenticated by a bank . . . Ask them something only they know, send them a text message and get them to read it back,” he says. “Challenge your assumptions, and think about how you’re going to figure out if a person is a copy or not.”
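In code terms, Bud’s advice amounts to a simple out-of-band challenge and response. The sketch below is purely illustrative, with a made-up word list and hypothetical `make_challenge` and `verify` helpers: send a random phrase over a channel you already trust, such as a text to a known number, and only proceed if the caller can read it back.

```python
# Hypothetical sketch of an out-of-band challenge-response between family members.
import secrets

# Illustrative word list; any shared source of random phrases would do.
WORDS = ["harbour", "copper", "willow", "granite", "falcon", "amber"]

def make_challenge(n_words: int = 3) -> str:
    """Build a short one-time phrase to send over a separate, trusted channel (e.g. a text)."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def verify(expected: str, read_back: str) -> bool:
    """The caller passes only if they can repeat the phrase exactly (case-insensitive)."""
    return expected.strip().lower() == read_back.strip().lower()

challenge = make_challenge()
print("Text this phrase to the person's known number:", challenge)
print("Caller verified:", verify(challenge, challenge))  # True only when the read-back matches
```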
Ajay Bhalla, Mastercard’s president of cyber and intelligence, also warns that those looking at deals online should remain cautious, particularly over purchase scams.
“If something is 50 per cent cheaper than you can buy it elsewhere and wants you to do [an] online bank transfer, you’ve got to start asking questions.”