The report by RBC showed that Canadians, while concerned about AI-powered fraud, are not taking enough steps to protect themselves

As artificial intelligence (AI) develops further and fraud schemes become increasingly sophisticated, Canadians are more concerned than ever about the effect the nascent technology could have on fraudulent activity, according to a recent poll conducted by RBC.

“People have always used new technology, and especially unfamiliar new technology, to great effect,” said Jonathan Anderson, associate professor and cybersecurity expert at Memorial University in Newfoundland. “This is kind of the same scam, except it’s on steroids because it’s a whole lot more convincing for people who don’t know to be skeptical of what they hear and see.”

Examples of AI-augmented fraud

Intricate scams have begun relying on AI to deceive their victims. In Ontario, a man was convinced to give up $8,000 after fraudsters used AI to mimic the voice of his friend. According to the story by CTV News, the voice of his friend over the phone said he needed $8,000 for bail after being arrested for texting and driving.

In Newfoundland, a man used AI-mimicked voices of grandchildren to trick eight people out of nearly $200,000. According to the story by the CBC, the fraudster collected information about some of the grandchildren and used it to build a more convincing profile. The 23-year-old pressured his victims with manufactured scenarios such as needing bail money or legal fees.

In Hong Kong, a more bizarre scheme involved staging an entire video conference using AI deepfakes, constructed from publicly available footage of key figures at the company. The goal of the fake conference was to convince an employee, the only real person on the call, to give up important company information. Following the meeting, the company was defrauded of more than $3.4 million, according to an article in the South China Morning Post.

When it comes to untargeted schemes, deepfaked images of celebrities attached to bogus stories have been ubiquitous on social media sites for the better part of a decade, with the intent of getting victims to invest in a variety of cryptocurrencies.

More Canadians concerned about fraud than ever before

Despite these looming threats, the majority of Canadians still believe themselves capable of detecting illicit AI-powered schemes, even though most are not taking any additional steps to combat them, the poll showed.

“Twenty-eight per cent of people said they were taking proactive steps to combat fraud,” said Kevin Purkiss, vice president of fraud management at RBC. “What worries me is that (people) may be overconfident in what they’ve done to prepare and, based on my experience, I would suggest that more steps can absolutely be taken to make sure people are keeping themselves safe.”

The poll surveyed 1,502 Canadians and segmented the data based on region: Alberta, Atlantic Canada, B.C., Ontario, Quebec, Saskatchewan/Manitoba and Canada as a whole.

Respondents were asked if they agreed with the statement that, due to AI technology, there would be an increase in fraud, with 88 per cent of respondents agreeing. Eighty-nine per cent agreed that AI will make everyone more vulnerable to fraud; 81 per cent said they are more concerned about fraud over the phone, or vishing, and that AI will make it harder to detect; and 75 per cent of Canadians said they are more concerned about fraud than ever before.

“I think the prevalence (of AI-powered fraud) will continue to grow,” said Anderson. “But, it’s probably also true that we’ll reach a point where everybody knows not only do you not believe everything you read online, not only do you not believe every picture you see, but, you should also not believe everything you hear or every video that you see.”

The growth of social engineering fraud using AI

The poll also probed Canadians about six different fraud types and whether they noticed an increase in those schemes over the past year. The six scam types were phishing, spear phishing, vishing, deepfake scams, social engineering scams and voice cloning scams.

Vishing is similar to phishing, but instead of emails it uses phone calls and voicemails. Spear phishing, on the other hand, is a targeted form of phishing. Normally aimed at businesses and other high-value victims, spear phishing involves gathering intelligence about a target and using that information to persuade the victim to hand over sensitive information the scammer can profit from.

“I think the slightly scary thing is that automating things tends to reduce the costs, or the barriers to entry,” Anderson said. “There was a time when really effective social phishing techniques would have been things that you’d have to go after a high-net-worth individual because you’d have to spend a lot of time and effort figuring out who is in their social network etc… It’s pretty crazy what you can do for really cheap now, and I think that is likely to make (spear phishing) something that impacts more people, unfortunately.”

A majority of respondents noticed an increase in phishing (79 per cent), spear phishing (also 79 per cent), vishing (69 per cent), social engineering scams as a whole (57 per cent) and deepfake scams (56 per cent). Voice cloning scams, at 47 per cent, were the only fraud type that a majority did not agree had increased.

“If you build an AI platform, you can never stop people from using it for scams and frauds,” said Anderson. “But, you should at least be making an effort to not make it easy. If you can raise the cost of doing these scams… that means there will still be AI-augmented frauds and scams, but not necessarily against the vast majority of people.

“I think it’s important not to bring in regulation that is so heavy-handed that all AI development moves overseas, because that wouldn’t be a net win either,” he added. “But I think there are things that platform providers could be doing and in some cases are (already doing). Some companies are really seized with this question, and some are less seized with it.”

Phishing, spear phishing, vishing and voice cloning scams are all forms of social engineering where a malicious actor attempts to get their victim to share sensitive information using social skills or trickery, instead of brute force.

While there are security measures you can employ to protect yourself against fraud, such as multi-factor authentication and not sharing personal information, many of the proactive measures people can take are, according to Anderson, not as effective as they may hope.

“In terms of proactive measures that you could take to protect yourself, I’m not sure what those would be that would actually be effective and not just giving you a false sense of security,” Anderson said. “I think the key thing that people can do is be skeptical of the things that they read, hear, see etc.”

Source link nationalpost.com