Is AI Really Supercharging the Fraud Economy?

AI is making scams more believable than ever. Here’s what you should actually look out for.

Written by Brittany Allen
Published on Oct. 13, 2023

There’s been a wave of headlines claiming that fraud is being supercharged by AI, representing a new and outsized threat for businesses and consumers alike. Yes, fraud is a major concern: Consumers lost almost $8.8 billion in 2022 from scams alone, up 44 percent from 2021 despite record investment in detection and prevention. But are the concerns around AI in fraud overblown? Do scammers really need to rely on deepfakes and voice mimicry to commit fraud?

What are scammers talking about on the dark web?

As part of my work, I spend time monitoring fraud on the dark web. Among fraudsters, I’m noticing not only a lack of conversation around using advanced AI technology to execute scams, but also active discussion of how many scammers feel that there’s no pressing reason to adopt advanced AI like personalized deepfakes.


 

How AI Is Making Fraudsters’ Lives Easier

Generative AI is increasing the sophistication and believability of scams. Scams used to be riddled with grammatical and spelling errors, which made them easier to spot. With generative AI, bad actors can imitate legitimate companies far more convincingly than before, making phishing attempts harder to detect. Voice cloning and video deepfake scams are also picking up steam, as seen in recent reports of scammers using AI-generated videos of Elon Musk to lure people into investing money in a nonexistent platform.

Consumers are paying attention to these reports and drawing their own conclusions. In the last six months, 68 percent of consumers say they noticed an increase in the frequency of spam and scams, likely driven by the surge in AI-generated content, and 78 percent said they’re worried AI will be used to defraud them.

One trend we are seeing is what we’re calling the democratization of fraud: the increasing ease with which anyone, regardless of technical experience, can access the tools to commit fraud.

And there has been a shift in fraud since the launch of ChatGPT last fall: According to Sift data, in the first quarter of 2023, the rate of account takeover attacks rose a staggering 427 percent compared to all of 2022.

This massive spike in ATO can be partially attributed to AI-enhanced social engineering attacks, though other factors, like the democratization of fraud described above, play a large role. So there’s no question that basic generative AI tools have made fraudsters’ lives easier.

But for fraudsters who don’t need to interact with potential victims (spammers, for instance), who can commit their scams without a live voice (when text or a bot voice works just fine), or who can reasonably approximate a convincing voice without mimicking it exactly, advanced AI is just too much work.

For instance, many people who receive a panicked call from their “cousin” asking for money in an emergency are still victimized without any advanced AI involved: the voice plausibly matches his age and gender but is distorted by a “bad connection” before he quickly switches to text messaging.
 

What Are Fraudsters Saying About AI?

As part of my work staying ahead of the curve when it comes to fraud, I spend time on the dark web and in Telegram channels where fraudsters discuss methods, sell fraud bibles (step-by-step instructions on how to scam a specific company or business, available for as little as $50) to would-be bad actors, and boast about the scams they’ve run.

What I’m noticing is not only a notable lack of conversation around using advanced AI technology to execute scams, but also active discussion of how many fraudsters feel their current methods are successful enough that there’s no pressing reason to adopt advanced AI like personalized deepfakes. That’s not to say fraudsters aren’t excited about the potential of new AI tools; they’re simply weighing the time it would take to perfect new technology against the lower-tech scams that remain profitable.

Fraudsters are also vulnerable to being ripped off by other fraudsters. I even saw the operators of WormGPT (a blackhat alternative to mainstream GPT models) argue on their own Telegram channel that FraudGPT (another AI tool used to generate email and text phishing scams) is itself a scam. So there is some hesitation to spend money on tools that may not work as advertised.
 

Do You Need to Worry About Deepfakes?

So does the average person need to worry about AI supercharging fraud and faking their face and voice? Not just yet. While fraudsters are using AI-enhanced fraud tactics, such as deepfakes of celebrities and high-profile CEOs, that technology is only useful in specific cases.

For example, if an AI-created deepfake can either reach a large number of potential victims (as with celebrity-imitating scams) or be leveraged against a particularly valuable target (like mimicking a CEO to commit business email compromise fraud or install ransomware at a large corporation), then the effort is worth it.

Other deepfakes, like those used to lure victims on dating sites and exploit them financially (a scam known as “pig butchering”), are worth the effort because they are generic and reusable across multiple accounts on multiple platforms.

But for now at least, most fraudsters don't need advanced AI to continue committing their scams.


 

If Not AI, What Should We Be Worried About?

As mentioned above, one trend we are seeing is the democratization of fraud. Take, for example, bot-as-a-service scams, which operate through encrypted messaging apps like Telegram and are used to obtain one-time password (OTP) SMS codes from victims.

The bot works by spoofing a company or financial institution’s caller ID to trick victims into providing their OTPs for anything from bank logins to payment service apps. Fraudsters can pay to use the bot on a daily, weekly, monthly, or yearly basis. And while most victims won’t respond to these scams, the scalability of bots makes it a profitable numbers game for fraudsters.
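To see why the numbers game works, consider a rough back-of-the-envelope calculation. The sketch below is purely illustrative: the bot rental price, call volume, response rate, and average account value are all hypothetical assumptions, not figures from any real campaign.

```python
# Back-of-the-envelope economics of an OTP-bot campaign.
# Every number below is a hypothetical assumption for illustration only.

BOT_RENTAL_PER_DAY = 150.0   # assumed daily rental fee for the bot
CALLS_PER_DAY = 5_000        # assumed number of spoofed calls placed per day
RESPONSE_RATE = 0.005        # assumed share of victims who hand over an OTP
AVG_ACCOUNT_VALUE = 400.0    # assumed average amount drained per compromised account


def daily_profit(calls: int, response_rate: float,
                 avg_value: float, cost: float) -> float:
    """Expected daily profit = (successful calls * average take) - bot rental."""
    successful = calls * response_rate
    return successful * avg_value - cost


if __name__ == "__main__":
    profit = daily_profit(CALLS_PER_DAY, RESPONSE_RATE,
                          AVG_ACCOUNT_VALUE, BOT_RENTAL_PER_DAY)
    print(f"Expected daily profit: ${profit:,.2f}")
    # Even a tiny response rate turns a cheaply rented bot into a profitable
    # operation, which is why scale, not sophistication, drives these scams.
```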

More experienced cyber criminals are capitalizing on this by turning their fraud skills into on-demand services for sale, known as fraud-as-a-service — and none of it requires generative AI.
 

How to Assess Your Fraud Risk

My advice to companies concerned about AI-enhanced fraud? Don’t fall for the AI hype unless there’s a real chance it will affect your user community. Assess your fraud risk and respond realistically, while staying informed about changing technology.

For example, merchants can implement strategies and technology capable of adapting to this changing landscape, such as drawing on a real-time global network of fraud data. Machine learning is well suited to this, detecting patterns of abuse and signals across a diverse network.

From account defense to payment protection to dispute resolution, machine learning can detect account anomalies that are indicative of suspicious activity. Decision engines can apply custom rules based on risk scores, such as dynamic friction (i.e., enforcing multifactor authentication) or blocking risky transactions.
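To make that concrete, here is a minimal sketch of how a rules layer on top of a model-generated risk score might work. The score scale, thresholds, and rule names are hypothetical assumptions for illustration, not any particular vendor’s API.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    CHALLENGE = "challenge_mfa"  # dynamic friction: step-up authentication
    BLOCK = "block"


@dataclass
class Event:
    user_id: str
    risk_score: float      # 0-100 score from an ML model (hypothetical scale)
    new_device: bool       # example signal: login from an unrecognized device
    payment_amount: float  # example signal: transaction value in dollars


def decide(event: Event) -> Action:
    """Apply custom rules on top of a model-generated risk score."""
    # Hard block: very high risk, regardless of other signals.
    if event.risk_score >= 90:
        return Action.BLOCK
    # Dynamic friction: moderate risk, or a risky combination of signals.
    if event.risk_score >= 60 or (event.new_device and event.payment_amount > 500):
        return Action.CHALLENGE
    return Action.ALLOW


# Example: a login from a new device with a mid-range score gets MFA, not a block.
print(decide(Event("u_123", risk_score=72.0, new_device=True, payment_amount=0.0)))
```

The design point is that the model supplies the risk score while the rules decide what happens in each risk band, so a business can tune how much friction users face without retraining the model.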

AI-assisted phishing attacks are scalable, easy to execute, and worth defending against today. The deepfake voice-phishing attack you read about in the headlines? Not yet.
