Artificial intelligence has touched nearly every facet of our lives in recent years, from smart assistants and self-driving cars to wearable health devices and software development. But the same tools that make our lives easier come at a cost: they are fueling a growing threat of AI-driven fraud.

5 Ways AI Is Being Used for Fraud

  1. Combining real and fake data to create fake identities.
  2. Forging passports and ID documents to circumvent identity verification and security procedures. 
  3. Conducting phishing scams at scale.
  4. Voice cloning to redirect bank funds.
  5. Creating deepfake videos for scams.

Bad actors are now using AI to combine real and fake data into convincing fake identities, forge passports and ID documents, and spread false information. With fraudsters continually embracing new technologies and techniques, keeping up with the latest threats can feel like a full-time job, and the sheer number of fraud-prevention roles currently on offer shows how keen companies across all sectors are to tackle fraud head-on. Here are the basics of the rising threat of AI-driven fraud and what you can do to combat it.

 

How AI Is Being Used for Fraud

The use cases for artificial intelligence seem almost limitless, and that now includes fraudulent activities.

One example of fraudsters using AI is the creation of synthetic identities, where fraudsters combine fake and real data, or real data drawn from a range of sources, to build a fictitious identity. AI can also help forge passports and ID documents, leaving fraudsters better equipped to get through companies’ identity verification and other security procedures.

Fraudsters aren’t only manipulating AI algorithms and models to create fake identities and generate false information. If phishing emails seem to be flooding your inbox and looking more convincing than before, it’s likely because AI is helping fraudsters run phishing campaigns at scale. AI also helps them conduct fraudulent transactions and carry out arbitrage betting, which, while not illegal, is certainly damaging to sports betting companies’ business models.

The use of AI against biometric safeguards is also on the rise. Fraudsters are cloning the voices of everyone from business leaders to victims’ friends and family members to carry out a sophisticated range of scams. Generative AI is also being used to create deepfakes, with realistic fake audio and video deployed for a range of nefarious purposes.

In the US, individuals have reported banking scams in which voice-cloning software was used to try to redirect funds into different accounts, and there are countless other examples of people being targeted in similar ways.


How to Fight Back Against AI-Driven Fraud

Businesses are countering the rising threat of AI-driven fraud in multiple ways, and awareness training for staff and customers is key among them.

As AI-driven fraud becomes more commonplace, staff must stay vigilant in spotting it, and customers need to know the latest scams and how to defend against them. That information has to reach customers regularly. Banks are a good example of this in action, using email and SMS alerts, along with pop-ups when customers access apps and online accounts, to raise awareness of scams. Some banks have also added a mid-transaction pop-up that reminds customers transferring funds through their apps to be fraud-aware.

Frequent awareness-raising sessions for staff should also be a core part of any modern business’s approach to fighting fraud. These should cover fraud techniques such as phishing, as well as the latest fraudster tactics, such as using voice-cloning software to pose as senior leaders or banking contacts. Staff must be trained to spot any attempt to conduct transactions or share information in ways that bypass normal business processes, no matter who appears to be asking.

There are also various technology-driven solutions that companies can implement to try to fend off fraudsters. One example is transaction monitoring, a largely automated screening process for purchases, money transfers and various other business interactions. It can run as a periodic review or in real time, with the latter being a prerequisite for instant payment systems. Transaction monitoring can flag suspicious individuals and transactions, giving businesses an important tool for detecting and preventing AI-driven transaction fraud.
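To make the idea concrete, here is a minimal, rule-based sketch of real-time screening in Python. Everything in it, including the thresholds, the field names and the two rules, is invented for illustration; production systems combine far more signals and often layer machine learning on top.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical thresholds; a real deployment tunes these per business.
AMOUNT_LIMIT = 5_000.00                  # flag single transfers above this
VELOCITY_LIMIT = 5                       # flag more than this many transfers...
VELOCITY_WINDOW = timedelta(minutes=10)  # ...within this window

@dataclass
class Transaction:
    account_id: str
    amount: float
    timestamp: datetime

def screen(txn: Transaction, history: list[Transaction]) -> list[str]:
    """Return the names of any rules this transaction trips."""
    flags = []
    if txn.amount > AMOUNT_LIMIT:
        flags.append("large_amount")
    window_start = txn.timestamp - VELOCITY_WINDOW
    recent = [t for t in history
              if t.account_id == txn.account_id and t.timestamp >= window_start]
    if len(recent) + 1 > VELOCITY_LIMIT:
        flags.append("high_velocity")
    return flags
```

Checks like these are cheap enough to run on every incoming transaction before it clears, which is what makes real-time monitoring workable for instant payment systems.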

Businesses are increasingly embracing AI as a cybersecurity measure as well. In 2022, AI-enabled financial fraud detection and prevention platforms attracted global business spend of just over $6.5 billion, according to a report from Juniper Research. Clearly, it’s not just fraudsters who are making the most of AI, but also those seeking to stop them.

AI can fight a range of fraud attempts, from account takeovers and fake account creation to card fraud, credential stuffing and betting bots. That breadth means AI-powered fraud-fighting tools can be used across a wide range of business settings and sectors.
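As one concrete illustration, credential stuffing tends to show up as bursts of failed logins from a single source. The sketch below is a deliberately simplified, rule-based version of that check; the window and threshold are hypothetical, and real AI-driven tools model many more behavioral signals.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60      # hypothetical sliding window
FAILURE_THRESHOLD = 20   # hypothetical failures-per-IP limit

_failures: dict[str, deque] = defaultdict(deque)

def record_failed_login(ip: str, now: float) -> bool:
    """Log one failed login; return True if the IP looks like a
    credential-stuffing source (too many failures in the window)."""
    attempts = _failures[ip]
    attempts.append(now)
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()  # drop attempts that aged out of the window
    return len(attempts) > FAILURE_THRESHOLD
```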

While the use of AI to fight fraud is becoming widespread, it doesn’t mean that a single solution suits every business. Indeed, the more localized the fraud-fighting approach can be, the better. This has led to the creation of fraud-fighting models that use machine learning to mold themselves to companies’ needs. 

For example, machine learning can suggest risk rules based on a company’s own transaction and fraud data. The AI learns the company’s particular context, then suggests ways to fine-tune the system and the rules used to flag potentially fraudulent activity. Over time, this fine-tuning can iron out false positives and false negatives, leading to ever-greater accuracy in detecting attempted fraud.
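Here is a minimal sketch of how rule suggestion like this might work, assuming scikit-learn and invented stand-in data (the features, labels and thresholds are all hypothetical). A shallow decision tree trained on labeled transactions produces branches that read as candidate risk rules an analyst can review, tune and promote.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data: one row per transaction.
# Features: amount, hour of day, transactions by the account in the past 24h.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.exponential(scale=200, size=1000),  # amount
    rng.integers(0, 24, size=1000),         # hour of day
    rng.poisson(lam=2, size=1000),          # recent activity count
])
# Invented labels: 1 = confirmed fraud, 0 = legitimate.
y = ((X[:, 0] > 400) | (X[:, 2] > 5)).astype(int)

# A shallow tree keeps the learned rules short enough for a human to read.
tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced")
tree.fit(X, y)

# Each branch prints as a candidate risk rule an analyst can review,
# tune and promote into the live rule set.
print(export_text(tree, feature_names=["amount", "hour", "recent_count"]))
```

Keeping the tree shallow is the key design choice here: deeper models may score better, but short, legible branches are what let humans audit the suggested rules before they go live.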


Future of AI Fraud and Fraud Prevention

AI is becoming more widely used for a huge range of purposes. Globally, Precedence Research valued the AI market size at $454.12 billion in 2022. The firm expects the value to shoot up to $2,575.16 billion by 2032, with an impressive compound annual growth rate (CAGR) of 19 percent over the decade.

This growth means we can reasonably expect fraudsters to keep finding new nefarious uses for artificial intelligence. The booming use of AI around stolen identities and the creation of new, synthetic identities is one concern. Identity theft is already a major problem, with around 15 million Americans falling victim every year, according to a report from Fortunly.

With AI increasing the scale and pace at which fraudsters can generate new identities, including those using stolen details, this is an issue that every business needs to be alert to over the coming years.

Thankfully, the growing use of AI means that fraud fighters can keep innovating, too. AI can spot patterns in data that are invisible to the human eye, or that would take people days to find, in milliseconds. And the ability to build systems that learn from an individual business’s own experience of fraud gives fraud fighters a powerful base for developing the AI-driven fraud detection and prevention tools of the future.
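One way to picture that kind of pattern-spotting is unsupervised anomaly detection, which flags transactions that deviate from an account’s normal rhythm without needing labeled fraud examples. The sketch below uses scikit-learn’s IsolationForest on invented data; the features and contamination rate are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented feature vectors: [amount, seconds since the previous transaction].
rng = np.random.default_rng(1)
normal = rng.normal(loc=[50.0, 3600.0], scale=[20.0, 600.0], size=(500, 2))
bursts = np.array([[49.0, 2.0], [51.0, 2.5], [50.5, 1.8]])  # rapid-fire outliers
X = np.vstack([normal, bursts])

# IsolationForest isolates outliers without any labeled fraud examples.
model = IsolationForest(contamination=0.01, random_state=1).fit(X)
labels = model.predict(X)  # -1 = anomaly, 1 = normal
print(f"{(labels == -1).sum()} of {len(X)} transactions flagged for review")
```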
