
AI: A preventer or perpetrator of fraud?

How is artificial intelligence being used to combat fraud and how is it being harnessed by criminals to commit it? Is it a threat or a promising tool? Jeremy Asher, consultant regulatory solicitor at Setfords, provides a breakdown and shares his thoughts.

Jeremy Asher, Setfords

With the advent of ChatGPT and — going back slightly further — voice assistants like Alexa and Siri, the growing presence of AI in our day-to-day lives is all too apparent.

Another way AI is being increasingly used nowadays is to combat fraud. For example, it is claimed that Nets’ and KPMG’s AI-powered anti-fraud engine, Nets Fraud Ensemble, can reduce fraudulent transactions by up to 40%, a statistic that is not atypical for such solutions.

However, as with any form of technological advancement, criminals have found ways to harness AI for the opposite purpose. AI scams are on the rise, with law firms, other organisations and individuals in the firing line. This raises the question: is AI really the ideal tool to prevent fraud, or is it now more of a perpetrator? Let’s investigate.

How does AI help combat fraud?

From detecting nefarious activities to preventing them entirely, AI can help combat fraud in many ways.

Fraud detection

Via machine learning (ML), AI is an incredibly useful tool for detecting fraud. It is able to analyse a vast number of transactions to uncover fraud patterns, which it can then use to detect such behaviour in real time. AI does this significantly faster and more accurately than the old rules-based approach to fraud detection.
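For readers curious what this looks like in practice, below is a minimal sketch of the idea, using scikit-learn's IsolationForest on made-up transaction data. The features, figures and library choice are illustrative assumptions on my part, not a description of Nets Fraud Ensemble or any other provider's system.

# A minimal sketch of ML-based fraud detection on synthetic card transactions
# described only by amount and hour of day. The model learns what "normal"
# spending looks like and flags outliers, rather than applying a fixed rule.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Ordinary transactions: modest amounts, daytime hours.
normal = np.column_stack([
    rng.normal(loc=45, scale=20, size=1_000).clip(min=1),  # amount (GBP)
    rng.normal(loc=14, scale=4, size=1_000).clip(0, 23),   # hour of day
])

# Train an unsupervised model on historical behaviour.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score two new transactions: one typical, one unusually large at 3am.
candidates = np.array([[38.50, 15], [4_200.00, 3]])
flags = model.predict(candidates)  # +1 = looks normal, -1 = anomalous

for txn, flag in zip(candidates, flags):
    verdict = "flagged for review" if flag == -1 else "cleared"
    print(f"£{txn[0]:,.2f} at {int(txn[1]):02d}:00 -> {verdict}")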

Fraud prevention

AI can then be used to prevent fraud, with anti-fraud solutions trained to block suspicious transactions in real time. Another way AI has been used to stop fraud is by powering biometric and voice recognition technologies. These help banks and other financial institutions verify the identity of individuals, preventing fraudsters from accessing their accounts.
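The real-time blocking step can be pictured as a thin decision layer sitting on top of whatever model produces the risk score. The sketch below is purely illustrative: the risk_score() stub, the thresholds and the "step-up" response are hypothetical placeholders, not any bank's actual rules or API.

# A hypothetical decision layer: score an incoming payment and either allow
# it, ask for extra verification, or block it before the money moves.
from dataclasses import dataclass

@dataclass
class Transaction:
    account_id: str
    amount_gbp: float
    country: str

def risk_score(txn: Transaction) -> float:
    """Placeholder for a trained model's score between 0 (safe) and 1 (fraud)."""
    score = min(txn.amount_gbp / 10_000, 1.0)
    if txn.country != "GB":          # toy rule: unfamiliar geography adds risk
        score = min(score + 0.3, 1.0)
    return score

def decide(txn: Transaction) -> str:
    score = risk_score(txn)
    if score >= 0.8:
        return "BLOCK"               # refuse the payment outright
    if score >= 0.5:
        return "STEP_UP"             # ask for extra verification, e.g. biometrics
    return "ALLOW"

print(decide(Transaction("acc-1", 25.00, "GB")))       # ALLOW
print(decide(Transaction("acc-1", 6_500.00, "GB")))    # STEP_UP
print(decide(Transaction("acc-1", 9_000.00, "XX")))    # BLOCK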

Future insights

Unlike rules-based fraud prevention systems, which are limited to catching the fraud patterns they have already been configured to spot, AI solutions can also offer future insights. This is because machine learning models continually learn from the data they analyse, allowing them to detect emerging patterns.
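As a rough illustration of that "keeps learning" point, the sketch below uses scikit-learn's SGDClassifier, which can be updated batch by batch as newly labelled transactions arrive, so the model adapts without being rebuilt from scratch. The data, labels and features are synthetic assumptions for illustration only.

# A minimal sketch of incremental learning: the model is updated with each
# day's labelled transactions via partial_fit, rather than retrained in full.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = legitimate, 1 = fraudulent

# Simulate a stream of daily batches, two features per transaction.
for day in range(5):
    X_batch = rng.normal(size=(200, 2))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 1.5).astype(int)  # toy fraud label
    model.partial_fit(X_batch, y_batch, classes=classes)

# The updated model can immediately score the next transaction it sees.
print(model.predict_proba([[2.0, 1.0]]))  # probability of [legitimate, fraud]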

How is AI being used to perpetrate fraud?

Unfortunately, criminals are using AI to commit fraud in the following ways:

To bypass authentication procedures

Generative AI programs can be used to clone voices and create fabricated images and videos, enabling scammers to bypass the very authentication procedures AI helped introduce in the first instance. Indeed, in my role as a Cifas (credit industry fraud avoidance system) and digital fraud expert at Setfords, I am frequently approached by people who have either been scammed or have had their data used to make false applications for finance or banking facilities.

To generate phishing emails

AI tools like ChatGPT have made phishing emails even more convincing than before. Darktrace has reported an increase in the linguistic complexity, volume of text, punctuation and sentence length used in such emails since the release of the chatbot. Consequently, ChatGPT “may have helped increase the sophistication of phishing emails, enabling adversaries to create more targeted, personalised and ultimately, successful attacks.”

To forge documents

Fraudulently created documents pose a huge risk to auditors and other third parties tasked with verifying information about an individual or entity. Generative AI programs can create everything from bank statements to accounting records with far less effort than traditional forgery methods, and with a far more convincing appearance of authenticity. This makes it easy for fraudsters to spread false information for nefarious purposes.

So, should AI be considered the preventer or perpetrator of fraud?

For as long as there have been assets to steal, fraudsters have always found new and inventive ways to steal them. AI-enabled fraud is no different. While it’s true that AI has increased the number of ways fraudsters can target others, it’s also true that it has improved our ability to prevent them from doing so. So, how do we reconcile these two facts?

Extra safeguards can be implemented

Firstly, it’s important to realise that there are already ways of guarding against AI-enabled fraud. One obvious example is stronger multi-factor authentication (MFA) procedures. These can be a simple way of preventing voice cloning fraud, for example, by ensuring that mimicking somebody’s voice isn’t enough to defraud them. More law firms should, therefore, embrace MFA and similar safeguards, and should be encouraged to do so by financial institutions and the like.
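As a simple illustration of what that second factor adds, the sketch below uses the pyotp library to generate and check a time-based one-time password. The enrolment flow shown is a generic example under my own assumptions, not any particular firm's or bank's procedure.

# A minimal sketch of one common MFA factor: a time-based one-time password.
# A cloned voice alone cannot produce the six-digit code generated on the
# genuine user's device. Requires: pip install pyotp
import pyotp

# Enrolment: store a shared secret for the client (normally presented as a
# QR code the client scans into an authenticator app).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Verification: alongside any voice or phone check, the caller must also
# supply the current code from their authenticator app.
code_supplied_by_caller = totp.now()         # stands in for the user's input
print(totp.verify(code_supplied_by_caller))  # True only if the code is current
print(totp.verify("000000"))                 # almost certainly False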

New protections will emerge

Secondly, just as there are always fraudsters looking for new ways to scam us, there are always fraud prevention specialists thinking up new ways of stopping them. Regulators and governments will always have a strong incentive to prevent fraud, so the public can rest assured that new solutions to mitigate the threat of AI-enabled fraud will emerge.

But while AI is able to spot patterns, sequences and suspicious behaviour, it cannot understand context: it is no more than a tool and should not be relied upon in isolation. Human judgement remains essential, and organisations using AI should still put questions to the person concerned before deciding whether the data gathered is reliable. In my experience, organisations investigating fraud often miss innocent explanations, and in those circumstances injustice can be caused to the individuals concerned.

So, in response to the question of whether AI should be considered a preventer or perpetrator of fraud, the answer, as always, is that it’s ultimately those committing and preventing these crimes that are fulfilling these roles. AI is just another means both sides are embracing.
