How AI-powered chatbots became fraud’s latest weapon
The way fraudsters operate is changing and evolving. Rather than solo players, we're now seeing highly organised teams of criminals who act with greater determination and are better coordinated at finding gaps to exploit.
Perhaps the greatest weapon in fraudsters' arsenal, though, is AI bots, which offer new opportunities for fraud and a new challenge to retailers. According to UK Finance, £177.6m was stolen through impersonation scams in 2022. As AI impersonation improves through the use of deepfakes and recreated voices, the risk of fraud will only increase.
To combat these schemes, businesses need to stay informed and understand three things: how criminals are leveraging AI-powered bots to amplify attacks; how deepfakes and synthetic personalities are evolving into a threat to businesses; and, finally and most importantly, how businesses can fight back.
How criminals are leveraging AI-powered chatbots
In the past, scammers and fraudsters had limited resources. They typically relied on their own ability to deceive people and would often give up once blocked.
Now, however, fraudsters are forming highly organised teams, and their ability to deceive is being enhanced by AI.
For businesses operating online, generative AI (GenAI) is making it more difficult than ever to distinguish genuine customer enquiries from fraud. One example is attempts to gain access to account information and credit card details: we've recently seen fraudsters using ChatGPT and other GenAI tools to generate phishing templates.
These “chatbots” use AI to impersonate legitimate customers. One way they do this is by replicating customer service queries, using vast amounts of data to recreate the speech and text patterns of real people. Complicating this further is deepfake technology, which can convincingly replace a person with an AI-generated likeness.
How deepfakes and synthetic personalities are evolving
Deepfakes are adding complexity to the fraud landscape, with attackers impersonating victims to make high-value transactions. Attackers create synthetic identities and replicate victims' voices when calling customer services; these deepfakes are so convincing that they're being used to secure goods with full card details and billing information. Even video can be manipulated, with lip-syncing techniques so advanced that it is difficult to distinguish real from fake.
I strongly believe deepfakes will continue to be a growing challenge and, if we don't curb attempts now, we could in future see bots making calls on their own, with no human oversight. The risk to businesses and consumers is huge, so responding to these sophisticated systems will demand high-performance machine-learning models built into businesses' tech stacks. Increasingly, security is about matching the speed and scale of criminals by using AI for good.
To fight fire with fire, though, one must learn which tools to use.
How businesses can protect themselves against fraud
Risk intelligence teams like the one I lead are paramount in protecting against AI fraud. By evaluating different forms of fraud and collaborating closely with data science teams, they can feed information into the models, cross-referencing it with previous consumer behaviours so that defences keep evolving as fraudsters pivot to new tactics.
Building resilience to AI fraud relies on merchants working closely with intelligence teams to identify anomalies and feed them into the right feedback loops. This allows systems to "learn", making it faster and easier to identify fraudsters. The same feedback loop empowers merchants to pinpoint users who repeatedly commit returns fraud through several fake accounts, identifying them through link analysis of signals such as IP or device information, and implementing triggers to block them, as sketched below.
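The article doesn't prescribe an implementation, but a minimal sketch of this kind of link analysis might look like the following Python. Every field name, the threshold, and the helper functions are illustrative assumptions, not a description of any vendor's actual system:

```python
from collections import defaultdict

# Hypothetical account records; in practice these would come from
# order, login, and returns data. All field names are assumptions.
accounts = [
    {"account_id": "a1", "device_id": "dev-42", "ip": "203.0.113.7", "returns": 6},
    {"account_id": "a2", "device_id": "dev-42", "ip": "203.0.113.7", "returns": 4},
    {"account_id": "a3", "device_id": "dev-99", "ip": "198.51.100.2", "returns": 0},
]

RETURNS_THRESHOLD = 3  # assumed trigger level for repeat returns fraud


def link_accounts(accounts):
    """Group accounts that share a device fingerprint or an IP address."""
    clusters = defaultdict(list)
    for acct in accounts:
        # A shared device or IP links otherwise-separate fake accounts.
        clusters[("device", acct["device_id"])].append(acct)
        clusters[("ip", acct["ip"])].append(acct)
    # Keep only links that actually connect more than one account.
    return {key: group for key, group in clusters.items() if len(group) > 1}


def flag_for_block(clusters):
    """Trigger a block when a linked cluster's combined returns exceed the threshold."""
    flagged = set()
    for _, group in clusters.items():
        if sum(a["returns"] for a in group) >= RETURNS_THRESHOLD:
            flagged.update(a["account_id"] for a in group)
    return flagged


print(flag_for_block(link_accounts(accounts)))  # {'a1', 'a2'}
```

In a real deployment, flagged clusters would also feed the feedback loop described above: once confirmed by analysts, they become labelled examples that help the fraud models keep learning.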
With the rise of AI chatbots, businesses have new threats to worry about, but the good news is that as threats evolve, so do the solutions. First and foremost, businesses need a robust fraud prevention strategy. That could mean partnering with a fraud prevention provider to mitigate risk, establishing a data intelligence team to monitor behavioural trends, or developing a framework that combines in-house capabilities with strategic industry partnerships, allowing businesses to keep focusing on what matters: customer loyalty, retention, and profit.
Xavier Sheikrojan, Senior Risk Intelligence Manager, Signifyd.