How the Finance Industry Can Beat Fraudsters at Their Own Game
We’ve all witnessed the extraordinary rise of ChatGPT in recent weeks, with the power of Artificial Intelligence (AI) dominating discussion across many industries. While there’s no denying that the ongoing development of AI has allowed the financial services industry to enhance and transform the way it works, fraudsters can also exploit this technology for their own gain.
Fraud is always at its most virulent during economic downturns and crises. In fact, in the first nine months of 2022, over 309,000 cases were recorded in the National Fraud Database, a 17% rise on the previous year. This increase was driven mainly by rises in false application and identity fraud, up by 45% and 34% respectively.
It’s perhaps no surprise that fraudsters will see new AI-powered technologies, such as ChatGPT, as a golden ticket to exploit vulnerable people. Others may turn to opportunistic fraud if the resources are available to them, highlighting the need for organisations to invest in both predictive and preventative technology if they are to protect consumers.
How fraudsters are using AI
In the first half of 2022 alone, UK Finance revealed that criminals stole a total of £609.8 million through authorised and unauthorised fraud and scams. The same report shows that these figures were even higher during the pandemic. It’s therefore no surprise that the financial services industry is constantly looking for ways to strengthen security protocols and protect consumers, with tools such as two-factor authentication and biometrics becoming the norm.
However, professional fraudsters are also seeking new ways to exploit consumers - something the finance industry needs to respond to quickly. Just recently, an investigation by Which? revealed that criminals are increasingly intercepting one-time passcodes delivered via SMS, putting customers at risk. The investigation identified further weaknesses, including insecure passwords, lax checks on new payees and vulnerable log-in processes.
While fraudsters recognise that they cannot readily fake an account holder’s fingerprints or face, they are now turning to AI-enabled social engineering tools to generate brand-new, synthetic identities, opening fake bank accounts and then committing fraud.
The aforementioned ChatGPT has taken the internet by storm, with the AI-powered chatbot able to respond to users conversationally within a matter of seconds. Experts working in the finance industry have noted that the tool could be used to enhance banks’ customer service or marketing efforts. Yet experts have also highlighted how ChatGPT can be a gateway to fraud: the chatbot enables scammers all over the world to craft emails so convincing that they can extract cash from victims without relying on malware or other unscrupulous techniques.
Unsurprisingly, speculation is rising around what else the tool could enable - especially when developments are moving faster than regulation. The next question for the finance industry is how to proactively get ahead of the game and prevent fraud before it occurs. The answer lies in the same technology - AI.
AI to the rescue
It’s important for organisations to join up their defences. Having a robust, enterprise-wide fraud framework in place will enable fraud to be identified and prevented across all channels. For example, through the deployment of AI, banks can analyse customer and behavioural data, set up alerts and automate case management. This creates a holistic overview of fraud risk and enhances accuracy.
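To make the idea of joined-up, cross-channel alerting concrete, here is a minimal sketch in Python. The channel names, the 0.8 risk threshold and the simple priority queue are all illustrative assumptions, not a description of any particular vendor’s framework:

```python
# Sketch of a joined-up alerting flow: every channel feeds scored events
# into a single case queue. Channel names and the 0.8 threshold are
# illustrative assumptions.
import heapq

case_queue: list = []  # priority queue: highest-risk cases popped first

def raise_alert(channel: str, customer_id: str, risk_score: float) -> None:
    if risk_score >= 0.8:
        # Negate the score so heapq (a min-heap) pops the riskiest case first.
        heapq.heappush(case_queue, (-risk_score, channel, customer_id))

raise_alert("card", "c-101", 0.95)
raise_alert("online-banking", "c-202", 0.85)
raise_alert("card", "c-303", 0.40)  # below threshold, no case raised

while case_queue:
    score, channel, customer = heapq.heappop(case_queue)
    print(f"investigate {customer} via {channel} (risk {-score:.2f})")
```

The point of the sketch is the single queue: whatever the channel, every alert lands in one place where case management can be automated.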
For years, banks tended to rely on rules-based technology to spot fraud risk. However, as fraudsters have got smarter, these rules need to be continuously updated and tuned to remain effective. AI comes to the rescue here: the implementation of a model development framework allows a business to import and execute rules via a real-time decision engine.
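A stripped-down decision engine of this kind might look like the sketch below. The rules, thresholds and field names are hypothetical stand-ins; in practice they would be imported from a managed rule library and tuned continuously:

```python
# Minimal sketch of a rules-based decision engine. Rule logic and
# thresholds are hypothetical examples only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Transaction:
    amount: float
    country: str
    new_payee: bool

# Each rule returns True when it fires (i.e. flags the transaction).
RULES: List[Callable[[Transaction], bool]] = [
    lambda t: t.amount > 5_000,                 # unusually large transfer
    lambda t: t.new_payee and t.amount > 500,   # large payment to a new payee
    lambda t: t.country not in {"GB", "IE"},    # outside the usual geography
]

def decide(t: Transaction) -> str:
    fired = sum(rule(t) for rule in RULES)
    # Rules firing together raise the risk level; in a live system these
    # cut-offs would be retuned as fraud patterns change.
    if fired >= 2:
        return "block"
    if fired == 1:
        return "review"
    return "approve"

print(decide(Transaction(amount=6_200, country="RU", new_payee=True)))  # block
```

Because the rules are plain data-plus-functions, new ones can be loaded and executed in real time without redeploying the engine itself.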
Through the use of Machine Learning (ML), organisations can also be automatically alerted to any concerning changes in a person’s transaction history or behaviour, catching criminals in the act when they masquerade as a customer.
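One common way to do this is unsupervised anomaly detection over a customer’s own history. The sketch below, using scikit-learn’s IsolationForest on synthetic data, is a simplified assumption of how such an alert might work; real systems use far richer features:

```python
# Sketch: flagging anomalous transactions with an unsupervised model.
# The features (amount, hour of day) and synthetic data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic history for one customer: columns are [amount, hour_of_day].
history = np.column_stack([
    rng.normal(40, 10, 500),   # typical spend around £40
    rng.normal(14, 3, 500),    # mostly afternoon activity
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A 3am transfer far above the customer's usual spend.
suspicious = np.array([[900.0, 3.0]])
print(model.predict(suspicious))  # -1 means the model flags it as anomalous
```

An event that deviates sharply from the customer’s learned behaviour triggers an alert, which is exactly the “masquerading” signal described above.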
We are already seeing some organisations introduce adaptive machine learning techniques, which build on traditional ML to process large volumes of real-time, rapidly changing data. This approach needs to become more mainstream across the sector if fraud is to be caught before it’s too late.
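The key difference from batch-trained ML is that the model updates incrementally as each labelled event arrives. As a rough illustration, assuming scikit-learn’s SGDClassifier as a stand-in for a streaming learner and an entirely toy fraud pattern:

```python
# Sketch of adaptive (online) learning: the model updates on each new
# labelled transaction instead of being retrained in periodic batches.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = genuine, 1 = fraud

rng = np.random.default_rng(0)
for step in range(1_000):
    # Simulated feature vector and label arriving from a live stream.
    x = rng.normal(size=(1, 4))
    y = np.array([int(x[0, 0] > 1.5)])  # toy fraud pattern
    model.partial_fit(x, y, classes=classes)  # incremental update

# Score a fresh event immediately, with the model as it stands right now.
print(model.predict_proba(rng.normal(size=(1, 4))))
```

Because every update is cheap, the model can track rapidly changing fraud behaviour between full retrains.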
AI-powered technology can also detect cases of false or synthetic identity in real time, ensuring fraudulent activity is stopped immediately. For example, should a fraudster apply for a loan or credit, or seek to withdraw funds, using a fake identity, the application will be flagged and investigated.
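One simple signal behind such checks is application velocity: many distinct identities originating from the same device or address. The identity fields and threshold below are hypothetical; a production system would combine many more signals (device fingerprints, bureau checks, shared fraud databases):

```python
# Sketch: a real-time velocity check on new credit applications.
# Field names and the threshold of 3 are hypothetical.
from collections import defaultdict

applications_by_device = defaultdict(list)

def screen_application(app: dict) -> str:
    device = app["device_id"]
    applications_by_device[device].append(app["name"])
    # Many distinct identities from one device is a classic synthetic-ID signal.
    if len(set(applications_by_device[device])) >= 3:
        return "hold for investigation"
    return "proceed"

for name in ["A. Smith", "B. Jones", "C. Patel"]:
    print(screen_application({"name": name, "device_id": "dev-42"}))
```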
Similarly, AI allows organisations to stay one step ahead of fraudsters by producing a comprehensive fraud risk assessment - a targeted review of the fraud landscape focused on identifying current sources of fraud losses, process leaks and other pain points.
At SAS, we always test our AI models against challenger models and then optimise them as new data becomes available, so that as new scams arise our systems can recognise them quickly.
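The champion/challenger pattern itself is straightforward, even if the production machinery around it is not. As a minimal sketch on toy data (the models, metric and data here are illustrative assumptions, not SAS’s actual pipeline):

```python
# Sketch of champion/challenger evaluation: train a challenger on the
# latest data and promote it only if it beats the current champion.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)  # toy fraud label
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

champion = LogisticRegression().fit(X_train, y_train)
challenger = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Compare both models on the same held-out recent data.
auc = {
    "champion": roc_auc_score(y_test, champion.predict_proba(X_test)[:, 1]),
    "challenger": roc_auc_score(y_test, challenger.predict_proba(X_test)[:, 1]),
}
print(auc)
print("promote:", max(auc, key=auc.get))
```

Re-running this comparison whenever fresh labelled data arrives is what keeps the deployed model from going stale as scams evolve.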
Customers are often left with limited options after falling victim to fraud, particularly if they authorised the transaction themselves. More regulation could help here, but educating the public is equally important. With tools such as ChatGPT to hand, typos in emails are no longer the first indicator of a scam. The industry should invest significantly in both fraud detection and prevention technology if it is to avert a rise in fraud.