AI-based Fraud is on the Rise: Five Developments You Need to Know

  • Lovro Persen, Document & Fraud Director at IDnow

  • 02.05.2023 11:15 am
  • #ai #fraud

Artificial Intelligence (AI) and Large Language Models (LLMs) have dominated the news agenda recently in the form of one tool in particular: ChatGPT. AI can be a tremendous force for good, particularly in the fight against digital fraud and other types of financial crime, as more and more organisations leverage it to digitise their Know Your Customer (KYC) processes and ensure that the person at the other end of a transaction is who they say they are.

These tools can automate the analysis and processing of user data, personal information and transaction history, removing bottlenecks in onboarding and allowing organisations to scale at an unprecedented rate. They can also check users against sanctions and politically exposed persons (PEP) lists and flag potential fraud and suspicious activity in real time, as sketched below.
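As a rough illustration, the Python sketch below shows what a single automated watchlist-screening step might look like. The watchlist entries, threshold and function names are hypothetical assumptions, not any particular vendor's implementation; real deployments query maintained sanctions and PEP databases and combine name screening with document and transaction checks.

```python
# A minimal sketch of automated sanctions/PEP screening, assuming an
# illustrative in-memory watchlist; production systems query maintained
# databases and add transaction-level monitoring on top.
from difflib import SequenceMatcher

# Hypothetical watchlist entries; production lists hold thousands of records.
WATCHLIST = [
    {"name": "Jane Q. Example", "type": "PEP"},
    {"name": "Acme Front Co", "type": "sanctions"},
]

MATCH_THRESHOLD = 0.85  # illustrative cut-off for fuzzy name matches


def screen_customer(full_name: str) -> list[dict]:
    """Return watchlist entries whose names closely match the customer's."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(
            None, full_name.lower(), entry["name"].lower()
        ).ratio()
        if score >= MATCH_THRESHOLD:
            hits.append({**entry, "score": round(score, 2)})
    return hits


if __name__ == "__main__":
    matches = screen_customer("Jane Q Example")
    if matches:
        # In practice a hit raises the case for manual review rather
        # than blocking the customer outright.
        print("Flag for review:", matches)
    else:
        print("No watchlist match; continue onboarding.")
```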

However, history has taught us that with every technological innovation and development, a new criminal endeavour will not be far behind, ready to exploit regulatory gaps and legal loopholes. Recent developments in AI and LLMs are unfortunately no different. Here are five things you should know about AI-based fraud.

1. It makes social engineering fast and easy

The boost in productivity offered by ChatGPT and similar tools has caused widespread excitement, but their use in fraudulent activity is sadly inevitable. Impersonation and social engineering fraudsters have already begun using AI technology in romance and phishing scams, producing fake documents such as invoices, contracts and financial statements that are frequently more personalised and accurate than those from the legitimate sources they imitate. It is important to look at the detail: fake documents have traditionally contained spelling or grammatical errors that made their fraudulent status easier to spot, but the power of AI means these mistakes can now be rectified, making it harder to differentiate between a real document and one produced with dishonest intentions.

2. Cybercriminals no longer need technical know-how

Seasoned scammers and cybercriminals previously needed a good understanding of software and code to produce documents or sites that fooled innocent people into handing over their hard-earned money. Now, thanks to the simplicity of these AI tools, criminals with a basic level of IT literacy and no knowledge of coding can engage in cybercrime. Bad actors can quickly and easily create malware, malicious VBA scripts and realistic chatbots that prime their victims.

3. Sophisticated document forgery is easier than ever

LLMs are not the only area of rapid development in AI. Tools that can create convincing images, video, audio and 3D models are also moving ever closer to the public domain. This means that, very soon, fraudsters will be able to create ultra-realistic forgeries of passports and other official papers, complete with the traditional hallmarks currently used to authenticate genuine forms of identification. Identity theft and obfuscation will become much easier as these generative AI models allow fraudsters to mount realistic attacks on identity documents and the biometric information they contain. Verifying which documents are authentic and, in the case of deepfake technology, which people are real is likely to cause problems for companies without the most rigorous liveness checks and facial recognition technology in place.

4. Chat history puts data privacy at risk

One of the many twists and turns in the story of ChatGPT came in March 2023, when the Italian Data Protection Authority took the platform offline in Italy. Its reasoning was that the platform contained a bug that made users' chat histories and personally identifiable information (PII) visible to others, enabling its use in fraud. Personal data such as names, passwords, addresses and whatever else a user might enter into the LLM could all have been used to impersonate individuals online and commit synthetic identity fraud.

5. Accurate, unbiased information will be harder to spot

While this is not directly related to instances of fraud, the veracity of information will become an increasingly high priority as AI and LLM tools develop. The threat of unsubstantiated 'fake news' is likely to grow exponentially alongside the increasing volume of authentic-sounding content. This warning also applies to database checks. Companies that need to perform KYC and anti-money laundering (AML) checks are advised not to rely solely on LLMs when making them: responses to sanctions and PEP queries can easily be manipulated, giving inaccurate results, as the sketch below illustrates.
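As a minimal sketch of that advice, the hypothetical snippet below treats an LLM's answer as an advisory hint only and always takes the final screening decision from an authoritative, regularly updated list. Both helper functions and the sample list are stand-ins invented purely for illustration.

```python
# A minimal sketch, assuming hypothetical helpers: an LLM's answer is
# treated only as a hint, and the decision always comes from an
# authoritative, up-to-date sanctions/PEP source.

def llm_screening_opinion(name: str) -> bool:
    """Hypothetical LLM call; its output may be outdated or manipulated."""
    return False  # e.g. the model confidently answers "not sanctioned"


def authoritative_list_check(name: str) -> bool:
    """Hypothetical query against an official, regularly updated list."""
    official_list = {"acme front co"}  # illustrative only
    return name.strip().lower() in official_list


def kyc_decision(name: str) -> str:
    hint = llm_screening_opinion(name)          # advisory at best
    confirmed = authoritative_list_check(name)  # source of truth
    if confirmed:
        return "escalate to manual review"
    # An LLM's "clear" verdict is never sufficient on its own.
    return "proceed" if not hint else "escalate to manual review"


print(kyc_decision("Acme Front Co"))  # -> escalate to manual review
```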
