FinCEN Warns U.S. Banks of Rising Deepfake Identity Fraud
Author: admin | 14 Nov 2024
The Financial Crimes Enforcement Network (FinCEN) has issued an alert to banks and other financial institutions about the rapid rise of AI-powered identity fraud and deepfakes. The alert, released on November 13, 2024, draws attention to criminals' mounting use of generative artificial intelligence to produce deepfake media that bypasses identity verification systems and serves as a conduit for fraud.
FinCEN further noted that suspicious activity reports from financial institutions describing this kind of fraud have risen in recent months. The alert explains that deepfake images are being used to generate fraudulent identity documents: fraudsters use GenAI to alter or fabricate images of identification documents such as driver's licenses and passports. According to FinCEN, these deepfake images, whether altered originals or entirely new creations, are employed to bypass traditional identity verification checks.
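To illustrate what layering an additional signal on top of traditional document checks could look like, here is a minimal sketch in Python. It assumes a hypothetical synthetic-image score produced by some upstream detector; the names DocumentCheckResult, route_application, and REVIEW_THRESHOLD are illustrative assumptions, not taken from FinCEN's alert or any specific vendor product.

```python
from dataclasses import dataclass

# Hypothetical illustration: treating a synthetic-image score as an
# independent signal alongside traditional document checks.
# Names and thresholds are assumptions, not FinCEN guidance or a real vendor API.

REVIEW_THRESHOLD = 0.7  # assumed score above which a human review is triggered


@dataclass
class DocumentCheckResult:
    fields_match: bool      # do the document's fields match the applicant's stated PII?
    template_valid: bool    # does the document layout match a known template?
    synthetic_score: float  # 0.0-1.0 estimate that the image is AI-generated


def route_application(result: DocumentCheckResult) -> str:
    """Decide how to route an onboarding application.

    Traditional checks alone (field match + template check) can be fooled by a
    convincing deepfake document, so the synthetic-image score is consulted
    even when those checks pass.
    """
    if not result.fields_match or not result.template_valid:
        return "reject"            # classic failure: mismatched or malformed document
    if result.synthetic_score >= REVIEW_THRESHOLD:
        return "manual_review"     # document looks valid but may be AI-generated
    return "approve"


if __name__ == "__main__":
    # A document that passes legacy checks but scores high as synthetic
    suspicious = DocumentCheckResult(fields_match=True, template_valid=True, synthetic_score=0.85)
    print(route_application(suspicious))  # -> "manual_review"
```

The point of the sketch is simply that a document which passes legacy field and template checks can still be routed to manual review when the image itself appears machine-generated.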
"GenAI is an exciting new technology that holds great promise for innovation, but it also opens the door to a new level of exploitation by bad actors," said FinCEN Director Andrea Gacki. Deepfake-generated media has become a significant concern for regulators and financial institutions. According to Gacki, financial institutions need to remain vigilant in detecting fraud arising from deepfakes and report suspicious activity to protect the U.S. financial system and consumers against identity theft and fraud.
The Growing Problem of AI-Driven Fraud and Cybercrime
Financial institutions have reported that criminals are also combining deepfake images with stolen personally identifiable information (PII) or fabricated data to create synthetic identities. This combination of manipulated media and fraudulent information severely challenges currently deployed identity verification and authentication methods. The alert fits a broader trend in which generative AI has become a tool not only for identity fraud but for cybercrime in general. Cybercrime activity appears to have increased recently as the use of AI chatbots to write sophisticated malware has become more prevalent; a recent HP Wolf Security report found that AI tools are being used to develop remote access Trojans, a sign that malware is now being built and distributed in new ways.
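Because a synthetic identity is assembled from pieces that may each look plausible on its own, one common-sense mitigation is to require corroboration from independent records before trusting the composite identity. The following sketch is a hypothetical illustration of that idea; the field names, record sources, and the REQUIRED_CORROBORATIONS threshold are assumptions, not part of FinCEN's guidance.

```python
# Hypothetical sketch: corroborating applicant PII against independent records
# before trusting an identity that is "proven" only by a document image.
# Record sources, field names, and the threshold are assumptions for illustration.

from typing import Mapping

REQUIRED_CORROBORATIONS = 2  # assumed minimum number of independent sources that must agree


def corroboration_count(applicant: Mapping[str, str],
                        independent_records: list[Mapping[str, str]]) -> int:
    """Count independent records whose name and date of birth both match the applicant."""
    return sum(
        1
        for record in independent_records
        if record.get("name") == applicant.get("name")
        and record.get("dob") == applicant.get("dob")
    )


def is_likely_synthetic(applicant: Mapping[str, str],
                        independent_records: list[Mapping[str, str]]) -> bool:
    """Flag identities that no independent source corroborates."""
    return corroboration_count(applicant, independent_records) < REQUIRED_CORROBORATIONS


if __name__ == "__main__":
    applicant = {"name": "Jane Doe", "dob": "1990-01-01"}
    records = [{"name": "Jane Doe", "dob": "1985-06-12"}]  # partial, conflicting match only
    print(is_likely_synthetic(applicant, records))  # -> True
```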
As these tools become more accessible, there is growing concern that they will lower the barrier to entry for cybercrime. FinCEN encourages financial institutions to remain vigilant, adopt enhanced verification protocols, and report any suspicious activity associated with deepfakes or synthetic identities. As AI continues to advance, both businesses and consumers will need to keep up with the evolving ways fraudsters exploit it.