AI Deepfake Scam Costs British Engineering Firm ‘Arup’ £20m in Video Call

Author: admin | 21 May 2024

Another instance of AI deepfakes exposes the alarming evolution of cyber deception in a fast-paced digital world, where the line between reality and manipulation grows increasingly blurred. Arup, one of the world’s leading consulting engineering firms, has confirmed that a deepfake video duped an employee into transferring HK$200 million (around £20 million).

The scam unfolded in February, prompting Hong Kong Police to investigate the matter, although the company’s name was not disclosed at the time. The employee, a clerk at the firm, was deceived into sending a wire transfer after receiving a call from an imposter posing as a senior officer of the company.

Deepfakes in video calls are advancing by the day, becoming increasingly realistic and making fabricated identities harder to detect. Cybercriminals often steal or otherwise acquire the identities of a company’s top executives to gain illegitimate access to systems or accounts. Imposters stage fake scenarios by mimicking an executive’s appearance or voice, luring employees into believing they are on a call with a genuine authority.

The company has confirmed that deepfake images and voices of senior officers were used in the deception. Rob Greig, Arup’s Global Chief Information Officer, revealed that the firm has faced a wave of attacks, including deepfakes, stressing how international firms are becoming embroiled in AI-generated media manipulation.

He reportedly added, “Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes. What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months.”

How Can We Collectively Counteract This Menace, and What Role Can Businesses Play?

The emergence of AI deepfake technology poses a serious threat to businesses, underscoring the need for advanced technology that can accurately authenticate the legitimacy of users. As deepfake technology advances, so must the preventive measures. The rise of deepfakes also raises legal and ethical questions, so businesses must navigate those frameworks and considerations while protecting their interests.

Arup’s deepfake video scam serves as a stark reminder of why heightened vigilance and staying ahead of the curve are crucial in counteracting ever-evolving cyber threats.

Businesses need to develop advanced, AI-trained solutions, or adopt secure platforms that separate genuine identities from fake ones, so that only authorized individuals are granted access. Sophisticated software that can authenticate the legitimacy of participants during video or conference calls could effectively stop impersonators from entering systems. The proliferation of deepfake scams raises serious questions about how many businesses have actually deployed measures to spot inconsistencies and run liveness detection checks that confirm a claimed identity belongs to a live, present person.
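To make the idea concrete, the sketch below shows in Python how a liveness-gated identity check on sampled video-call frames might be structured: each frame must pass a presentation-attack (liveness) score, and the face must match an enrolled reference embedding before the caller is treated as genuine. This is a minimal illustration under stated assumptions, not any vendor’s API: the thresholds, the `score_liveness` and `embed_face` callables, and the `enrolled_embedding` are hypothetical placeholders.

```python
# Minimal sketch of a liveness-gated identity check for sampled video-call
# frames. score_liveness, embed_face and enrolled_embedding are hypothetical
# placeholders for a presentation-attack-detection model, a face encoder and
# a stored reference embedding; both thresholds are assumed operating points.

from dataclasses import dataclass
from typing import Callable, List

import numpy as np

LIVENESS_THRESHOLD = 0.90  # assumed minimum liveness score per frame
MATCH_THRESHOLD = 0.70     # assumed cosine-similarity threshold for a face match


@dataclass
class CheckResult:
    live: bool      # every sampled frame passed the liveness check
    matched: bool   # the face matched the enrolled reference embedding

    @property
    def verified(self) -> bool:
        return self.live and self.matched


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_caller(frames: List[np.ndarray],
                  enrolled_embedding: np.ndarray,
                  score_liveness: Callable[[np.ndarray], float],
                  embed_face: Callable[[np.ndarray], np.ndarray]) -> CheckResult:
    """Gate a video-call participant: frames must look live (not replayed or
    synthesised) and must match the enrolled person's face embedding."""
    # Presentation-attack detection: every sampled frame must score as live.
    live = all(score_liveness(frame) >= LIVENESS_THRESHOLD for frame in frames)

    # Face matching: average similarity to the enrolled embedding must clear
    # the match threshold.
    similarities = [cosine_similarity(embed_face(frame), enrolled_embedding)
                    for frame in frames]
    matched = bool(np.mean(similarities) >= MATCH_THRESHOLD)

    return CheckResult(live=live, matched=matched)
```

In practice, the two callables would be backed by a production presentation-attack-detection model and face encoder, and the thresholds would be tuned against the false-accept rate the business is willing to tolerate.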

What Does the SRA Update Say About the Rise of Deepfakes?

The Solicitors Regulation Authority (SRA) has recently updated its ‘Sectoral Risk Assessment’ on anti-money laundering and terrorist financing, outlining the key aspects of the legislation governing AML regulations in the United Kingdom. The document also highlights how the growing reliance on video-based ID verification heightens the likelihood of impersonation attacks using AI deepfakes.

The document largely sheds light on the significance of on-site customer onboarding in accurately establishing user authenticity. Effective customer onboarding also plays a vital role in complying with customer due diligence standards and minimizes the chances of impersonation attacks. The SRA strongly recommends that businesses deploy software solutions that can precisely verify users’ identities during video calls or remote due diligence, in order to detect and prevent the looming threat of deepfakes.

By investing in advanced technologies and implementing effective preventive strategies, businesses can significantly reduce their susceptibility to evolving cyber threats and deepfakes.
