Top 5 Deepfake Incidents You Must Know


Author: admin | 08 Aug 2025

Deloitte’s 2025 Cybercrime Outlook projects that AI-facilitated fraud could cost businesses and consumers more than US$40 billion per year by 2027. Deepfake technology has evolved from a niche entertainment tool into one of the most serious threats to cybersecurity, personal security, and public trust. Where once it was a hobbyist curiosity, AI-facilitated manipulation now enables convincing mimicry of anyone: family members, celebrities, politicians, or business leaders.

Sam Altman, the CEO of OpenAI, recently warned of an imminent fraud crisis and the obsolescence of voice-based authentication systems, which are widely used in the telecom and banking sectors. The number of deepfakes reported globally rose from half a million in 2023 to nearly 8 million in 2025, a 1500% increase in just two years, according to cybersecurity firm Surfshark. As incidents spread to the social, financial, and even physical realms, it is essential to understand where and how they occur.

This blog showcases five of the most underreported deepfake cases in 2024-2025, exposing the latest developments in deception and manipulation.


Malaysian VIP Investment Deepfakes: Politicians’ Faces Are Used to Drive Global Scams

In mid-2025, the Royal Malaysia Police dismantled a sophisticated network of con artists who were using AI-generated videos of prominent politicians and business figures, including MP Teresa Kok, Elon Musk, Donald Trump, and Malaysian Prime Minister Anwar Ibrahim, to promote phony investment schemes. Spread via Facebook, WhatsApp, and Telegram, the deepfakes showed these influential people endorsing fraudulent trading platforms that promised extraordinary returns.

Persuaded by the legitimacy such well-known names conveyed, victims were tricked into sending large sums to foreign accounts, and many lost their entire life savings. Police emphasised that the deepfakes were highly realistic, with the perpetrators often reproducing speech patterns and settings that matched the politicians’ actual interviews.

With evidence suggesting the involvement of transnational cybercrime networks, researchers dubbed the operation one of the most well-planned deepfake scams in Southeast Asia. The case underscores how deepfakes erode public confidence in leaders, and it highlights the need for strong content verification systems and international law enforcement cooperation to combat AI-facilitated fraud.

An Elon Musk Deepfake-Driven Romance Scam 

On January 24, 2025, a woman in Lafayette, Louisiana, was targeted by a video impersonation of Elon Musk and lost more than $60,000 in the ordeal. The scam started innocently enough on Threads, where she had a pleasant chat with someone she believed was Musk. It eventually escalated to a video call in which she was presented with a highly realistic AI-generated replica of the billionaire. Unlike most deepfake scams, there was no voice cloning; the visuals alone were convincing enough to eliminate all skepticism.

The imposter promised to send her a Tesla and a substantial cash gift through FedEx, but said she first had to pay processing and logistics fees in advance through gift cards and cash payments. She exhausted her life savings within days before friends intervened and alerted authorities. Federal investigators confirmed that the money was unrecoverable.

This case demonstrates the emotional potency of deepfake video: even without voice cloning, fabricated imagery can be used to exploit trust and affection, particularly in a love-stricken context such as romance scams.

George Clooney Video Deepfake Scam

In January this year, an Argentine woman was scammed out of more than €10,000 after conmen employed a deepfake video of Hollywood actor George Clooney to establish a parasocial bond and trick her into putting money into a fake business. For six weeks, the victim chatted and called with what seemed like a friendly, entertaining Clooney. The deepfake video mimicked Clooney’s characteristic facial movements, eye blinking, and speech rhythm with unnerving precision, making the exchange seem real. The scammers sent her videos in which Clooney appeared to address her directly, lending credence to their plea for money to fund a humanitarian aid mission.

This case demonstrates the growing sophistication of celebrity deepfake scams, which exploit fans’ sentimental attachment to well-known figures. Unlike typical one-time frauds, the scam employed gradual grooming over time, steadily building the victim’s confidence until she felt comfortable parting with substantial sums of money. It is a sobering reminder that video, once the gold standard of authenticity, can no longer be assumed genuine until verified, especially when tied to financial or personal demands.

The Yoav Gallant Deepfake Broadcast 

On the evening of April 14, 2025, viewers of Israel’s Channel 14 witnessed a stunning betrayal of trust: the network inadvertently broadcast a deepfake video of former Defense Minister Yoav Gallant in the middle of a live news segment. The video, allegedly produced by Iranian agents, showed Gallant speaking Hebrew with a conspicuously thick Persian accent and making politically loaded remarks about U.S. military actions. Within seconds, the anchor sensed something was amiss and cut off the broadcast, declaring the video “cooked” and having nothing to do with Gallant.

The network subsequently issued a public apology. The attack was the first reported incident of a deepfake broadcast live on national television, underscoring how easily state-sponsored propaganda can infiltrate trusted channels of information. More significantly, it showed that visual deception alone, without any synthetic voice-over work, can subvert public confidence and ignite geopolitical tensions on an unprecedented level.

Molly-Mae Hague’s TikTok Deepfake

In June 2025, social media influencer Molly-Mae Hague publicly denounced a deepfake TikTok video that used her image to advertise an upmarket perfume named Arabiyat Prestige Nyla. The video convincingly replicated her appearing to promote the product. Thousands of her fans were deceived into purchasing the perfume from unlicensed resellers, resulting in numerous complaints and financial losses. Reiterating that she had nothing to do with the product, Hague warned her fans about the hoax on Instagram.

This case highlights a rising e-commerce threat: deepfake celebrity endorsements that trade on influencer trust, leaving both fans and the influencers themselves to suffer the consequences. It also underscores a sobering fact: even legitimate old content can be repurposed into believable scams.

Trends Found in the Most Recent Deepfake Attacks

  • Taken as a whole, these cases illustrate how deepfakes have grown from web curiosities into tangible threats, spanning financial fraud, celebrity impersonation, political misinformation, and manipulation of public infrastructure. Fraudulent losses are expected to reach $40 billion per year in the U.S. by 2027, propelled by the democratization of AI tools that put exceptionally convincing video clones within anyone’s reach.
  • Experts emphasize that individual awareness is the best shield. Families can set up emergency verification codes, individuals can restrict public voice and video samples, and companies need to invest in biometric liveness detection and AI-driven deepfake detectors. Governments, meanwhile, are racing to close legal loopholes, with measures such as the Take It Down Act in the U.S. and increased digital impersonation penalties in the EU seeking to safeguard victims.

How Facia Helps Businesses Find and Stop Deepfakes

  • Deepfakes are reshaping the digital threat landscape. From AI-created celebrity impersonations to politically motivated deepfake videos and influencer brand abuse, these attacks underscore the pressing need for real-time deepfake detection.
  • Facia is eliminating deepfakes everywhere they show up, not just at the door. Businesses can monitor and flag manipulated images on social media, messaging apps, and digital content channels with its off-site deepfake detection solution. Facia helps brands safeguard their reputation before the harm spreads, whether it’s a phony executive video or a widely shared AI-generated controversy.
  • Trust is crucial in today’s environment, and Facia empowers you to take control of it. Stay ahead of online fraud. Use Facia to identify deepfakes instantly.

Frequently Asked Questions

How has the frequency of deepfake incidents changed in recent years?

Deepfake incidents have surged in recent years due to advancements in AI and easier access to deepfake tools. According to reports, the number of cases has increased significantly year over year, doubling or even tripling in certain sectors.

What industries or sectors have been most targeted by recent deepfake incidents?

The industries that have been most impacted include politics, entertainment, cybersecurity, finance, and corporate fraud. These sectors are vulnerable to reputational harm, disinformation, and impersonation.

How effective are current deepfake detection methods in combating the rising number of incidents?

Many deepfakes can be detected by current detection tools, but sophisticated AI-generated content frequently evades them. Effectiveness improves when AI detection is combined with biometric verification and real-time content moderation.
