How Deepfakes Are Used to Bypass KYC Onboarding: A Technical Breakdown


Author: admin | 27 Mar 2026

KYC onboarding exists to keep fraudsters out, and generative AI is now being turned directly against it. The pandemic made remote identity verification the default: financial institutions, crypto platforms, lenders, and digital services replaced face-to-face checks with online onboarding flows built on selfies, document uploads, and liveness prompts. That shift made onboarding genuinely accessible to users, but it also opened new attack surfaces that demand new defenses.

Synthetic faces, virtual camera tools, and API injection techniques now make it possible to defeat every component of an eKYC pipeline using commodity software. In January 2026, the World Economic Forum's Cybercrime Atlas published the report Unmasking Cybercrime: Strengthening Digital Identity Verification against Deepfakes, which examined 17 face-swapping tools and eight camera injection tools and found that most could bypass standard biometric onboarding checks. Criminals are actively combining these methods to get through live KYC verification at financial institutions worldwide.

This post breaks down how these attacks work, why they are hard to detect, and what a modern detection stack needs in order to counter them.

What KYC Onboarding Actually Checks

A typical eKYC flow runs three checks in sequence: document verification (is the ID authentic, with valid machine-readable data?), liveness detection (is a live person present, rather than a photo or replayed video?), and face matching (does the person on camera match the photo on their identification document?).

Each check has a corresponding deepfake attack. A fraudster does not need to defeat all three; defeating the single weakest check is often enough to sail through onboarding.
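Conceptually, the onboarding gate is just a conjunction of the three checks. This minimal Python sketch (field and function names are hypothetical, for illustration only) shows why defeating one check while satisfying the other two legitimately is enough:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    document_ok: bool    # document verification result
    liveness_ok: bool    # liveness detection result
    face_match_ok: bool  # face match result

def onboard(sub: Submission) -> bool:
    # Onboarding succeeds only when every check passes -- which also
    # means an attacker only needs to spoof whichever check is weakest,
    # since the others can be satisfied with legitimate-looking input.
    return sub.document_ok and sub.liveness_ok and sub.face_match_ok
```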

The Three Attack Vectors

Before diving into each attack vector individually, here’s how a typical deepfake KYC bypass unfolds in practice: 


1. Synthetic Identity Creation

Fraudsters use open-source generative models such as StyleGAN to create photorealistic synthetic faces, then embed them into template-built fake identity documents. Because the identity never existed, there is no fraud history, no watchlist entry, and no breach record to trigger an alert. The forged document passes standard OCR and visual checks because its structure, typography, and MRZ data all follow the correct formatting rules.
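Part of the reason forged documents clear format validation is that MRZ check digits are a public, deterministic algorithm (ICAO Doc 9303), so anyone can compute self-consistent values. A short sketch of the check-digit computation:

```python
def mrz_check_digit(field: str) -> str:
    """Compute an ICAO 9303 check digit over an MRZ field.

    Characters are valued 0-9 for digits, 10-35 for A-Z, and 0 for
    the filler '<'; weights cycle through 7, 3, 1; result is mod 10.
    """
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            val = int(ch)
        elif ch.isalpha():
            val = ord(ch.upper()) - ord("A") + 10
        elif ch == "<":
            val = 0
        else:
            raise ValueError(f"invalid MRZ character: {ch!r}")
        total += val * weights[i % 3]
    return str(total % 10)

# A fabricated document number with a self-consistent check digit
# passes this structural test -- which is exactly why format
# validation alone cannot prove a document is genuine.
```

The function reproduces the check digits on the ICAO specimen passport (document number L898902C3, check digit 6), illustrating that structural validity and authenticity are entirely separate questions.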

The Deloitte Center for Financial Services projects that generative AI will drive U.S. fraud losses from $12.3 billion in 2023 to $40 billion by 2027, with synthetic identity fraud during onboarding as one of the primary vectors.

2. Face Swap and Virtual Camera Injection

The attacker inserts a virtual camera driver into the KYC selfie capture flow: software that intercepts the video pipeline and feeds a deepfake stream in place of the real camera output. Passive liveness checks, which only confirm that a frame looks like a human face, cannot distinguish a real face from a realistic synthetic overlay. The WEF report tested this against live front-camera selfie KYC flows and found that virtual camera injection bypasses work against a wide range of active liveness implementations. Fraudsters also manipulate the video feed to fake technical problems, buying time to delay or restart sessions.
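On the detection side, one coarse (and easily evaded) signal is the reported capture-device name. The blocklist below is illustrative only; device names are trivial to spoof, which is exactly why deeper injection attack detection, discussed later, matters:

```python
# Illustrative names of common virtual camera drivers -- not exhaustive,
# and real deployments fingerprint the driver stack at a much lower level.
KNOWN_VIRTUAL_CAMERAS = (
    "obs virtual camera",
    "manycam",
    "snap camera",
    "droidcam",
    "v4l2loopback",
)

def looks_like_virtual_camera(device_name: str) -> bool:
    """Weak heuristic: flag capture devices whose reported name
    matches a known virtual camera driver."""
    name = device_name.lower()
    return any(sig in name for sig in KNOWN_VIRTUAL_CAMERAS)
```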

3. API-Level Injection Attacks

The most advanced vector bypasses the camera entirely. The attacker intercepts the API call between the user’s app and the KYC verification service, substituting a forged image directly into the data stream after the capture point. Device checks pass because a real person is physically operating the device. The document check passes because the injected image is clean. Yet the selfie and the document can show two different people, and standard presentation attack detection (PAD) never notices, because it does not monitor substitutions that happen in transit. The WEF report identifies injection as a growing priority threat, noting that attackers are already shifting their tooling toward it as active liveness adoption increases. CEN/TS 18099 now defines injection attack detection (IAD) requirements to address this gap.
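One mitigation in the spirit of IAD is to bind the image to the capture moment cryptographically: the client SDK tags the frame bytes with a device-bound key at capture time, and the server rejects any payload whose tag no longer matches, such as an image swapped into the API call in transit. A hedged sketch (key provisioning and device attestation are assumed solved out of band):

```python
import hashlib
import hmac

def sign_capture(payload: bytes, device_key: bytes) -> str:
    """Client side: tag the image bytes at the moment of capture."""
    return hmac.new(device_key, payload, hashlib.sha256).hexdigest()

def verify_capture(payload: bytes, tag: str, device_key: bytes) -> bool:
    """Server side: reject payloads whose tag doesn't match -- e.g. a
    forged image substituted after the capture point."""
    expected = hmac.new(device_key, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, tag)
```

An attacker who replaces the image mid-stream cannot recompute the tag without the device key, so the substitution fails verification even though the injected image itself looks perfectly clean.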

Why Standard Liveness Checks Fall Short

Liveness detection was a valid defense back when attackers relied on printed photographs. Modern deepfake tools, however, replicate the blink patterns, head turns, and micro-expressions that active liveness prompts ask for. These weaknesses don’t just exist in theory; they show up as observable patterns during onboarding:

Red flags of deepfake KYC fraud.

The FBI’s IC3 2024 Annual Report recorded $16.6 billion in reported internet crime losses in 2024, a 33% year-over-year increase, with AI-enabled identity attacks among the contributing factors.

According to a UNESCO study on fake media, up to 46% of fraud professionals have encountered synthetic identity fraud.

There is a gap organizations tend to overlook: the arms race is asymmetric. A fraudster can spin up a new synthetic-face generation method in two days, while a KYC vendor updating its detection model works on a much longer release cycle. Fraudulent accounts are created in that window. And single-frame or short-clip liveness checks only confirm something is moving; they don’t confirm that the video signal is authentic, that the capture path hasn’t been intercepted, or that the content wasn’t generated by a model.

What a Proper Detection Stack Needs

Presentation attack detection across video sequences

Analysis must span multiple frames to catch unnatural motion, temporal inconsistencies, and lighting that doesn’t match the environment. A single frame carries too little signal.
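As a toy illustration of why multiple frames help: even the crude temporal statistic below (flat grayscale frames, illustrative threshold) separates a frozen, replayed image from a feed with natural frame-to-frame variation, something no single-frame check can do:

```python
def mean_abs_frame_diff(frames):
    """Average per-pixel absolute difference between consecutive frames.

    `frames` is a list of equal-length flat grayscale pixel lists.
    A live capture shows continual small variation (sensor noise,
    micro-movement); a perfectly static sequence is suspicious.
    """
    diffs = []
    for prev, curr in zip(frames, frames[1:]):
        diffs.append(sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr))
    return sum(diffs) / len(diffs)

def suspiciously_static(frames, threshold=0.5):
    # Threshold is illustrative; production systems model motion,
    # texture, and lighting jointly rather than a single scalar.
    return mean_abs_frame_diff(frames) < threshold
```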

Injection attack detection at the capture path

An analysis engine cannot flag a biometric signal that was tampered with before it ever reached the engine. IAD verifies the input originated from a legitimate, untampered hardware source; both the WEF Cybercrime Atlas and CEN/TS 18099 treat this as non-negotiable.

Document forensics

Synthetic documents leave traces: metadata that doesn’t match a genuine camera capture, rendering artifacts, and lighting mismatches between the embedded photo and the document surface. A stack that analyzes only faces misses the synthetic-document half of the attack.
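A sketch of what rudimentary metadata forensics might look like. The tag names and software strings here are illustrative, and real document forensics goes far deeper (pixel-level rendering analysis, lighting estimation):

```python
def metadata_red_flags(exif: dict) -> list:
    """Flag metadata patterns inconsistent with a genuine camera capture.

    `exif` is a dict of already-extracted EXIF tags; tag names and the
    editor/generator list are illustrative, not exhaustive.
    """
    flags = []
    # Genuine captures normally record the camera hardware.
    if not exif.get("Make") or not exif.get("Model"):
        flags.append("missing camera make/model")
    # Editing or generation software in the Software tag is a weak
    # but cheap signal of a manipulated or fabricated image.
    software = (exif.get("Software") or "").lower()
    if any(tool in software for tool in ("photoshop", "gimp", "stable diffusion")):
        flags.append(f"editing/generation software: {exif['Software']}")
    if "DateTimeOriginal" not in exif:
        flags.append("missing capture timestamp")
    return flags
```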

Model freshness

Generative tooling evolves continuously. A model trained on two-year-old datasets will miss attacks built with tools released since. Detection systems need continuous retraining on current fraud attempts, not static benchmark data.

The Regulatory Requirement

FinCEN’s alert FIN-2024-Alert004 instructs financial institutions to reference the key term “FIN-2024-DEEPFAKEFRAUD” when filing SARs involving suspected deepfake media in customer onboarding. The alert lists specific red-flag indicators, including customers who report repeated technical issues during remote verification and identity documents that are inconsistent with the customer’s device or geographic data.

The EU AI Act subjects biometric verification systems classified as high-risk to mandatory conformity assessments. eIDAS 2.0 requires high-assurance liveness detection for European KYC processes.

How Facia Addresses These Attacks

Facia’s deepfake detection runs on Morpheus 2.0. On Meta’s DFDC dataset, the system achieved perfect detection across 2,100 videos spanning eight manipulation methods, and 99.6% accuracy across more than 100,000 assets. Against current generative tools such as GetImg and Dream, it scored 89.01%, a figure that better reflects present-day attack methods than legacy benchmark datasets.

Facia’s customer onboarding solution combines liveness detection, deepfake analysis, and photo ID matching in a single flow. Liveness completes in under one second with a false acceptance rate of 1 in 100,000,000 and a false rejection rate below 1%. Its iBeta Level 2 certification, conducted by iBeta Quality Assurance (a NIST/NVLAP-accredited independent biometric testing lab), records 0% APCER across 56+ spoofing attack types on both Android and iOS. For workflows where a live camera isn’t practical, single-image liveness delivers 98.8% accuracy from a single static frame.

For API-level injection, Facia’s client-side SDK secures the camera feed at the device level before data enters the verification pipeline. Server-side liveness is blind to upstream substitutions; the SDK removes that blind spot. On documents, AI image detection scans submitted IDs for synthetic generation signatures, rendering artifacts, metadata mismatches, and lighting inconsistencies, catching AI-fabricated documents before they reach face matching.

The platform integrates via REST API and native SDK for Android and iOS. Teams evaluating current stack coverage against these attack vectors can test Facia’s deepfake detection API for KYC directly.

Book a demo to see how Facia addresses these vectors in a live environment.

Frequently Asked Questions

What makes KYC onboarding specifically vulnerable to deepfake attacks, and how do fraudsters exploit it?

KYC onboarding relies on digital inputs like selfies and document uploads, which can be easily generated or manipulated using AI. Fraudsters exploit this through synthetic identities, deepfake video feeds, and API-level data injection to bypass verification checks.

How is a biometric injection attack different from a standard deepfake presentation attack during KYC?

Presentation attacks manipulate what the camera sees using deepfake videos, targeting liveness detection. Injection attacks bypass the camera entirely by altering data at the API level, making them invisible to standard liveness checks.

Are there regulatory requirements that specifically address deepfake risks in KYC?

Yes, regulations like FinCEN guidelines, NIST SP 800-63B, and the EU AI Act require stronger identity verification and liveness controls. Standards like CEN/TS 18099 also introduce requirements for detecting injection attacks in KYC systems.
