
What Most Liveness Vendors Get Wrong About Deepfake Defense

Author: admin | 13 Mar 2026

For KYC and identity verification teams, the liveness check has been the last line of defense against fraudulent onboarding. That position is under pressure. Deepfake attacks on liveness detection have outpaced the standards most vendors were built around, and the organizations carrying the risk are relying on certifications that don’t reflect the actual threat.

Passing a certification test and defending against a real attack are two different things. Most liveness vendors have confused the two, and that confusion is now showing up in fraud losses.

The identity verification market spent the last decade optimizing for one threat: someone holding a photo or wearing a mask in front of a camera. Certifications were built around it. Presentation attacks became a solved problem.

Attackers moved on. They bypass the camera entirely, injecting synthetic video into verification pipelines using generative models that no PAD system was built to catch. The attack surface shifted; most liveness vendors are still defending the old one. According to Gartner, by 2026, 30% of enterprises will no longer consider standard identity verification solutions reliable in isolation.

Why PAD Certification Doesn’t Cover Deepfake Attacks on Liveness Detection

ISO/IEC 30107-3 establishes the testing requirements for presentation attack detection (PAD) systems and serves as the foundation for iBeta Level 1 and Level 2 certification. It was designed to test whether a liveness solution can differentiate a real human face from specific kinds of physical artifacts: a photograph, a mask, a screen replay, or a 3D model.

What it was not designed for is injection attacks. The standard provides no way to validate performance against deepfake content created by generative AI; those attack categories did not exist when it was written, and its testing protocols have yet to catch up with the current threat.

Most liveness vendors present their iBeta certification as primary proof that they can detect deepfakes. Buyers see “iBeta Level 2 compliant” and reasonably assume the system has been tested against the attacks making headlines. It hasn’t. The certification confirms the system stops someone holding a mask up to a camera. It says nothing about what happens when an attacker injects a synthetic face into the pipeline instead.

CEN/TS 18099, the European technical specification for Injection Attack Detection, covers exactly what ISO 30107-3 does not. A forthcoming ISO 25456 will formalize global testing procedures for injection-resistant systems. The industry is acknowledging the gap. Most vendors haven’t filled it yet.

How to Evaluate Your Liveness Vendor: Five Questions Worth Asking

Identity verification and KYC teams need to ask sharper questions when selecting a liveness detection vendor. The right security questions reveal whether a solution actually protects against present-day fraud.

Five Questions to Ask Your Liveness Vendor

  • Is your iBeta Level 2 certification specific to presentation attacks, and what does it cover for injection attacks? Presentation attack detection and injection attack detection are tested under different standards. A vendor who can’t distinguish between the two hasn’t built for both.
  • How does your system detect virtual camera software at the pipeline level? A server-side liveness solution receives a video stream that it cannot verify as genuine. Client-side, SDK-based deployment closes that exposure architecturally. If a vendor can’t explain where injection is caught, it probably isn’t.
  • When was your deepfake detection model last retrained, and against which tools? Generative models used in fraud operations change constantly. A detection model trained on last year’s attack data carries blind spots against tools released since then. Ask for dates and tool names, not a general statement that retraining happens.
  • Can you provide real-world accuracy results, not just lab benchmarks? Closed-dataset testing doesn’t reflect field performance. Deepfake scams targeting identity verification rarely use the same tools vendors test against internally. Published cross-dataset results are the only meaningful performance signal.
  • What is your published false acceptance rate under independent testing? FAR figures show how often the system passes what it shouldn’t. An independently verified FAR, not a self-reported one, is the only number worth using for liveness vendor comparison. Any vendor unwilling to publish it is telling you something.
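On that last question, the arithmetic behind an FAR claim is simple, which is exactly why a vendor’s refusal to publish one is telling. A minimal sketch of how an independent tester would compute it; the attempt counts below are hypothetical, not any vendor’s published figures:

```python
def false_acceptance_rate(attack_results: list[bool]) -> float:
    """Fraction of attack attempts wrongly accepted as genuine.

    Each entry is True if the system ACCEPTED the attack (a failure),
    False if the attack was correctly rejected.
    """
    if not attack_results:
        raise ValueError("need at least one attack attempt")
    return sum(attack_results) / len(attack_results)

# Hypothetical independent test run: 100,000 injected-deepfake attempts,
# 3 of which slipped through.
results = [True] * 3 + [False] * 99_997
print(f"FAR = {false_acceptance_rate(results):.5%}")
```

The number only means something when the attack set is chosen by the tester, not the vendor, and when it includes injection attacks, not just presentation attacks.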

How Deepfake Scams Exploit the Identity Verification Gap

Attackers use virtual camera software to inject synthetic or pre-recorded video into identity verification systems, bypassing security measures that depend on the camera. The fake stream appears real, allowing deepfakes to slip past liveness detection and biometric security checks.


Deepfake fraud has become a worldwide cybersecurity threat. TransUnion’s H2 2025 report states that companies suffered $534 billion in fraud losses during that period. Criminal tools are cheap: synthetic identities cost $15, deepfake images $10 to $50, and face-swap software $1,000 per month. Deepfake-as-a-Service platforms push the problem further, handing cybercriminals ready-made tooling that identity verification systems were never designed to face.

Standard PAD systems, tested only against physical presentation attacks, cannot detect injection at the pipeline level. The certifications were designed around specific threats, and today’s attacker isn’t among them.
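To make the pipeline-level gap concrete, here is a deliberately naive first-pass check: flag capture devices whose reported name matches known virtual-camera software. The product names are real software but the list is illustrative, and device names are trivially spoofed, which is precisely why serious injection detection has to go deeper (driver signatures, frame-level forensics) than a sketch like this:

```python
# Illustrative-only heuristic: a name check is the shallowest possible
# injection defense, since an attacker controls the reported device name.
KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",      # OBS Studio
    "manycam virtual webcam",  # ManyCam
    "snap camera",             # Snap Camera
    "xsplit vcam",             # XSplit
}

def looks_like_virtual_camera(device_name: str) -> bool:
    """Flag a capture device whose name matches known virtual-camera software."""
    name = device_name.strip().lower()
    return any(vc in name for vc in KNOWN_VIRTUAL_CAMERAS)

print(looks_like_virtual_camera("OBS Virtual Camera"))  # True
print(looks_like_virtual_camera("Integrated Webcam"))   # False
```

A vendor whose injection story amounts to a blocklist like this has not solved the problem; the check has to live in the client-side capture path where the stream's provenance can actually be attested.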

The Lab Accuracy Problem Liveness Vendors Don’t Talk About

The second gap receives far less scrutiny: detection systems perform very differently in controlled tests than they do against real-world attacks.

A 2024 study published in ScienceDirect found that detection accuracy drops by 50% when models built on closed laboratory datasets are tested on actual deepfake material. Invisible changes to synthetic media, known as adversarial perturbations, caused a further 40% decline in detection accuracy in cross-dataset testing.

Detection systems rely on patterns learned from their training data. When fraudsters create deepfakes using new generative AI models, those known artifacts disappear. Without familiar signatures, the liveness or detection system treats the synthetic face as real. This limitation allows advanced deepfake attacks to bypass identity verification and biometric security checks.

Vendors who don’t continuously retrain their models against emerging generative tools are essentially selling a static defense against a dynamic attack. The certification was accurate at the time of testing. The real-world performance may be significantly weaker by the time an attacker shows up.

This is not a minor caveat. It is a fundamental architectural problem with how most liveness deepfake fraud prevention tools are built and maintained.

What Genuine Deepfake Fraud Prevention Tools Actually Require

The gap between what most liveness vendors offer and what genuine deepfake fraud prevention tools actually require comes down to three things.

Injection attack detection is built into the pipeline: PAD alone is insufficient for any organization operating a remote verification flow. The system needs to verify not just whether the face appears live but whether the video stream itself is authentic, analyzing for the forensic signatures of virtual camera software, synthetic media, and pipeline manipulation. These are architecturally separate detection problems requiring separate detection layers.

Continuous adversarial retraining: A detection model is only as current as the attack data it was last trained on. Against a threat landscape where new generative models appear constantly, and attack tools are sold as a service on criminal markets, static models degrade. Genuine deepfake defense requires a live training pipeline, one that ingests new attack patterns and updates detection logic on an ongoing basis, not just at certification intervals.
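A buyer can turn the retraining question into a check rather than a conversation: ask for the model’s last retrain date and compare it against a staleness budget. The 90-day window and the dates below are hypothetical assumptions for the sketch, not a standard:

```python
from datetime import date, timedelta

# Assumed staleness budget -- how long a detection model may go without
# retraining before it should be treated as a static defense.
MAX_STALENESS = timedelta(days=90)

def model_is_stale(last_retrained: date, today: date,
                   max_staleness: timedelta = MAX_STALENESS) -> bool:
    """True if the model has gone longer than the budget without retraining."""
    return today - last_retrained > max_staleness

# Hypothetical vendor answers, checked against this article's date.
print(model_is_stale(date(2025, 6, 1), today=date(2026, 3, 13)))   # True
print(model_is_stale(date(2026, 1, 20), today=date(2026, 3, 13)))  # False
```

The point is not the specific window; it is that “we retrain regularly” becomes verifiable the moment the vendor commits to a date.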

Real-world testing, not just lab benchmarks: The third gap is rarely discussed but arguably the most operationally significant: lab benchmarks and real-world performance are not the same number, and most vendors only publish one of them. Accuracy claims derived from closed datasets don’t hold when confronted with generative models that the system was never trained on. Any vendor serious about deepfake fraud prevention should be able to show cross-dataset generalization results and demonstrate performance against adversarial attack variants, not just the synthetic faces their own researchers generated for internal testing.

The organizations buying liveness solutions deserve to know which of these three capabilities they’re actually getting. 

Liveness Vendors’ Deepfake Vulnerabilities Start With the Wrong Question

The identity verification industry has been asking, “Can this system detect a fake face?” The question that needed to be asked is “Can this system detect a fake video stream?” The gap between those two questions is where fraud is happening right now, against organizations that believe they’re protected because their vendor passed a certification designed for a different era.

Facia’s 3D Liveness Detection addresses both layers: iBeta Level 2 certification on presentation attacks, and SDK-based client-side deployment on injection attacks, removing server-side pipeline exposure architecturally. A false acceptance rate of 1 in 100 million leaves no meaningful margin for error. Deepfake Detection adds a dedicated media authentication layer built on a continuously retrained algorithm, tested against current-generation generative AI tools with published accuracy results most vendors simply cannot produce: 89.91% real-world detection accuracy. Single Image Liveness delivers up to 98.8% accuracy on a single real-world image, not a lab sample.

The right question to ask any liveness vendor isn’t whether they’re certified. It’s what they’re certified against, and whether their accuracy holds when the attackers change tools.

See how Facia’s liveness detection stops deepfake attacks in real identity verification flows. Schedule a demo today.

Frequently Asked Questions

Why are accuracy claims from liveness vendors misleading?

Accuracy claims are often based on controlled lab tests rather than real-world attack scenarios. They may also ignore sophisticated spoofing methods, making the reported performance appear better than it actually is.

Why do liveness vendors struggle with advanced identity fraud?

Advanced fraud uses techniques like deepfakes, high-quality masks, or injected video streams that bypass basic liveness checks. Many vendors rely on limited detection models that cannot keep up with rapidly evolving fraud tactics.

How can businesses verify a liveness vendor's security claims?

Businesses should request independent third-party testing results and certifications. They can also conduct their own penetration tests or pilot programs using real fraud scenarios to evaluate effectiveness.
