
UK Universities Experiencing AI Deepfake Fraud in Admissions Process

Author: teresa_myers | 14 Feb 2025

Universities in the United Kingdom are encountering AI deepfake fraud in their admissions processes, with applicants using AI-generated deepfakes to try to fool automated interview systems. Recent figures from admissions software firms revealed around 30 cases of deepfake use across 20,000 interviews conducted in January, about 0.15% of the total. Phoebe O’Donnell, Enroly’s head of services, says some applicants used sophisticated tricks, for instance overlaying a fake face that follows their own natural expressions and movements. Although the number of incidents is small, such methods raise concerns about the integrity of university admissions.

The problem is particularly pressing for UK universities, which must comply with strict regulations covering international students: where visa refusal rates rise above 10 percent, a university's student sponsorship license can be withdrawn. Most universities rely on automated interview systems to screen applicants before issuing Confirmation of Acceptance for Studies (CAS) documents, a key prerequisite for student visas.

Enroly also reported that 1.3% of all interviews showed signs of deception, including impersonation and off-camera assistance. To combat such fraud attempts, the company has reinforced its safeguards with facial recognition, passport matching, and real-time checks.
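
To make the idea of passport matching concrete, the sketch below shows the general principle only, not Enroly's implementation: a frame from the live interview and the passport photo are each reduced to an embedding vector by a face-recognition model, and the identity is accepted only if the two vectors are sufficiently similar. The 128-dimensional vectors and the 0.6 threshold here are purely illustrative assumptions.

```python
# Generic sketch of a face-to-passport matching check using cosine
# similarity between face embeddings. The random vectors stand in for
# the output of a trained face-recognition model, and the 0.6 threshold
# is hypothetical; real systems use calibrated thresholds.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_passport(live_embedding: np.ndarray,
                     passport_embedding: np.ndarray,
                     threshold: float = 0.6) -> bool:
    """Accept only if the live capture and the passport photo embeddings
    are sufficiently similar."""
    return cosine_similarity(live_embedding, passport_embedding) >= threshold

# Toy example with random vectors standing in for model outputs.
rng = np.random.default_rng(0)
live = rng.normal(size=128)
passport = live + rng.normal(scale=0.1, size=128)   # near-identical face
stranger = rng.normal(size=128)                      # a different person

print(matches_passport(live, passport))   # True  (high similarity)
print(matches_passport(live, stranger))   # False (low similarity)
```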

The emergence of deepfake fraud in university admissions also points to wider identity verification challenges across sectors. Even WPP’s chief executive was targeted in 2023 with an AI-cloned voice and injected deepfake video during a call, illustrating how sophisticated synthetic media has become.

To counter such attacks, biometric liveness detection is rapidly becoming a necessity, using physiological signals such as blinking and facial micro-expressions to confirm that a real human is present. Meanwhile, Enroly continues to work closely with educational institutions to strengthen fraud detection capabilities as AI-based manipulation techniques grow ever more sophisticated.
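
As an illustration of what one such liveness signal looks like, the sketch below implements blink detection with the eye aspect ratio (EAR), a widely used heuristic: the EAR drops sharply when the eye closes, so a brief dip below a threshold counts as a blink. The landmark layout, threshold, and frame rule are illustrative assumptions; production systems combine many such signals.

```python
# Minimal sketch: blink detection via the eye aspect ratio (EAR).
# Landmark ordering follows the common 6-point eye model (p1..p6 around
# the eye); the 0.21 threshold and 2-frame rule are illustrative
# assumptions, not values from any named product.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of (x, y) eye landmarks."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # p2 - p6
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # p3 - p5
    horizontal = np.linalg.norm(eye[0] - eye[3])   # p1 - p4
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.21, min_consecutive=2):
    """Count blinks in a per-frame EAR series: a blink is a run of at
    least `min_consecutive` frames where the EAR falls below threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_consecutive:
                blinks += 1
            run = 0
    if run >= min_consecutive:
        blinks += 1
    return blinks

# Example landmarks for an open eye (x, y), purely illustrative.
open_eye = np.array([[0, 2], [2, 4], [4, 4], [6, 2], [4, 0], [2, 0]], dtype=float)
print(round(eye_aspect_ratio(open_eye), 2))  # -> 0.67

# Synthetic per-frame EAR trace in which the eye closes briefly twice.
ears = [0.30, 0.31, 0.18, 0.15, 0.29, 0.30, 0.16, 0.14, 0.17, 0.32]
print(count_blinks(ears))  # -> 2
```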