
Meta Adds Facial Recognition to Combat Fake Celeb Scams

Author: teresa_myers | 22 Oct 2024

Meta, the parent company of Facebook, is taking action against fake celebrity scams using facial recognition technology. The technology targets "celeb-bait" ads, a type of fraud that uses celebrity deepfakes to deceive people. These misleading ads feature celebrities and trick users into clicking through to fake websites, where scammers attempt to steal personal information or money from unsuspecting victims.

Meta's new strategy compares the images used in suspect ads against the celebrity's real photos from their official Facebook or Instagram accounts. If Meta confirms the ad is a scam, it is blocked immediately. The company has not shared details about how common these frauds are on Instagram or Facebook, but the move is an important step toward improving user safety.
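Meta has not published the technical details of this comparison, but at a high level it resembles standard face matching: compute an embedding for the face in the ad, compute embeddings for the celebrity's official profile photos, and flag the ad if the similarity crosses a threshold. The Python sketch below is purely illustrative; the threshold value, function names, and random stand-in embeddings are assumptions, not Meta's actual pipeline.

```python
# Illustrative only: not Meta's implementation.
# Assumes face embeddings (e.g. 512-dimensional vectors) have already been
# extracted from the ad image and the celebrity's official profile photos.
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed cutoff; real systems tune this carefully


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def uses_celebrity_face(ad_embedding: np.ndarray,
                        official_embeddings: list[np.ndarray]) -> bool:
    """Flag the ad if its face matches any of the celebrity's official photos."""
    scores = [cosine_similarity(ad_embedding, ref) for ref in official_embeddings]
    return max(scores, default=0.0) >= SIMILARITY_THRESHOLD


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ad_face = rng.normal(size=512)          # stand-in for the ad's face embedding
    profile_faces = [rng.normal(size=512) for _ in range(3)]  # official photos
    print("Matches official photos:", uses_celebrity_face(ad_face, profile_faces))
```

In practice, an ad flagged this way would still go through Meta's confirmation step before being blocked, as described above.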

Meta Balances AI and Privacy Concerns

With roughly 3.3 billion people using Meta's apps daily, the company is leaning on AI to address privacy concerns and manage non-compliance at scale. Facial recognition will help Meta handle the huge volume of spam reports and reduce accidental problems such as unintended account suspensions. The AI-based celeb-scam detection system will also block scams that rely on fake celebrity images. In addition, the company is testing facial recognition to help users who are locked out of their accounts by letting them verify their identity with a video selfie.
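Meta has not described how the video selfie check works internally. The outline below is a hypothetical sketch only, combining a liveness check with a comparison against a photo already on the account; every function name, the threshold, and the placeholder logic are assumptions made so the example runs on its own.

```python
# Hypothetical outline of a video-selfie account-recovery check.
# None of this reflects Meta's actual system; the liveness test and
# embedding model are simple placeholders.
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed value


def liveness_check(frames: list[np.ndarray]) -> bool:
    """Placeholder: a real check would look for blinking, motion, or depth cues."""
    return len(frames) > 1


def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder embedding: a real system would use a trained face model."""
    vec = image.flatten().astype(float)
    return vec / (np.linalg.norm(vec) + 1e-9)


def verify_video_selfie(frames: list[np.ndarray],
                        account_photo: np.ndarray) -> bool:
    """Unlock only if the selfie looks live and matches the account's photo."""
    if not liveness_check(frames):
        return False
    similarity = float(np.dot(embed_face(frames[0]), embed_face(account_photo)))
    return similarity >= MATCH_THRESHOLD


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    selfie_frames = [rng.normal(size=(64, 64)) for _ in range(5)]  # fake video frames
    profile_photo = rng.normal(size=(64, 64))                      # fake account photo
    print("Account unlocked:", verify_video_selfie(selfie_frames, profile_photo))
```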

Meta says the facial data used for the comparison will be kept secure and deleted immediately afterwards, but facial recognition remains controversial. The company has already encountered several legal hurdles, including a $1.4 billion settlement in Texas and a $650 million payout in Illinois, both stemming from allegations that it profited from facial recognition technology without obtaining user consent. Consequently, the new video selfie feature will not be trialed in these states. The move illustrates Meta's continuing effort to navigate the complexities of user protection and privacy in the age of AI.

Read More: Deepfake Scams Continuously Hit 20% of Businesses and 36% of Australians