Global Efforts Against Deepfakes Grow: UB’s Deepfake-O-Meter and Regulatory Updates
Author: admin | 12 Sep 2024
The rise of deepfake technology has become a growing global concern, with students, governments, and businesses among its targets, and it demands urgent action against the misuse of AI-generated media. The Deepfake-O-Meter, developed by researchers at the University at Buffalo, is now regarded as one of the more advanced tools for addressing the problem. Its purpose is to open up access to deepfake detection: members of the public can submit videos or images and have them checked for signs of manipulation. Once a file is submitted, results arrive within seconds, giving individuals the confidence to defend themselves against misinformation and artificially generated media.
Since its release, the Deepfake-O-Meter has been used worldwide, receiving roughly 6,300 uploads. It has been applied to well-known suspected deepfakes, including a fabricated robocall imitating President Joe Biden’s voice. By offering free access to state-of-the-art detection algorithms, the tool bridges the gap between researchers and the public, giving people the resources to push back against online misinformation.
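The Deepfake-O-Meter itself is used through its public web portal, but the general submit-and-score workflow described above can be sketched in code. The endpoint URL, field names, and response keys below are hypothetical placeholders, not the tool’s real API; the snippet only illustrates how a client might upload a media file to a detection service of this kind and read back a manipulation score.

```python
import requests

# Hypothetical endpoint and field names -- NOT the real Deepfake-O-Meter API.
DETECTION_URL = "https://example.org/api/v1/detect"


def submit_for_detection(media_path: str, timeout: int = 60) -> dict:
    """Upload an image or video to a (hypothetical) detection service and
    return its JSON verdict, e.g. {"fake_probability": 0.93}."""
    with open(media_path, "rb") as media_file:
        response = requests.post(
            DETECTION_URL,
            files={"media": media_file},  # assumed multipart field name
            timeout=timeout,
        )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    verdict = submit_for_detection("suspicious_clip.mp4")
    # Interpret the (assumed) score: closer to 1.0 means more likely synthetic.
    print(f"Estimated probability of manipulation: {verdict['fake_probability']:.2f}")
```

A real service would typically also return per-algorithm results, since tools like the Deepfake-O-Meter run several detection models side by side rather than a single score.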
Governments, Businesses, and Researchers Combine Against Deepfakes
Governments, businesses, and researchers are all stepping up their efforts against the rapid growth of deepfakes. Singapore is among the leading countries taking legislative action: its Elections (Integrity of Online Advertising) (Amendment) Bill, introduced in September 2024, aims to ban AI-generated content that falsely depicts political candidates, in order to protect election integrity and voter confidence. On the business side, companies such as AuthID are publishing reports on deepfake risks across industries, including financial services, and highlighting the need for biometric security measures.
Meanwhile, researchers from Hong Kong and Macau have made notable progress, winning international deepfake detection challenges and underscoring how quickly this complex field is advancing. Experts warn that as deepfakes grow more sophisticated, AI-driven attacks will increasingly combine manipulated video and audio, demanding continuous vigilance to keep online spaces safe.