
Denmark Plans to Ban Deepfake Content Fearing Misinformation

Author: teresa_myers | 28 Apr 2025

Denmark is proposing a new law that could serve as a model for other European countries. As AI-generated deepfake videos become increasingly realistic and easier to produce, governments are under growing pressure to take stringent action against them.

The Danish government has announced plans to ban the online posting of AI-generated deepfake content, as reported by the news outlet The Local DK.

Denmark’s Culture Minister, Jakob Engel-Schmidt, said new legislation is urgently needed, arguing that current laws do not stop deepfakes and other synthetic media, which threaten Denmark’s democratic institutions and people’s rights.

Engel-Schmidt illustrated the threat with the example of a fabricated video in which a politician appears to announce a decision such as leaving NATO.

 “Legislation here is first and foremost about protecting democracy and ensuring that it’s not possible to spread deepfake videos where people say things they’d never dream of saying in reality.”

Denmark plans to amend its copyright laws to grant individuals control over their voice, face, and likeness. Tech platforms would have to remove unauthorized deepfakes, while content clearly marked as satire or parody would remain legal. The change could strengthen AI media regulation in Europe, reduce the misuse of personal data, and set new standards for platform accountability, signaling a tougher government stance on online misinformation.

The proposal complements broader EU-wide regulation of AI and digital media, which already provides mechanisms for enforcing compliance among major technology companies. In 2023, China introduced regulations requiring watermarks and author identification for AI-generated content to curb misinformation.

Implementing the law faces major hurdles because AI tools keep getting better at producing convincing deepfakes, and detecting manipulated content is intrinsically difficult, which makes enforcement complex. Effective regulation will depend on integrating advanced detection technologies so that platforms can identify and remove unauthorized synthetic content. Facial recognition technology (FRT), combined with deepfake detection tools, can help authenticate real identities and flag manipulated media, reinforcing content integrity online.
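
For illustration only, the sketch below shows how a platform-side takedown flow might combine a detection score with the satire and consent exemptions described in the proposal. All names, thresholds, and the scoring function are hypothetical assumptions; the Danish proposal does not prescribe any particular implementation.

```python
# Hypothetical moderation sketch: names, thresholds, and the scoring stub are
# assumptions, not part of the proposed Danish law or any real platform API.
from dataclasses import dataclass


@dataclass
class Upload:
    video_id: str
    marked_as_satire: bool   # the proposal exempts clearly marked satire
    subject_consented: bool  # whether the depicted person authorized the use


def deepfake_score(video_id: str) -> float:
    """Placeholder for a detection model returning a 0..1 likelihood of manipulation."""
    return 0.0  # stub; a real detector would analyze video frames and audio


def moderate(upload: Upload, threshold: float = 0.9) -> str:
    """Return 'allow', 'review', or 'remove' for an uploaded video."""
    if upload.marked_as_satire:
        return "allow"  # marked satirical content stays legal under the proposal
    score = deepfake_score(upload.video_id)
    if score >= threshold and not upload.subject_consented:
        return "remove"  # unauthorized synthetic likeness -> takedown obligation
    if score >= threshold:
        return "review"  # likely synthetic but authorized; escalate to humans
    return "allow"


if __name__ == "__main__":
    print(moderate(Upload("clip-001", marked_as_satire=False, subject_consented=False)))
```

In practice, a detector alone would not decide takedowns; the sketch simply shows where automated scoring, the satire exemption, and human review could sit in such a pipeline.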
