
EU: Artificial Intelligence Act

Author: Carter H | 17 Sep 2025

1. Overview

The EU’s Artificial Intelligence Act, the first comprehensive artificial intelligence law adopted by the European Union, entered into force in August 2024. It seeks to improve the safety and transparency of AI in the EU by creating a risk-based regulatory framework for AI systems, including deepfakes.

2. The scope of the law

The AI Act governs AI systems used in the EU, irrespective of where the provider is located, and classifies them into four risk tiers: unacceptable, high, limited, and minimal. Because deepfake technology is likely to be classified as high-risk, it will be regulated tightly, particularly where it is used to disseminate content or misinformation without authorization.

3. Key Provisions

  • Transparency Requirements: AI systems that generate or manipulate content must clearly disclose that the content is artificially generated or altered (a minimal labeling sketch follows this list).
  • High-Risk Applications: Deepfake technology used in high-risk contexts, such as elections or the processing of personal data, is deemed high-risk and falls under stringent controls.
  • Prohibited Uses: Prohibited AI practices include those that exploit vulnerabilities to manipulate human behavior.
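
As an illustration of what a machine-readable disclosure might look like in practice, the minimal sketch below writes a JSON sidecar file next to a generated media file. The helper, the field names, and the sidecar layout are hypothetical assumptions for illustration; the AI Act does not prescribe this format.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def write_ai_disclosure(media_path: str, model_name: str) -> Path:
    """Write a machine-readable disclosure next to a generated media file.

    Hypothetical sketch: the field names below are illustrative, not a
    format mandated by the AI Act.
    """
    disclosure = {
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "notice": "This content was artificially generated or manipulated.",
    }
    # Store the disclosure as a JSON sidecar alongside the media file.
    sidecar = Path(media_path + ".disclosure.json")
    sidecar.write_text(json.dumps(disclosure, indent=2))
    return sidecar

# Example usage with a hypothetical output file and model name.
write_ai_disclosure("campaign_clip.mp4", model_name="example-video-model")
```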

4. Penalties & Enforcement

Violations of the AI Act carry severe penalties: up to €35 million, or 7% of worldwide annual turnover, whichever is higher, for listed prohibited practices, and up to €15 million, or 3% of worldwide annual turnover, for most other violations. National competent authorities are responsible for enforcement, with the European Commission holding overall oversight.
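
As a rough illustration of how these two tiers cap a potential fine, the sketch below computes the ceiling as the higher of the fixed amount and the turnover-based percentage. The function name is made up, and applying the "whichever is higher" rule to both tiers is an assumption drawn from the figures above; this is not a legal calculation.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Return an illustrative upper bound on an AI Act fine in euros."""
    # Prohibited practices: up to €35M or 7% of turnover; other violations: up to €15M or 3%.
    fixed_cap, pct = (35_000_000, 0.07) if prohibited_practice else (15_000_000, 0.03)
    return max(fixed_cap, pct * worldwide_annual_turnover_eur)

# A provider with €1 billion in worldwide annual turnover engaging in a
# prohibited practice faces a ceiling of max(€35M, 7% of €1B) = €70 million.
print(max_fine_eur(1_000_000_000, prohibited_practice=True))  # 70000000.0
```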

5. Material Cases or Precedents

As of April 2025, there are no cases or precedents under the Act, since it has not yet fully entered into application. Nonetheless, its enactment is a significant stride toward addressing the problems posed by deepfake technology.

6. Comparison to Global Standards

Compared with other jurisdictions, the EU AI Act is broader in scope. Whereas the EU Act establishes an extensive regulatory framework covering a wide range of AI applications and the risks that flow from them, other jurisdictions, such as the UK and Australia, have narrower legislation targeting non-consensual deepfakes.

7. Practical Implications

Developers and deployers of AI systems used to create deepfakes must comply with risk-assessment and transparency requirements. Organizations need to institute policies that identify and deter the misuse of AI-generated material, especially in sensitive areas such as personal data processing and elections.

8. Future Outlook

The AI Act will become fully applicable in August 2026, with transitional arrangements for certain AI systems. The EU is still deliberating how the regulation can be further refined, particularly regarding the classification of AI systems and how transparency obligations apply.