
AI & Legislation – The Landmark European AI Act

Author: admin | 26 Mar 2025

Recent years have seen significant development in the field of AI, with the rapid emergence of highly sophisticated LLMs and creative audio/visual software impacting day-to-day personal and professional life, industry standards and administrative roles. We now live in an age where AI-generated content can make or break narratives, public opinion and perceptions in a matter of seconds.

With such rapid evolution of AI technology, it was only a matter of time before legislators and regulators caught up with a technology possessing such unprecedented impact. In this legislative race, the European Union once again set the world standard with its introduction and enactment of the Artificial Intelligence (AI) Act. The EU AI Act, a first-of-its-kind piece of legislation, sets out and regulates reasonable standards for the development, deployment and consumption of AI technology. Its scope ranges from administrative bodies and development firms all the way to the everyday end user. The Act has reportedly been developed to strike a careful balance between the protection of fundamental rights, the encouragement of innovation and overall fairness.

What is the AI Act of Europe? A First-of-its-Kind AI Regulation

Introduced in 2021 by the European Commission and enacted in August 2024 by the European Parliament, the EU AI Act is the first comprehensive regulatory framework aimed at governing AI technology. The Act aims to promote the ethical use of AI in Europe while ensuring fairness and fostering innovation.

The EU AI Act regulates high-risk AI applications to ensure transparency, accountability and data protection, building on the established principles of the GDPR while imposing strict requirements on developers for the protection of end users of AI technologies. The Act is significant not only as the first major regulation of AI technology but also because it sets the global benchmark for AI safety and ethics.

Risk-Based Classification of AI Systems: From Minimal to Unacceptable Risk


The Act is unique in that it deploys a risk-based approach, categorizing AI systems based on their potential harm to society.

The EU AI Act classifies AI systems into four primary risk categories (Article 6 sets out the classification rules for high-risk systems):

  • Unacceptable risk: This category outright bans the development and deployment of certain AI applications, such as social scoring systems, “real-time” remote AI-powered biometric surveillance in publicly accessible spaces, and autonomous weapons systems able to make lethal decisions without human intervention. 
  • High-risk AI: This category includes AI systems used in healthcare, transportation, law enforcement and recruitment services. Systems used for medical diagnosis, autonomous vehicles, predictive policing, facial recognition in public spaces and AI-driven recruitment and hiring algorithms must adhere to strict compliance requirements. 
  • Limited-risk AI: This includes AI-powered chatbots and assistants such as Siri or Alexa, and AI-powered content and advertising recommendation algorithms. 
  • Minimal-risk AI: Includes AI-powered spam filters for email, NPCs in video games and systems used for basic content moderation. 
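To make the tiering above concrete, here is a minimal, purely illustrative sketch of how an organisation might triage its own AI use cases against the four categories. The keyword lists and the lookup logic are assumptions for illustration only, not a legal classification under the Act:

```python
# Hypothetical triage helper: maps example use cases (taken from the
# categories described above) to a risk tier. Illustrative only.
RISK_TIERS = {
    "unacceptable": {"social scoring", "real-time biometric surveillance",
                     "autonomous lethal weapons"},
    "high": {"medical diagnosis", "autonomous vehicle", "predictive policing",
             "recruitment screening", "facial recognition"},
    "limited": {"chatbot", "recommendation algorithm"},
    "minimal": {"spam filter", "video game npc", "content moderation"},
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier whose example list contains the use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case.lower() in examples:
            return tier
    return "unclassified"  # unknown use cases need a proper legal assessment

print(classify_use_case("medical diagnosis"))  # high
print(classify_use_case("spam filter"))        # minimal
```

In practice, classification depends on context of use rather than product type alone, which is why the fallback returns "unclassified" instead of guessing.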

This risk-based approach aims to make compliance a fairly easy task to navigate while maintaining stringent checks where necessary. 

Key Compliance Requirements: What Businesses Need to Know


The Act exhaustively specifies compliance requirements for each of the risk categories, and understanding these requirements can save you and your business from unnecessary violations and fines. 

Article 5 – Prohibited AI Practices: AI systems in the Unacceptable-risk category fall under this article of the Act and are banned from development and deployment in order to protect public safety and fundamental rights. 

Articles 8-11, 16-19, 52 and 61 outline the compliance requirements for systems categorized as high-risk, specifically the requirements related to risk management, documentation, transparency, data governance and human oversight. These systems must be trained on high-quality, unbiased data, with providers keeping detailed records of the AI system’s design and operation so that it is possible to understand how the model reaches its decisions. 

Furthermore, the design of high-stakes systems, such as those used in healthcare or law enforcement, must allow reasonable human oversight and leave room for human intervention, ensuring that post-market monitoring enables corrective measures and performance evaluation. Regular independent auditing by the providers of such systems is also a compliance requirement, ensuring that safety and fairness standards are met. 

Articles 52 and 59 specify the compliance requirements for limited-risk AI systems. These include, but are not limited to, clear labelling of AI-influenced or AI-generated content, informing users that their experience is influenced by AI. Moreover, users are given control over their interactions: systems must provide opt-out options for AI-driven content recommendations. 

As far as minimal-risk systems are concerned, although they do not face any significant obligations, they must still comply with the Act’s general principles of fairness, transparency and respect for privacy. Additionally, Recital 16 provides further clarification as to why minimal-risk AI systems do not require additional regulation. 

The compliance requirements outlined for each risk category are based on the potential harm the systems pose to individual and broader societal rights. The requirements are aligned with fundamental human rights and the principles of the General Data Protection Regulation (GDPR), as specifically outlined in Article 60 of the EU AI Act, which mandates that all personal data used in AI systems is processed legally and fairly while maintaining transparency. 

Article 52, which covers transparency, is also a significant development in the world of AI regulation, as it mandates the transparency and labelling of all AI-generated content to prevent misinformation and impersonation. The article requires AI systems to notify users when they are interacting with AI-generated content. Such provisions are not only effective in preventing misinformation but also directly combat the growing threats of deepfakes and fake news, which have surged in recent years. 
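To illustrate the disclosure idea behind this transparency requirement, here is a minimal sketch of attaching a machine-readable “AI-generated” label to synthetic content before it is published. The envelope format and field names are invented for illustration; the Act does not prescribe a specific technical format:

```python
# Hypothetical sketch: wrap AI-generated content in a JSON envelope that
# discloses its AI origin. All field names are illustrative assumptions.
import json
from datetime import datetime, timezone

def label_ai_content(content: str, model_name: str) -> str:
    """Return the content wrapped in a disclosure envelope as a JSON string."""
    envelope = {
        "content": content,
        "ai_generated": True,            # explicit machine-readable disclosure
        "generator": model_name,         # which system produced the content
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope)

labelled = label_ai_content("A synthetic news summary...", "example-llm-1")
```

A platform consuming such an envelope could then surface a visible “AI-generated” notice to users, which is the user-facing effect the article describes.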

Penalties for Non-Compliance: Fines, Bans, and Legal Consequences 

It is widely believed that a law without punishment is merely good advice. One might wonder why anyone would feel the need to comply with the newly enacted European AI Act when the technology was developing rapidly, and “just fine”, in the absence of such regulation. To address this dilemma, the EU AI Act imposes significant consequences for non-compliance and breach. 

Article 71 of the Act outlines the penalties for non-compliance with its provisions. National competent authorities will have enforcement powers, with the capacity to impose significant fines depending on the level of non-compliance. Fines can be as high as €35 million or 7% of global annual turnover, whichever is higher. Fines for lesser offences can be as low as €7.5 million or 1% of global turnover, while offences such as the development and deployment of unacceptable-risk systems can lead to maximum fines and an outright ban on the firm involved.
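The “whichever is higher” arithmetic above can be sketched in a few lines. This is an illustrative calculation only, using the figures quoted in this article (€35M / 7% for the most serious violations, €7.5M / 1% for lesser ones); tier names and the two-tier simplification are assumptions, not the Act's full penalty schedule:

```python
# Illustrative sketch of the fine ceilings quoted above. Not legal advice;
# the tier names and two-tier simplification are assumptions.
def max_fine(global_turnover_eur: int, tier: str = "prohibited") -> int:
    """Return the upper bound of the fine in EUR for a violation tier."""
    tiers = {
        "prohibited": (35_000_000, 7),   # fixed cap in EUR, percent of turnover
        "information": (7_500_000, 1),   # example lower tier from the article
    }
    fixed_cap, pct = tiers[tier]
    # "Whichever is higher": the fixed cap or the turnover-based percentage.
    return max(fixed_cap, global_turnover_eur * pct // 100)

# A firm with €1bn global turnover: 7% (€70M) exceeds the €35M floor.
print(max_fine(1_000_000_000))               # 70000000
# A smaller firm: the €7.5M floor exceeds 1% of €100M turnover (€1M).
print(max_fine(100_000_000, "information"))  # 7500000
```

Note that for large firms the turnover percentage, not the fixed cap, dominates the exposure.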

Challenges to the AI Act: Criticisms and Implementation Hurdles

The introduction and enactment of the EU AI Act drew widespread criticism from European and non-European tech companies alike. In 2023, over 150 executives from prominent European firms expressed their concerns over the new legislation to policymakers, specifically arguing that the strict compliance requirements impose unfair costs and liability risks that will force major tech companies to move out of the EU as a matter of business sense. 

Moreover, major US tech companies, backed by the current US administration, have opposed the EU’s regulatory approach, contending that the Act’s provisions will “impede innovation” and “unfairly disadvantage non-European tech giants”. 

While the Act aims to foster and encourage innovation through Article 55 – Regulatory Sandboxes, which allows companies to test AI systems in controlled environments under regulatory oversight, the strict compliance requirements and significant fines continue to raise concerns among technology firms across the globe.  

The Future of AI Regulation: How the EU’s AI Act Could Shape Global Policies 

Under Article 58 of the Act, EU member states are obligated to establish national competent authorities and bodies to ensure enforcement and compliance monitoring, pointing towards a future in which all member states enforce and adhere to strict compliance with the Act across the board. 

Similarly, other nations have made efforts towards regulating AI, such as:

  • the UK’s “Ofcom’s Draft Guidance on Online Safety”,
  • China’s enacted “Provisions on the Administration of Deep Synthesis of Internet Information Services”,
  • the USA’s proposed “AI Labeling Act” and “TAKE IT DOWN Act”, and
  • Spain’s proposed “Law for the Proper Use and Governance of Artificial Intelligence”.

While some of these are still to be enacted or enforced, they are examples of legislative efforts intended to regulate AI technology and its governance mechanisms, as the EU’s AI Act has managed to do. 

One might also argue that, as Recital 8 suggests, the European Union’s AI Act rightfully presents itself as the global benchmark in regulating the AI development space, focusing on safety, transparency, human rights and accountability, and encouraging the rest of the world to follow suit. 

Key Takeaways for Facial Recognition Systems: 

1. Mitigation of Bias in High-Risk AI Systems: 

Facial biometric systems are classified as High-Risk AI in many use cases. 

  • Developers must ensure training data is “representative and free of bias” (Article 10)
  • Human oversight must be maintained to detect and correct discriminatory outputs (Article 17)

2. Fundamental Rights Impact Assessments: 

  • Annex III mandates “fundamental rights impact assessments” to evaluate potential biases in biometric identification systems.

3. Pre- and Post-Market Bias Audits:

  • Mandatory bias audits must be conducted before and after deployment.
  • Transparent documentation of model performance across different demographics is required. 

4. Human Review Mechanisms: 

  • “Human Review Mechanisms” must be implemented for high-stakes decisions involving facial biometric tools.

5. Detection & Labelling of AI Generated Content:

  • Article 52 imposes requirements to detect and label all AI-generated content, encouraging the adoption of mechanisms that ensure transparency and detection.

Conclusion: 

As enforcement of the European Artificial Intelligence Act fast approaches, making compliance an operational necessity, tools such as FACIA will be required to accurately detect AI-generated content and help social media companies label it accordingly, as required by Article 52.

FACIA’s biometric verification and liveness detection technology meets the EU AI Act’s critical requirements. The technology is built on the core principle of delivering bias-free verification through training on representative datasets (Article 10) and maintaining the required human oversight (Article 17). For content and social media platforms, it provides deepfake detection capabilities enabling compliance with Article 52’s synthetic content labelling requirements, which is important in combating misinformation generated by synthetic media. 

With proactive anti-spoofing technology and comprehensive bias mitigation, FACIA offers identity verification solutions that balance regulatory requirements with real-world usability.

Designed for inclusivity across all demographics, its continuously improved platform combines regulatory compliance with bias-free, market-leading performance.

Frequently Asked Questions

What is the EU AI Act 2025?

The EU AI Act is the world’s first comprehensive AI regulation, classifying AI systems by risk level and setting strict requirements for high-risk applications like facial recognition. It focuses on safety, transparency and fundamental rights protection.

What is prohibited under the EU AI Act?

Banned AI practices include “manipulative subliminal techniques”, social scoring and real-time biometric identification in public spaces (with narrow exceptions). Emotion recognition in workplaces/schools and predictive policing based solely on AI are also prohibited.

What is the difference between GDPR and the AI Act?

GDPR regulates personal data processing, while the AI Act governs AI system development and deployment. The AI Act complements GDPR by addressing AI-specific risks like bias and transparency, not just data privacy. Both apply overlapping requirements to high-risk AI systems that use personal data.
