The EU Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive law regulating the development, deployment, and use of Artificial Intelligence within the European Union.
Its aim is to ensure that AI systems placed on the EU market are safe, transparent, and respect fundamental rights.
The AI Act applies to:

- Providers of AI systems (e.g., developers or vendors).
- Deployers (e.g., organizations using AI internally).
- Importers and distributors who place AI systems on the EU market, even if they are located outside the EU.

Any organization offering AI systems to people in the EU, or operating within the EU, must comply regardless of where it is headquartered.
The Act uses a four-tier risk model to determine the level of regulatory obligation; a short code sketch after the table shows how these tiers might be encoded for internal triage:
| Risk Level | Examples | Requirements |
|---|---|---|
| Prohibited AI | Social scoring; real-time remote biometric identification in public spaces | Banned in the EU (narrow exceptions) |
| High Risk | Recruitment, credit scoring, safety components of regulated products | Strict compliance: documentation, risk controls, CE marking |
| Limited Risk | Chatbots, AI assistants | Transparency notices required |
| Minimal Risk | Spam filters, game AI | No mandatory obligations |
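As an illustration only (not part of the Act itself), here is a minimal Python sketch of how an organization might encode these tiers for internal triage. The tier names and obligation summaries simply mirror the table above; the helper function and system names are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers of the EU AI Act's risk model."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Headline obligation per tier, mirroring the table above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "Banned in the EU: do not deploy",
    RiskTier.HIGH: "Strict compliance: documentation, risk controls, CE marking",
    RiskTier.LIMITED: "Transparency notice to users",
    RiskTier.MINIMAL: "No mandatory obligations",
}

def triage(system_name: str, tier: RiskTier) -> str:
    """Return the headline obligation for an AI system."""
    return f"{system_name}: {OBLIGATIONS[tier]}"

print(triage("CV-screening model", RiskTier.HIGH))
```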
Organizations offering high-risk AI must:

- Implement a risk management system.
- Ensure data quality and bias mitigation.
- Maintain technical documentation and logs (see the logging sketch after this list).
- Enable human oversight throughout the AI lifecycle.
- Affix the CE mark before placing the AI system on the market.
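The documentation, logging, and human-oversight duties lend themselves to structured audit records. Below is a minimal, hypothetical sketch of such a record in Python; field names like `input_ref` and `human_reviewer` are illustrative assumptions, not terms defined by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical structured audit logger for a high-risk AI system.
logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_decision(system_id: str, input_ref: str, output: str,
                 reviewer: str | None = None) -> None:
    """Record one AI decision with a timestamp and optional human reviewer."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,      # reference to stored input, not raw personal data
        "output": output,
        "human_reviewer": reviewer,  # supports the human-oversight requirement
    }
    logger.info(json.dumps(record))

log_decision("credit-scoring-v2", "application/88412", "declined", reviewer="j.smith")
```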
Non-compliance may result in fines up to €35 million or 7% of global annual turnover, whichever is higher.
The EU AI Act entered into force on 1 August 2024 and applies in stages: the prohibitions have applied since February 2025, general-purpose AI obligations since August 2025, and most high-risk requirements take effect from August 2026, with some product-embedded high-risk rules extending to 2027.
To prepare, organizations should:

1. Perform an AI risk inventory of all systems in use (a minimal sketch follows this list).
2. Identify which systems fall under the high-risk categories.
3. Align governance with ISO/IEC 42001 and the NIST AI RMF.
4. Create evidence of compliance (audit logs, impact assessments, policies).
5. Appoint an AI Compliance Lead or Responsible AI Officer.
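To make steps 1, 2, and 4 concrete, here is a minimal sketch of a risk inventory as plain Python records. The fields (owner, impact assessment status, logging status) and the example systems are assumptions about what useful evidence might look like, not a prescribed schema.

```python
# Hypothetical inventory: one record per AI system in use (step 1).
inventory = [
    {"name": "CV screening", "owner": "HR", "tier": "high",
     "impact_assessment": True, "audit_logs": False},
    {"name": "Support chatbot", "owner": "CX", "tier": "limited",
     "impact_assessment": False, "audit_logs": True},
    {"name": "Spam filter", "owner": "IT", "tier": "minimal",
     "impact_assessment": False, "audit_logs": False},
]

# Step 2: isolate the high-risk systems.
high_risk = [s for s in inventory if s["tier"] == "high"]

# Step 4: flag high-risk systems missing compliance evidence.
for s in high_risk:
    gaps = [k for k in ("impact_assessment", "audit_logs") if not s[k]]
    if gaps:
        print(f"{s['name']} ({s['owner']}): missing {', '.join(gaps)}")
```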
Created by: Zen AI Governance UK Ltd
Last Updated: Nov 2025