What is the EU AI Act and who does it apply to?

🧩 Overview

The EU Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive law regulating the development, deployment, and use of Artificial Intelligence within the European Union.
Its aim is to ensure that AI systems placed on the EU market are safe, transparent, and respect fundamental rights.


⚖️ Scope and Applicability

The AI Act applies to:

  • Providers of AI systems (e.g., developers or vendors).

  • Deployers (e.g., organizations implementing AI internally).

  • Distributors and Importers who place AI systems on the EU market, even if located outside the EU.

Any organization offering AI solutions to EU citizens or operating within the EU must comply, regardless of headquarters location.


⚙️ Risk-Based Classification

The Act uses a four-tier risk model to determine the level of regulatory obligation:

  • Prohibited AI — Examples: social scoring, real-time biometric surveillance. Requirement: banned in the EU.

  • High Risk — Examples: recruitment, credit scoring, safety systems. Requirement: strict compliance (documentation, risk controls, CE marking).

  • Limited Risk — Examples: chatbots, AI assistants. Requirement: transparency notices.

  • Minimal Risk — Examples: spam filters, game AI. Requirement: no specific obligations.
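The tiered model above can be sketched as a simple lookup. This is an illustrative sketch only — the use-case names and their tier assignments below are assumptions for demonstration, not an authoritative legal classification, which requires analysis against the Act itself (notably Annex III).

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping of example use cases to tiers; real classification
# must be done against the Act's annexes, not a hard-coded table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative tier for a use case; default to HIGH so
    that unknown systems trigger review rather than being ignored."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown systems to High Risk is a deliberately conservative design choice: it forces a review rather than silently treating unclassified systems as minimal risk.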

🧾 Key Obligations for High-Risk AI

Organizations offering High-Risk AI must:

  • Implement a risk management system.

  • Ensure data quality and bias mitigation.

  • Maintain technical documentation and logs.

  • Enable human oversight throughout the AI lifecycle.

  • Affix the CE mark before placing the AI system on the market.


🧮 Penalties for Non-Compliance

Non-compliance may result in fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations (prohibited practices).
The EU AI Act entered into force on 1 August 2024 and applies in phases: prohibitions from February 2025, general-purpose AI obligations from August 2025, and most High-Risk requirements from August 2026.


🧩 Practical Steps for UK Organizations

  1. Perform an AI risk inventory of all systems in use.

  2. Identify which fall under High-Risk categories.

  3. Align your governance with ISO/IEC 42001 and NIST AI RMF.

  4. Create evidence of compliance (audit logs, impact assessments, policies).

  5. Appoint an AI Compliance Lead or Responsible AI Officer.
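Step 1, the AI risk inventory, can be captured as one structured record per system. A minimal sketch follows; the field names and example entries are assumptions chosen for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI risk inventory (illustrative schema)."""
    name: str
    owner: str                       # accountable team or role
    purpose: str                     # intended use of the system
    risk_tier: str = "unclassified"  # prohibited / high / limited / minimal
    evidence: list = field(default_factory=list)  # audit logs, DPIAs, policies

# Hypothetical inventory entries for demonstration
inventory = [
    AISystemRecord("CV Screener", "HR", "shortlist job applicants", "high"),
    AISystemRecord("Support Bot", "CX", "answer customer queries", "limited"),
]

# Step 2: identify which systems fall under High-Risk categories
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
```

The `evidence` field ties steps 1 and 4 together: each record accumulates the audit logs, impact assessments, and policies that demonstrate compliance for that system.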



🗂️ Meta

Created by: Zen AI Governance UK Ltd
Last Updated: Nov 2025
Reading Time: 4 min

    • Related Articles

      – Obligations for High-Risk AI Systems (EU/UK aligned) — Updated 05 Nov 2025

      – Unified Risk Register Template — ISO 42001 + NIST + EU AI Act Integration — Updated 18 Nov 2025

      – Post-Market Monitoring & Serious Incident Management — Continuous Compliance and Reporting — Updated 17 Nov 2025

      – Risk Management System (EU/UK aligned) — Updated 05 Nov 2025

      – Security Architecture for AI Systems — Risk Management — Updated 05 Nov 2025