Obligations for High-Risk AI Systems (EU/UK Aligned) — Lifecycle Overview & Requirements
Key takeaways
- High-risk AI systems must operate a continuous Risk Management System (RMS), maintain technical documentation per Annex IV, and keep automatic logs per Article 12.
- Data quality, bias mitigation, and human oversight are mandatory and auditable controls.
- Systems require CE marking or UK conformity assessment before deployment.
Classification & criteria
- AI systems fall under Annex III categories (e.g., biometrics, education, employment, credit scoring, public services).
- Classification requires formal risk assessment and mapping to use-case context per EU AI Act Article 6.
- Zen AI Governance maintains a Risk Catalogue documenting all use cases and their risk status (EV-IDs).
Risk-management system (RMS)
The RMS integrates ISO 42001 and NIST AI RMF methodologies to control technical and ethical risks throughout the AI lifecycle.
| Phase | Risk Activities | Outputs & Evidence |
|---|---|---|
| Design | Risk identification, impact analysis, mitigation plan | RMS Template (RM-ID) |
| Development | Control implementation, validation tests | Test Logs + Bias Report |
| Deployment | Residual risk approval, oversight confirmation | Board Sign-off (EV-ID) |
| Operation | Incident monitoring, continuous improvement | CAPA Log + PMM Metrics |
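The phase-to-evidence mapping above can be made machine-checkable so an audit script flags gaps before a review. The following is a minimal sketch; the dictionary structure and function name are assumptions for illustration, not the official RMS template schema.

```python
# Sketch (assumed structure, not the official RMS template): map each RMS
# lifecycle phase to its required evidence so completeness can be checked.
RMS_EVIDENCE = {
    "design":      ["RMS Template (RM-ID)"],
    "development": ["Test Logs", "Bias Report"],
    "deployment":  ["Board Sign-off (EV-ID)"],
    "operation":   ["CAPA Log", "PMM Metrics"],
}

def missing_evidence(collected: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return, per phase, the required evidence items not yet collected."""
    return {
        phase: [item for item in required if item not in collected.get(phase, [])]
        for phase, required in RMS_EVIDENCE.items()
    }
```

A check like this can run in CI against the evidence register, so a deployment sign-off cannot proceed while earlier-phase evidence is missing.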
Data & dataset governance
- Comply with Article 10 (EU AI Act): training, validation, and test datasets must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete, and must be examined for possible biases.
- Maintain a Datasheet for Datasets listing source, composition, licensing, and ethics-review date.
- Perform bias testing pre- and post-deployment with documented fairness metrics (e.g., Gini coefficient, true-positive-rate gap, equalized-odds difference).
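As a concrete illustration of two of the fairness metrics named above, the sketch below computes the TPR gap and the equalized-odds difference for a binary classifier across two protected-group values. The helper names are hypothetical; this is not part of any Zen AI Governance toolchain.

```python
# Illustrative fairness-metric sketch: TPR gap and equalized-odds difference
# for a binary classifier. Assumes each group contains both positive and
# negative ground-truth examples (otherwise the rates are undefined).
def group_rates(y_true, y_pred, group, value):
    """True-positive and false-positive rates for one protected-group value."""
    tp = fp = pos = neg = 0
    for t, p, g in zip(y_true, y_pred, group):
        if g != value:
            continue
        if t == 1:
            pos += 1
            tp += p == 1
        else:
            neg += 1
            fp += p == 1
    return tp / pos, fp / neg

def fairness_metrics(y_true, y_pred, group, a, b):
    tpr_a, fpr_a = group_rates(y_true, y_pred, group, a)
    tpr_b, fpr_b = group_rates(y_true, y_pred, group, b)
    tpr_gap = abs(tpr_a - tpr_b)                 # TPR gap
    eo_diff = max(tpr_gap, abs(fpr_a - fpr_b))   # equalized-odds difference
    return tpr_gap, eo_diff
```

Recording these values with the dataset version and test date gives the documented, auditable trail the bullet requires.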
Technical documentation & transparency (Annex IV, Article 13)
- Provide clear instructions for use, limitations, and required human oversight per Article 13.
- Include visible disclosure: “This system uses AI for decision support under oversight by Zen AI Governance.”
- Maintain User Manual & Transparency Notice (Annex IV Section 2.6).
Human oversight & control
- Assign named Oversight Officer for each high-risk system.
- Define intervention thresholds and rollback mechanisms.
- Implement interfaces for manual override and incident logging.
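The oversight pattern above (intervention thresholds plus logged escalation) can be sketched as follows. The threshold value and function names are assumptions for illustration, not a prescribed implementation.

```python
# Hypothetical oversight sketch: decisions below a confidence threshold are
# routed to the named Oversight Officer instead of being auto-applied, and
# every escalation is logged for the incident record.
import logging

CONFIDENCE_THRESHOLD = 0.85  # assumed intervention threshold from the system's documentation

log = logging.getLogger("oversight")

def decide(score: float, auto_decision: str) -> str:
    """Apply the automated decision only above the intervention threshold."""
    if score < CONFIDENCE_THRESHOLD:
        log.info("escalated: score=%.2f decision=%s", score, auto_decision)
        return "ESCALATE_TO_HUMAN"  # manual review; rollback remains possible
    return auto_decision
```

Keeping the threshold in configuration rather than code lets the Oversight Officer tighten it after an incident without a redeployment.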
Accuracy & robustness
- Define minimum accuracy requirements and error tolerances in TDF.
- Conduct adversarial and stress testing before release and after updates.
- Log model performance continuously and review monthly.
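A minimal sketch of the accuracy-requirement check above, assuming the floor and tolerance are declared in the TDF; the dataclass fields and values here are illustrative assumptions.

```python
# Sketch: compare a monitored metric against its declared floor and tolerance
# band, and flag breaches for review under the RMS.
from dataclasses import dataclass

@dataclass
class AccuracyRequirement:
    metric: str
    minimum: float    # accuracy floor declared in the TDF (assumed value below)
    tolerance: float  # allowed short-term dip before escalation

def needs_review(req: AccuracyRequirement, observed: float) -> bool:
    """True when the observed value breaches the tolerance band."""
    return observed < req.minimum - req.tolerance

req = AccuracyRequirement("accuracy", minimum=0.92, tolerance=0.01)
needs_review(req, 0.95)  # within bounds
needs_review(req, 0.90)  # breach: escalate per RMS
```

Running this check on each monthly review turns the "review monthly" bullet into a pass/fail gate with a recorded result.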
Post-market monitoring & reporting
- Implement Post-Market Monitoring Plan (PMMP) covering data collection, incident classification, and KPIs.
- Report serious incidents to the relevant market surveillance authority within 15 days of becoming aware of them (Article 73).
- Update risk management and CAPA logs after incident closure.
- Submit an annual PMM summary to the AI Governance Board and the Notified Body.
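The 15-day reporting window above can be verified mechanically in the incident log. This is an illustrative check; the function and field names are assumptions, not the PMMP schema.

```python
# Illustrative sketch: verify that a serious incident was reported to the
# authority within the 15-day window measured from the date of awareness.
from datetime import date, timedelta

REPORTING_WINDOW = timedelta(days=15)

def reported_in_time(became_aware: date, reported: date) -> bool:
    """True when the report date falls within the 15-day window."""
    return reported - became_aware <= REPORTING_WINDOW
```

A nightly job applying this to open incidents can alert the compliance team well before the deadline lapses.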
Implementation checklist
- Risk Management System active and linked to ISO 42001 AIMS.
- Technical Documentation complete and Annex IV aligned.
- Transparency notices published for users and clients.
- Human oversight roles assigned and training complete.
- CE mark granted or UK conformity file on record.
- Post-Market Monitoring dashboard live and audited.
© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 17 Nov 2025 • This page is general guidance, not legal advice.