Obligations for High-Risk AI Systems — Lifecycle Overview & Requirements

Zen AI Governance — Knowledge Base · EU AI Act Compliance · Updated 17 Nov 2025 · www.zenaigovernance.com

Obligations for High-Risk AI Systems (EU/UK Aligned)

Key takeaways
  • High-risk AI systems must implement a continuous Risk Management System (RMS) per Article 9 and maintain technical documentation per Annex IV.
  • Data quality, bias mitigation, and human oversight are mandatory and auditable controls.
  • Systems require CE marking (EU) or UKCA conformity assessment (UK) before being placed on the market or put into service.

Classification & criteria

  • AI systems fall under Annex III categories (e.g., biometrics, education, employment, credit scoring, public services).
  • Classification requires formal risk assessment and mapping to use-case context per EU AI Act Article 6.
  • Zen AI Governance maintains a Risk Catalogue documenting all use cases and their risk status (EV-IDs).

Risk-management system (RMS)

The RMS integrates ISO 42001 and NIST AI RMF methodologies to control technical and ethical risks throughout the AI lifecycle.

Phase | Risk Activities | Outputs & Evidence
Design | Risk identification, impact analysis, mitigation plan | RMS Template (RM-ID)
Development | Control implementation, validation tests | Test Logs + Bias Report
Deployment | Residual risk approval, oversight confirmation | Board Sign-off (EV-ID)
Operation | Incident monitoring, continuous improvement | CAPA Log + PMM Metrics

Data & dataset governance

  • Comply with Article 10 (EU AI Act): datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete; known biases must be examined and mitigated.
  • Maintain Data Sheet for Datasets listing source, composition, licensing, and ethics review date.
  • Perform bias testing pre- and post-deployment with documented fairness metrics (e.g., Gini coefficient, true-positive-rate (TPR) gap, equalized-odds (EO) difference).
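The group fairness metrics above can be computed directly from labelled outcomes. A minimal sketch, assuming binary labels and predictions; the function names and the two-group comparison are illustrative, not a prescribed API:

```python
# Sketch: group fairness metrics for pre-/post-deployment bias testing.
# Assumes binary y_true / y_pred (0 or 1) and a parallel list of group labels.

def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN) over the positive-label subset."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives) if positives else 0.0

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN) over the negative-label subset."""
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

def _subset(y_true, y_pred, groups, g):
    yt = [t for t, grp in zip(y_true, groups) if grp == g]
    yp = [p for p, grp in zip(y_pred, groups) if grp == g]
    return yt, yp

def tpr_gap(y_true, y_pred, groups, group_a, group_b):
    """Absolute TPR difference between two protected groups."""
    ta = true_positive_rate(*_subset(y_true, y_pred, groups, group_a))
    tb = true_positive_rate(*_subset(y_true, y_pred, groups, group_b))
    return abs(ta - tb)

def equalized_odds_diff(y_true, y_pred, groups, group_a, group_b):
    """Equalized-odds difference: the larger of the TPR and FPR gaps."""
    ya, pa = _subset(y_true, y_pred, groups, group_a)
    yb, pb = _subset(y_true, y_pred, groups, group_b)
    tpr_d = abs(true_positive_rate(ya, pa) - true_positive_rate(yb, pb))
    fpr_d = abs(false_positive_rate(ya, pa) - false_positive_rate(yb, pb))
    return max(tpr_d, fpr_d)
```

Acceptance thresholds for these metrics are a policy decision and would be recorded alongside the Data Sheet for Datasets.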

Technical documentation (Annex IV)

  • Maintain Technical Documentation File (TDF) for each AI system including:
    • Design and intended purpose statement.
    • System architecture diagram & data flow map.
    • Algorithms and training procedures (versions, hyperparameters).
    • Validation & test results (accuracy, robustness, cybersecurity).
    • Risk management and mitigation plans.
    • Post-market monitoring arrangements.
  • All documents are version-controlled and stored in the Evidence Repository (EV-IDs).
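A completeness gate over the TDF contents listed above can be automated before evidence is filed. A minimal sketch; the section keys are illustrative names for the Annex IV items, not an official schema:

```python
# Sketch: check a Technical Documentation File (TDF) record for missing
# Annex IV items before storing it in the Evidence Repository.
# Section keys mirror the bullet list above; naming is illustrative.
REQUIRED_TDF_SECTIONS = {
    "intended_purpose",            # design and intended purpose statement
    "architecture_and_data_flow",  # system architecture diagram & data flow map
    "training_procedures",         # algorithms, versions, hyperparameters
    "validation_results",          # accuracy, robustness, cybersecurity tests
    "risk_management_plan",        # risk management and mitigation plans
    "post_market_monitoring",      # PMM arrangements
}

def missing_tdf_sections(tdf: dict) -> set:
    """Return the required sections that are absent or empty."""
    return {k for k in REQUIRED_TDF_SECTIONS if not tdf.get(k)}
```

A non-empty result would block the evidence upload and route the file back to the document owner.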

Transparency & user information

  • Provide clear instructions for use, limitations, and required human oversight per Article 13.
  • Include visible disclosure: “This system uses AI for decision support under oversight by Zen AI Governance.”
  • Maintain User Manual & Transparency Notice (Annex IV Section 2.6).

Human oversight & control

  • Assign named Oversight Officer for each high-risk system.
  • Define intervention thresholds and rollback mechanisms.
  • Implement interfaces for manual override and incident logging.

Accuracy & robustness

  • Define minimum accuracy requirements and error tolerances in TDF.
  • Conduct adversarial and stress testing before release and after updates.
  • Log model performance continuously and review monthly.
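The monthly review step above can be reduced to a simple gate against the TDF thresholds. A minimal sketch, assuming accuracy is logged as a list of per-run scores; the function name and threshold parameters are illustrative, and real limits live in the TDF:

```python
# Sketch: monthly performance review gate against TDF-defined limits.
# monthly_accuracy: list of logged accuracy scores in [0, 1] for the period.
from statistics import mean

def review_performance(monthly_accuracy, min_accuracy, max_error_tolerance):
    """Return (passes, findings) for a monthly model performance review."""
    findings = []
    avg = mean(monthly_accuracy)
    if avg < min_accuracy:
        findings.append(
            f"mean accuracy {avg:.3f} below TDF minimum {min_accuracy}")
    worst_error = 1.0 - min(monthly_accuracy)
    if worst_error > max_error_tolerance:
        findings.append(
            f"worst-case error {worst_error:.3f} exceeds tolerance "
            f"{max_error_tolerance}")
    return (not findings, findings)
```

Any finding would open a CAPA entry and feed back into the RMS operation phase.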

Conformity assessment & CE marking

  • Follow conformity route per Article 43:
    • Internal control (Annex VI) or Notified Body audit (Annex VII).
  • Compile EU Declaration of Conformity signed by Authorising Officer.
  • Affix CE mark (or UKCA mark for UK deployments) to product documentation.

Post-market monitoring & reporting

  • Implement Post-Market Monitoring Plan (PMMP) covering data collection, incident classification, and KPIs.
  • Report serious incidents to the relevant market surveillance authority no later than 15 days after becoming aware of them (Article 73).
  • Update risk management and CAPA logs after incident closure.
  • Annual PMM summary submitted to AI Governance Board and Notified Body.
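The 15-day reporting window above translates into a simple deadline check that a PMM dashboard can run daily. A minimal sketch; the function names are illustrative, and the window length is taken from the bullet above:

```python
# Sketch: deadline tracking for the 15-day serious-incident reporting window.
# Assumes the clock starts on the date the provider becomes aware of the
# incident; function names are illustrative.
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 15

def notification_deadline(awareness_date: date) -> date:
    """Latest date by which the authority must be notified."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS)

def is_overdue(awareness_date: date, today: date) -> bool:
    """True once the reporting window has elapsed without notification."""
    return today > notification_deadline(awareness_date)
```

An overdue flag would escalate the incident to the AI Governance Board alongside the CAPA log update.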

Implementation checklist

  • Risk Management System active and linked to ISO 42001 AIMS.
  • Technical Documentation complete and Annex IV aligned.
  • Transparency notices published for users and clients.
  • Human oversight roles assigned and training complete.
  • CE mark granted or UK conformity file on record.
  • Post-Market Monitoring dashboard live and audited.

© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 17 Nov 2025 • This page is general guidance, not legal advice.