Incident Management & Post-Market Monitoring (EU AI Act aligned)

ISO/IEC 42001 – AIMS Incident & PMM Process EU/UK aligned

Key takeaways
  • ISO 42001 & EU AI Act require continuous incident detection, documentation, analysis, and reporting.
  • Define severity classes (Minor / Major / Critical) and regulatory trigger criteria for notification to authorities.
  • Integrate AI post-market monitoring (PMM) with risk, oversight, and CAPA for true “learning loops.”

Overview & objectives

Incident management ensures that AI-related failures, unexpected outputs, or harms are identified, contained, analysed, and resolved systematically. Clause 10.2 of ISO/IEC 42001 requires organisations to address nonconformities, take corrective action, and retain documented evidence; the EU AI Act adds serious-incident reporting and post-market monitoring obligations for high-risk systems (Arts 72 and 73).

Incident definitions & severity

  • AI incident: any failure, malfunction, performance degradation, or breach that results in or could result in harm to individuals or society.
  • Non-conformity: deviation from AIMS policy or procedure (e.g., missing record, unauthorised model update).
  • Severity levels (a minimal mapping to response obligations is sketched after this list):
    • Minor – internal issue with no external impact.
    • Major – affects users or performance but contained without harm.
    • Critical – actual or potential harm to individuals or a violation of law; must be reported to the relevant market surveillance authority within 15 days of awareness (EU AI Act Art 73).
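
As an illustration, the severity classes and the trigger for regulatory notification can be encoded directly in incident tooling so that triage is consistent. The sketch below is a minimal Python example under stated assumptions, not a prescribed implementation: the class names mirror the list above, while the helper name classify and the shape of the trigger criteria are illustrative.

Example — Severity classification helper (Python, illustrative)
from enum import Enum

class Severity(Enum):
    MINOR = "Minor"        # internal issue, no external impact
    MAJOR = "Major"        # affects users or performance, contained without harm
    CRITICAL = "Critical"  # actual or potential harm, or violation of law

# Illustrative mapping: only Critical incidents trigger a regulatory notification.
# 15 days is the general Art 73 deadline; shorter statutory deadlines apply to the
# most severe cases, so treat this as a ceiling rather than a target.
REGULATORY_DEADLINE_DAYS = {Severity.MINOR: None, Severity.MAJOR: None, Severity.CRITICAL: 15}

def classify(harm_occurred: bool, legal_breach: bool, external_impact: bool) -> Severity:
    """Map the trigger criteria above onto a severity class."""
    if harm_occurred or legal_breach:
        return Severity.CRITICAL
    if external_impact:
        return Severity.MAJOR
    return Severity.MINOR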

End-to-end workflow

  1. Detection: triggered by monitoring alerts, user reports, oversight interventions, or testing results.
  2. Logging: create incident record (AI-INC-###) with timestamp, system, summary, owner, severity.
  3. Containment: pause model deployment or switch to safe mode; notify Oversight Officer.
  4. Analysis: root-cause analysis (technical + process + human factors); identify risk register links.
  5. Correction: rollback, patch, or dataset fix implemented and validated.
  6. CAPA: open corrective/preventive action record with due dates and verification.
  7. Communication: stakeholder update + regulatory notification if trigger criteria met.
  8. Closure: review by Authorising Officer + archival of evidence and sign-off.
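
A minimal sketch of how this workflow could be enforced in an incident tracker, assuming a simple Python state machine: the status names follow the eight steps above, while the field names, the ID-format helper, and the forward-only transition rule are illustrative assumptions.

Example — Incident lifecycle state machine (Python, illustrative)
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Ordered lifecycle states mirroring the eight workflow steps above.
STATES = ["Detected", "Logged", "Contained", "Analysed", "Corrected",
          "CAPA Open", "Communicated", "Closed"]

@dataclass
class Incident:
    seq: int
    system: str
    summary: str
    severity: str
    status: str = "Detected"
    history: list = field(default_factory=list)

    @property
    def incident_id(self) -> str:
        return f"AI-INC-{self.seq:03d}"  # e.g. AI-INC-047

    def advance(self, new_status: str, actor: str) -> None:
        """Allow only forward moves through the workflow and keep an audit trail."""
        if STATES.index(new_status) <= STATES.index(self.status):
            raise ValueError(f"Cannot move from {self.status} to {new_status}")
        self.history.append((datetime.now(timezone.utc).isoformat(), actor, new_status))
        self.status = new_status

# Usage: inc = Incident(seq=47, system="RiskScorer-LLM", summary="...", severity="Major")
#        inc.advance("Logged", actor="Incident Manager")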

Internal & regulatory reporting

  • Internal timeline: log within 24 h of detection; contain within 48 h; corrective plan within 5 days.
  • Regulatory reporting: providers of high-risk AI systems must report any serious incident (one that caused or is likely to have caused serious harm) to the relevant market surveillance authority (EU AI Act Art 73).
  • Notification contents: system ID, description, root cause, impact assessment, measures taken, recurrence prevention plan.
  • Communication plan: use pre-approved templates to ensure consistency and legal sign-off before submission.
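
The internal and regulatory timelines above lend themselves to mechanical checking against the incident log. Below is a minimal sketch assuming the 24 h / 48 h / 5-day internal targets and the 15-day regulatory window; the milestone names, the use of calendar days, and the function signature are illustrative assumptions.

Example — Deadline check against the incident log (Python, illustrative)
from datetime import datetime, timedelta

# Internal targets from the list above; the regulatory window is the general Art 73
# deadline and applies to Critical incidents only.
DEADLINES = {
    "logged": timedelta(hours=24),
    "contained": timedelta(hours=48),
    "corrective_plan": timedelta(days=5),
    "regulatory_notification": timedelta(days=15),
}

def overdue_milestones(detected_at: datetime, completed: dict, now: datetime) -> list:
    """Return milestones whose deadline has passed without a completion timestamp."""
    return [name for name, window in DEADLINES.items()
            if name not in completed and now > detected_at + window]

# overdue_milestones(datetime(2025, 11, 1), {"logged": datetime(2025, 11, 1, 20)},
#                    now=datetime(2025, 11, 4))  ->  ["contained"]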

Roles & responsibilities

  • Incident Manager: coordinates end-to-end response and records evidence.
  • System Owner: provides technical analysis and implements fixes.
  • Oversight Officer: verifies containment and authorises rollback or restart.
  • Compliance Lead: handles regulatory notifications and CAPA tracking.
  • Authorising Officer: approves closure and sign-off for records.

Post-Market Monitoring (PMM)

PMM is a proactive system for detecting issues in production and feeding lessons back into risk and oversight design; for high-risk systems it is required by EU AI Act Art 72, which calls for a documented post-market monitoring plan.

  • Monitor bias shift, accuracy degradation, and drift metrics continuously.
  • Correlate incident frequency with updates and dataset changes.
  • Collect user feedback and complaints as input signals for investigation.
  • Review findings quarterly in Management Review and update controls as needed.
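
As one concrete way to "monitor drift metrics continuously", the sketch below computes a population stability index (PSI) between a reference window and a production window. PSI is a common drift measure rather than something mandated by ISO 42001 or the AI Act, and the bin count and 0.2 alert threshold are rules of thumb.

Example — Drift metric for PMM (Python, illustrative)
import numpy as np

def population_stability_index(reference, production, bins: int = 10) -> float:
    """PSI between a reference window and a production window of a score or feature."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    ref_pct = np.clip(ref_pct, 1e-6, None)    # avoid log(0) and division by zero
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

# Common rule of thumb: PSI < 0.1 stable, 0.1-0.2 monitor, > 0.2 investigate
# and consider opening an incident record.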

Data & metrics collection

  • Drift rate, bias index, incident count, CAPA closure time, mean time to rollback (MTTR).
  • Monitor model confidence, override frequency, and human review rates.
  • Use thresholds to auto-flag anomalies for investigation.
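
The "thresholds to auto-flag anomalies" bullet can be as simple as a dictionary of metric limits checked on every monitoring run. The metric names and limits below are illustrative assumptions to be tuned per system, not prescribed values.

Example — Threshold-based anomaly flagging (Python, illustrative)
# Illustrative thresholds; tune per system and record the rationale in the AIMS.
THRESHOLDS = {
    "drift_psi": 0.2,       # see the PSI sketch above
    "bias_index": 0.05,     # e.g. maximum demographic parity gap
    "override_rate": 0.10,  # share of outputs overridden by human reviewers
    "accuracy_drop": 0.03,  # absolute drop against the validation baseline
}

def flag_anomalies(metrics: dict) -> list:
    """Return the metrics that breach their threshold and should open an investigation."""
    return [name for name, limit in THRESHOLDS.items()
            if name in metrics and metrics[name] > limit]

# flag_anomalies({"drift_psi": 0.31, "override_rate": 0.04})  ->  ["drift_psi"]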

Dashboards & analysis

  • Integrate PMM metrics into central AIMS dashboard.
  • Provide RAG (red/amber/green) status of open incidents, severity distribution, and closure rates.
  • Trend analysis for auditors and regulators (rolling 12-month view).
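
A minimal sketch of how the dashboard figures above (open incidents, severity distribution, closure rate over a rolling 12-month view) could be derived from the incident register; it assumes each register entry carries 'severity', 'status', and 'opened' fields matching the record template later on this page.

Example — Dashboard aggregation from the incident register (Python, illustrative)
from collections import Counter
from datetime import datetime, timedelta

def dashboard_summary(incidents: list, now: datetime) -> dict:
    """Aggregate register entries (dicts with 'severity', 'status', 'opened') over a rolling year."""
    window_start = now - timedelta(days=365)
    recent = [i for i in incidents if i["opened"] >= window_start]
    closed = sum(1 for i in recent if i["status"] == "Closed")
    return {
        "open_incidents": len(recent) - closed,
        "severity_distribution": dict(Counter(i["severity"] for i in recent)),
        "closure_rate": round(closed / len(recent), 2) if recent else None,
    }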

Integration with CAPA & risk

  • Each incident links to a CAPA record with root-cause, actions, and verification.
  • Update the Risk Register scores based on incident findings.
  • Feed trends into training programs and policy updates.
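
One way to make the incident-to-CAPA-to-risk linkage explicit is to store the cross-references on the records themselves, as in the sketch below. The field names and the simple "each recurrence raises likelihood one step" re-scoring rule are illustrative assumptions, not a defined scoring method.

Example — CAPA record with risk linkage (Python, illustrative)
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CapaRecord:
    capa_id: str
    incident_id: str                                # back-link to the AI Incident Register
    root_cause: str
    risk_ids: list = field(default_factory=list)    # Risk Register entries to re-score
    actions: list = field(default_factory=list)
    verified_on: Optional[str] = None               # closure requires verification evidence

def update_likelihood(current: int, recurrences: int, scale_max: int = 5) -> int:
    """Illustrative re-scoring rule: each recurrence in the review period raises likelihood one step."""
    return min(scale_max, current + recurrences)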

Templates & examples

Template — AI Incident Record
ID: AI-INC-047   System: RiskScorer-LLM   Date: 2025-11-01  
Reported by: Oversight Operator  
Severity: Major   Status: Under Investigation  
Description: Incorrect credit rating generated due to outdated model parameters.  
Containment: Rollback to v2.1 model; notifications to affected users sent within 48 h.  
Root Cause: Change control failure – deployment script bypassed validation.  
Corrective Action: CI/CD pipeline updated with approval gates.  
Preventive Action: Retraining alerts added to PMM dashboard.  
Verification: Test passed 2025-11-03 | AO Sign-off 2025-11-05.
  

Common pitfalls & mitigation

  • No central register: maintain single AI Incident Register with unique IDs and status tracking.
  • Late reporting: set automated alerts for critical events to meet the 15-day notification window.
  • Root cause not validated: require CAPA verification evidence before closure.
  • Incomplete feedback loop: ensure incidents feed back into Risk Register and Oversight design.

Implementation checklist

  • Incident & PMM Policy approved and published.
  • Incident Register and CAPA tracker implemented with version control.
  • Roles assigned (Incident Manager, Oversight, Compliance Lead).
  • Notification templates validated for regulators and stakeholders.
  • PMM dashboard active with live drift and bias metrics.
  • Quarterly PMM reviews and evidence archived for audits.

© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 08 Nov 2025 • This page is general guidance, not legal advice.

Related Articles

  • Transparency, Records & Technical Documentation (EU AI Act aligned)
  • Management Review & Performance KPIs (EU/UK aligned)
  • Risk Management Framework & Treatment Plan (Clause 6.1 — EU/UK aligned)
  • Internal Audit & Evidence Management (EU/UK aligned)
  • AI Risk Management Framework (ISO 42001 + NIST AI RMF Mapping)