Risk Management Framework & Treatment Plan (ISO/IEC 42001:2023, Clause 6.1 — EU/UK aligned)
Key takeaways
- Risk management under ISO 42001 must address AI-specific risks — bias, transparency, drift, security, misuse, and oversight.
- Use a consistent likelihood × impact scoring model and map each risk to owners, controls, and evidence.
- Link the AI risk register to CAPA, supplier governance, and management review.
Overview & principles
Clause 6.1 requires a structured process to identify, analyse, evaluate, treat, and monitor AI risks throughout the lifecycle. Align with ISO 31000 concepts and incorporate EU/UK regulatory expectations (e.g., EU AI Act risk classes, ICO guidance). The outcome is a living AI Risk Register with clear ownership, treatment plans, and measurable residual risk.
Risk framework structure
- Governance: AIMS Manager owns the framework; Authorising Officer approves risk appetite and residual acceptances.
- Scope: All AI systems that affect people, safety, compliance, or material business outcomes.
- Process: Identify → Analyse → Evaluate → Treat → Monitor → Review (PDCA-aligned).
- Integration: Connect with incident management, Supplier Governance, and Management Review (Clause 9.3).
Risk identification
Use both top-down (scenario/threat) and bottom-up (data/model metrics) discovery. Catalogue risks per AI system and lifecycle stage.
- Data risks: bias in training data, provenance gaps, licensing/consent issues, data leakage, drift.
- Model risks: hallucination, lack of explainability, robustness to adversarial prompts, overfitting, unsafe autonomy.
- Operational risks: failed human oversight, inadequate logging, change control lapses, monitoring blind spots.
- Security risks: prompt-injection, exfiltration, poisoning, supply-chain compromise, dependency vulnerabilities.
- Legal/ethical risks: GDPR/UK GDPR non-compliance, AI Act classification, discriminatory impact, transparency failures.
Analysis & evaluation
- Likelihood scale: 1 (Very Low) to 5 (Very High).
- Impact scale: 1 (Negligible) to 5 (Severe) — consider harm to individuals/society, legal exposure, safety.
- Risk score: Likelihood × Impact = 1–25, with RAG: Green ≤5, Amber 6–14, Red ≥15.
- Control mapping: classify controls as Preventive / Detective / Corrective and link each to the relevant ISO/IEC 42001 clauses and Annex A controls (e.g., human oversight, transparency, security).
- Acceptance criteria: define residual thresholds per risk class; anything Red requires mitigation or AO approval.
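A minimal sketch of the scoring model above, assuming the stated 1–5 scales and RAG thresholds; the function names are illustrative, not a prescribed API.

```python
# Sketch of the 5x5 scoring model: Likelihood x Impact with the RAG bands
# stated above (Green <= 5, Amber 6-14, Red >= 15).

def risk_score(likelihood: int, impact: int) -> int:
    """Likelihood (1-5) x Impact (1-5) -> score of 1-25."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    return likelihood * impact


def rag_band(score: int) -> str:
    """Map a 1-25 score onto the framework's RAG bands."""
    if score <= 5:
        return "Green"
    if score <= 14:
        return "Amber"
    return "Red"


# Worked example matching the risk entry later on this page: 4 x 5 = 20 (Red).
assert rag_band(risk_score(4, 5)) == "Red"
```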
Treatment & acceptance
- Options: Mitigate (controls), Avoid (change design), Transfer (contract/insurance), Accept (with AO sign-off).
- Every risk has an owner, treatment plan, and due date; tie actions to CAPA IDs.
- Recalculate residual score after controls; document justification for acceptance.
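As a sketch, the acceptance rule above can be expressed as a simple gate. It reuses risk_score and rag_band from the scoring sketch earlier; the returned message strings are illustrative only.

```python
# Acceptance gate: residual scores are recalculated after controls, and
# anything still Red needs further mitigation or explicit AO sign-off.

def acceptance_decision(residual_likelihood: int, residual_impact: int,
                        ao_signoff: bool = False) -> str:
    residual = risk_score(residual_likelihood, residual_impact)
    band = rag_band(residual)
    if band == "Red" and not ao_signoff:
        return f"Residual {residual} (Red): further mitigation or AO sign-off required"
    if band == "Red":
        return f"Residual {residual} (Red): accepted with AO sign-off on record"
    return (f"Residual {residual} ({band}): check against the risk class's "
            f"residual threshold and document the justification")


# Worked example matching the entry later on this page: residual 3 x 3 = 9 (Amber).
print(acceptance_decision(3, 3))
```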
Risk register & controls
Template — AI Risk Register fields
- ID, AI System, Description, Clause Ref, Likelihood, Impact, Score, Owner, Controls (P/D/C), Residual, Status, CAPA ID, Evidence Link.
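One way to carry these fields programmatically is a simple record type. The field types and the P/D/C controls sub-structure below are assumptions, not part of the template.

```python
from dataclasses import dataclass

# One row of the AI Risk Register using the fields listed above.

@dataclass
class RiskEntry:
    id: str                    # e.g. "AI-R-015"
    ai_system: str
    description: str
    clause_ref: str
    likelihood: int            # 1-5
    impact: int                # 1-5
    owner: str
    controls: dict             # {"P": [...], "D": [...], "C": [...]}
    residual: int              # residual score after controls
    status: str
    capa_id: str
    evidence_link: str

    @property
    def score(self) -> int:
        """Inherent score = Likelihood x Impact."""
        return self.likelihood * self.impact
```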
Monitoring & review
- Quarterly review or after major releases/incidents; refresh scores with live bias/robustness/incident metrics.
- Trigger re-assessment on supplier changes, model upgrades, or new jurisdictions.
- Feed results to Management Review and include in surveillance evidence.
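A sketch of how the quarterly cadence and the event triggers above could be combined into one re-assessment check; the parameter names and the roughly quarterly interval are assumptions.

```python
from datetime import date, timedelta

# Illustrative re-assessment trigger for the review rules above.
REVIEW_INTERVAL = timedelta(days=92)  # roughly one quarter

def needs_reassessment(last_reviewed: date,
                       major_release: bool = False,
                       incident: bool = False,
                       supplier_change: bool = False,
                       model_upgrade: bool = False,
                       new_jurisdiction: bool = False) -> bool:
    overdue = date.today() - last_reviewed > REVIEW_INTERVAL
    return overdue or any([major_release, incident, supplier_change,
                           model_upgrade, new_jurisdiction])
```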
Integration with AIMS & EU AI Act
- Map systems to AI Act risk categories; if high-risk, ensure technical documentation, post-market monitoring, and oversight patterns are in place.
- Connect risks to Supplier Governance (third-party controls) and Incident Management.
- Align DPIAs and transparency statements with risk outcomes.
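An illustrative gate for the AI Act mapping step: the tier names follow the EU AI Act (unacceptable, high, limited, minimal risk), and the artefact list mirrors only the items named in this section, not the Act's full obligations.

```python
# Illustrative gate for the AI Act mapping step above.

def required_artefacts(ai_act_tier: str) -> list[str]:
    tier = ai_act_tier.strip().lower()
    if tier == "unacceptable":
        raise ValueError("Prohibited practice: the system must not be deployed")
    if tier == "high":
        return ["technical documentation",
                "post-market monitoring plan",
                "human oversight pattern"]
    if tier == "limited":
        return ["transparency statement"]
    return []  # minimal risk: standard AIMS controls still apply
```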
KPIs & dashboards
- % of identified risks with implemented controls ≥ 90%.
- Median time to treat Red risks ≤ 30 days.
- Residual risk trend quarter-on-quarter (↓ preferred).
- % of AI systems with a risk assessment updated within the last 6 months.
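A sketch of how the first two KPIs could be computed from the register, assuming rows shaped like the RiskEntry sketch above; the raised/treated dates for Red risks are illustrative inputs, not fields of the template.

```python
from statistics import median
from datetime import date

def pct_risks_with_controls(entries) -> float:
    """% of identified risks with at least one implemented control (target >= 90%)."""
    if not entries:
        return 0.0
    with_controls = sum(1 for e in entries if any(e.controls.values()))
    return 100.0 * with_controls / len(entries)


def median_days_to_treat_red(red_risk_dates: list[tuple[date, date]]) -> float:
    """Median days from (raised, treated) pairs for Red risks (target <= 30)."""
    return median((treated - raised).days for raised, treated in red_risk_dates)
```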
Templates & examples
Example — Risk Entry
ID: AI-R-015
AI System: Credit Decisioning RAG+LLM
Description: Disparate impact on protected groups due to biased retrieval context.
Likelihood: 4 • Impact: 5 • Score: 20 (Red)
Controls: (P) bias-aware sampling, (D) fairness monitor weekly, (C) human override w/ rollback
Treatment: retrain on balanced corpus; add demographic parity check; tighten retrieval filters
Residual: 3 × 3 = 9 (Amber) • Owner: ML Lead • Status: In progress (CAPA-123)
Evidence: Fairness report Q4 2025; eval dashboards; oversight logs
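For traceability, the same entry can be captured as a record using the register sketch earlier; the clause reference below is assumed for illustration only.

```python
# The entry above as a RiskEntry record (see the register sketch earlier).
ai_r_015 = RiskEntry(
    id="AI-R-015",
    ai_system="Credit Decisioning RAG+LLM",
    description="Disparate impact on protected groups due to biased retrieval context",
    clause_ref="Clause 6.1",  # assumed; not stated in the entry above
    likelihood=4,
    impact=5,
    owner="ML Lead",
    controls={
        "P": ["bias-aware sampling"],
        "D": ["weekly fairness monitor"],
        "C": ["human override with rollback"],
    },
    residual=9,  # 3 x 3 after treatment (Amber)
    status="In progress",
    capa_id="CAPA-123",
    evidence_link="Fairness report Q4 2025; eval dashboards; oversight logs",
)

assert ai_r_015.score == 20  # inherent score before treatment (Red)
```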
Common pitfalls & mitigation
- Static register: make it operational — auto-ingest KPI feeds and incident refs.
- No AO approval: record explicit sign-off for accepted residual risks.
- Generic controls: bind each risk to a specific system, metric, and log evidence.
- Poor traceability: unique IDs, CAPA linkage, and immutable snapshots for audits.
Implementation checklist
- Risk framework approved; appetite stated and published.
- Risk register populated (owners, due dates, evidence links).
- Scoring model applied consistently across systems.
- Treatment plans executed; residuals documented and approved.
- Quarterly reviews evidenced; trends reported to governance.
© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 08 Nov 2025 • This page is general guidance, not legal advice.
Related Articles
- AI Risk Management Framework (ISO 42001 + NIST AI RMF Mapping)
- Management Review & Performance KPIs (EU/UK aligned)
- Internal Audit & Evidence Management (EU/UK aligned)
- Training, Competence & Awareness Framework
- Human Oversight (EU/UK aligned)