Unified Risk Register Template (ISO + NIST + EU AI Act)
ISO 42001 ↔ NIST AI RMF ↔ EU AI Act Alignment
Key takeaways
- One risk register for all frameworks prevents duplication and audit gaps.
- Every risk is linked to a use case, model version, controls, evidence, and CAPA.
- Scoring must be consistent, explainable, and reviewed after material change.
Overview & goals
The Unified Risk Register is the single authoritative source of truth for AI risk. It integrates ISO/IEC 42001 planning and operations requirements, the NIST AI RMF functions, and EU AI Act obligations.
It enables traceability from risk identification → control selection → monitoring metrics → CAPA closure → management review.
Data model & required fields
Each row (risk record) must include the following minimum fields:
- Risk ID (e.g., AI-RSK-2025-014)
- Use Case / System (name, version, owner, environment)
- Model Type (LLM, RAG, Classifier, Recommender, Agent)
- EU AI Act Category (Minimal/Limited/High/Prohibited + Annex III ref if applicable)
- Risk Title & Description (concise + what could go wrong)
- Harm Domains (technical, ethical, privacy, legal, operational, societal)
- Causes / Triggers (e.g., prompt injection, data drift)
- Existing Controls (design/operational/oversight controls)
- Likelihood (1–5) & Impact (1–5) & Detectability (1–5)
- Risk Score (RPN) = L × I × D
- Risk Owner (role & contact)
- Treatment (mitigate/transfer/accept/avoid) & Waiver Ref (if accept)
- Planned Actions (with due dates & SLA)
- Residual Score (post-control RPN) & Status (Open/Monitoring/Closed)
- Evidence Links (Audit ID, PMM reports, test runs, model card, data sheet)
- Review Cycle (monthly/quarterly/after-change) & Last Review
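The field list above can be sketched as a typed record for tooling. This is a minimal illustration; the names, types, and defaults are our own choices, not mandated by ISO, NIST, or the EU AI Act:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RiskRecord:
    # Identity and scope
    risk_id: str                # e.g. "AI-RSK-2025-014"
    use_case: str
    system_version: str
    owner: str
    model_type: str             # LLM / RAG / Classifier / Recommender / Agent
    eu_category: str            # Minimal / Limited / High / Prohibited
    annex_iii_ref: Optional[str]
    # Description
    risk_title: str
    risk_description: str
    harm_domains: list[str]
    causes: list[str]
    existing_controls: list[str]
    # Scoring (1-5 each)
    likelihood: int
    impact: int
    detectability: int
    # Treatment and tracking
    treatment: str              # mitigate / transfer / accept / avoid
    waiver_ref: Optional[str] = None
    planned_actions: list[dict] = field(default_factory=list)
    residual_rpn: Optional[int] = None
    status: str = "Open"
    evidence_links: list[str] = field(default_factory=list)
    review_cycle: str = "Quarterly"
    last_review: Optional[str] = None

    @property
    def rpn(self) -> int:
        # Risk Priority Number = Likelihood x Impact x Detectability
        return self.likelihood * self.impact * self.detectability
```

Deriving the RPN as a property (rather than storing it) avoids the common audit finding of a stored score that no longer matches its factors.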
Scoring model (Likelihood × Impact × Detectability)
- Scale 1–5 for each factor; calibrate with examples to ensure consistency.
- RPN bands: 1–25 (Low), 26–50 (Medium), 51–75 (High), 76–125 (Critical).
- Detectability: higher score = harder to detect → increases risk; low detectability should drive investment in monitoring.
- Thresholds: Critical requires immediate CAPA + Oversight review; High requires action within 30 days.
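A minimal scoring helper, assuming the 1–5 scales and RPN bands defined above:

```python
def rpn_band(likelihood: int, impact: int, detectability: int) -> tuple[int, str]:
    """Compute RPN = L x I x D and map it to the register's bands."""
    for factor in (likelihood, impact, detectability):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be on the 1-5 scale")
    rpn = likelihood * impact * detectability
    if rpn <= 25:
        band = "Low"
    elif rpn <= 50:
        band = "Medium"
    elif rpn <= 75:
        band = "High"
    else:
        band = "Critical"
    return rpn, band
```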
Standards mapping (ISO / NIST / EU)
| Register Field | ISO/IEC 42001 | NIST AI RMF | EU AI Act |
|---|---|---|---|
| Use Case / System | §4.3, §8 | MAP | Annex III (classification) |
| Risk Description & Causes | §6.1 | MAP | Art. 9 RMS |
| Controls & Metrics | §8 (ops), §9.1 | MEASURE | Art. 15 (accuracy/robustness) |
| Treatment & CAPA | §10 | MANAGE | Art. 62 (corrective) |
| Evidence Links | §9.2 audit | GOVERN/MANAGE | Annex IV (tech docs) |
Lifecycle workflow & SLAs
New → Assessed → Approved → Mitigating → Monitoring → Closed
SLA: Critical = 7d; High = 30d; Medium = 60d; Low = 90d
Triggers to Reassess: retrain, data shift >5%, policy/regulatory change, incident.
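The SLAs above translate directly into CAPA due dates. A sketch, assuming the assessment date starts the clock:

```python
from datetime import date, timedelta

# SLA days per band, per the lifecycle section above
SLA_DAYS = {"Critical": 7, "High": 30, "Medium": 60, "Low": 90}

def capa_due_date(band: str, assessed_on: date) -> date:
    """Return the CAPA due date implied by the band's SLA."""
    return assessed_on + timedelta(days=SLA_DAYS[band])
```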
CSV Header Template (copy into Excel/Sheets)
RiskID,UseCase,SystemVersion,Owner,ModelType,EUCategory,AnnexIIIRef,RiskTitle,RiskDescription,HarmDomains,Causes,ExistingControls,Likelihood,Impact,Detectability,RPN,Treatment,WaiverRef,PlannedActions,ResidualRPN,Status,EvidenceLinks,ReviewCycle,LastReview,Notes
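For programmatic export, the same header can drive Python's csv module; `write_register` is a hypothetical helper name:

```python
import csv
import io

CSV_HEADER = ("RiskID,UseCase,SystemVersion,Owner,ModelType,EUCategory,"
              "AnnexIIIRef,RiskTitle,RiskDescription,HarmDomains,Causes,"
              "ExistingControls,Likelihood,Impact,Detectability,RPN,Treatment,"
              "WaiverRef,PlannedActions,ResidualRPN,Status,EvidenceLinks,"
              "ReviewCycle,LastReview,Notes").split(",")

def write_register(rows: list[dict]) -> str:
    """Serialise risk records to CSV text; missing fields are left blank."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=CSV_HEADER, restval="")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```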
Example JSON record (for tool integration / APIs)
```json
{
  "riskId": "AI-RSK-2025-014",
  "useCase": "Customer Chatbot – Complaints",
  "systemVersion": "v2.3",
  "owner": "AI Operations Lead",
  "modelType": "LLM",
  "euCategory": "High",
  "annexIIIRef": "5(b)",
  "riskTitle": "Hallucinated complaint resolution",
  "riskDescription": "LLM proposes incorrect remedy without citation causing customer harm.",
  "harmDomains": ["technical", "ethical", "legal", "operational"],
  "causes": ["poor retrieval", "prompt injection", "insufficient guardrails"],
  "existingControls": ["RAG provenance", "post-gen moderation", "human oversight"],
  "likelihood": 3,
  "impact": 4,
  "detectability": 3,
  "rpn": 36,
  "treatment": "mitigate",
  "waiverRef": null,
  "plannedActions": [
    {"action": "tighten similarity threshold to ≥0.82", "due": "2025-12-15"},
    {"action": "escalate uncertainty >25% to human", "due": "2025-11-30"}
  ],
  "residualRpn": 24,
  "status": "Monitoring",
  "evidenceLinks": ["EV-42001-091", "AUD-INT-2025-07", "PMM-2025-Q4"],
  "reviewCycle": "Quarterly",
  "lastReview": "2025-11-10",
  "notes": "Add adversarial eval set"
}
```
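A record in this shape can be sanity-checked before it enters the register. The `validate_record` helper below is an illustrative sketch of checks implied by the field definitions above (RPN consistency, waiver required on acceptance, at least one evidence link); the rule set is ours, not a prescribed standard:

```python
def validate_record(rec: dict) -> list[str]:
    """Return a list of validation problems (empty list = record passes)."""
    problems = []
    # Minimum required fields from the data model section
    for key in ("riskId", "useCase", "owner", "likelihood", "impact",
                "detectability", "treatment", "evidenceLinks"):
        if key not in rec:
            problems.append(f"missing field: {key}")
    # Stored RPN must equal L x I x D
    if all(k in rec for k in ("likelihood", "impact", "detectability", "rpn")):
        expected = rec["likelihood"] * rec["impact"] * rec["detectability"]
        if rec["rpn"] != expected:
            problems.append(f"rpn {rec['rpn']} != L*I*D = {expected}")
    # Accepting a risk requires a waiver reference
    if rec.get("treatment") == "accept" and not rec.get("waiverRef"):
        problems.append("treatment 'accept' requires a waiverRef")
    # Every record needs at least one evidence link
    if not rec.get("evidenceLinks"):
        problems.append("at least one evidence link is required")
    return problems
```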
Worked examples (LLM, RAG, Classifier)
- LLM — Toxic output risk: L=2, I=4, D=3 → RPN=24 (Low). Controls: toxicity filter, human-in-the-loop, RLHF; Residual=12.
- RAG — Broken citation risk: L=3, I=3, D=4 → RPN=36 (Medium). Controls: provenance ledger, cosine similarity ≥0.80, footnoted source links; Residual=18.
- Classifier — Demographic bias: L=3, I=5, D=3 → RPN=45 (Medium). Controls: parity/TPR gap ≤5%, periodic reweighting, fairness monitors; Residual=25.
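The arithmetic in these examples can be checked mechanically; the residual values assume the listed controls lower one or more factors:

```python
# Worked examples above: (name, L, I, D, residual RPN after controls)
examples = [
    ("LLM - toxic output",            2, 4, 3, 12),
    ("RAG - broken citation",         3, 3, 4, 18),
    ("Classifier - demographic bias", 3, 5, 3, 25),
]

for name, l, i, d, residual in examples:
    rpn = l * i * d
    # Controls must reduce, not merely restate, the inherent score
    assert residual < rpn, f"{name}: residual {residual} not below RPN {rpn}"
    print(f"{name}: inherent RPN={rpn}, residual={residual}")
```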
Dashboards & thresholds
- Register Coverage %: all systems with active risk records (target ≥ 95%).
- High/Critical Count: by business unit; trend must be ↓ quarter-on-quarter.
- Overdue Actions %: CAPA past SLA (target ≤ 5%).
- Residual Risk Heatmap: by model type and region.
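A sketch of how these KPIs could be computed, assuming an AI-system inventory and register rows with illustrative `system`, `band`, and `overdue` fields:

```python
def register_metrics(inventory: set[str], records: list[dict]) -> dict:
    """Compute dashboard KPIs from a system inventory and register rows."""
    covered = {r["system"] for r in records}
    high_critical = sum(r.get("band") in ("High", "Critical") for r in records)
    overdue = sum(bool(r.get("overdue")) for r in records)
    return {
        # Register Coverage %: systems with at least one risk record
        "coverage_pct": 100.0 * len(inventory & covered) / len(inventory)
                        if inventory else 0.0,
        # High/Critical Count
        "high_critical_count": high_critical,
        # Overdue Actions %: CAPA past SLA
        "overdue_actions_pct": 100.0 * overdue / len(records)
                               if records else 0.0,
    }
```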
Common pitfalls & good practice
- Risk duplication: merge similar risks under one control set with system-specific qualifiers.
- Uncalibrated scoring: hold quarterly calibration sessions with real examples.
- Missing detectability: include D to reflect monitoring maturity.
- No evidence link: every record must have at least one Evidence ID.
Implementation checklist
- CSV/JSON templates approved by Compliance Lead.
- Risk register integrated with PMM, CAPA, and audit repositories.
- Thresholds and SLAs configured; alerts routed to Oversight.
- Quarterly calibration & Management Review in place.
- Evidence reuse enabled for ISO audits and EU technical documentation.
© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 13 Nov 2025 • This page is general guidance, not legal advice.