Creating AI Risk Profiles by Use Case & Model Type

Zen AI Governance — Knowledge Base • EU/UK alignment • Updated 11 Nov 2025 • www.zenaigovernance.com

NIST AI RMF Implementation Risk Profiling & Governance
Key takeaways
  • AI Risk Profiles standardise risk assessment across all AI use cases and model types.
  • Profiles record intended purpose, potential harms, mitigations, and regulatory classification.
  • Profiles are required artefacts under ISO/IEC 42001 Clause 6.1 and support the NIST AI RMF MAP function and EU AI Act high-risk obligations (Annex III).

Overview & purpose

An AI Risk Profile describes the specific risks associated with a given AI use case or model type. It links context, technical design, data dependencies, potential harms, and applicable controls into a unified record. Risk profiles help Zen AI Governance apply consistent risk decisions, monitor changes, and demonstrate compliance to regulators and auditors.

Methodology & structure

Each AI Risk Profile follows a 6-step structured method aligned to ISO/IEC 42001 and NIST RMF:

  1. Define the use case: Business goal, scope, users, affected parties, system boundaries.
  2. Classify risk level: Using the EU AI Act risk tiers (Minimal / Limited / High-risk / Prohibited); Annex III lists the high-risk use cases.
  3. Identify potential harms: Ethical, safety, legal, societal, and operational impacts.
  4. Map controls: Corresponding mitigations across data, model, and process controls.
  5. Rate likelihood & impact: Using 1–5 scales (consistent with AIMS Risk Register).
  6. Document oversight, evidence & status: Owner, review cycle, and supporting documentation links.
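
Step 5 of the method above can be sketched in code. This is a minimal illustration assuming the 1–5 likelihood and impact scales named in the text; the band thresholds below are hypothetical examples, not the actual AIMS Risk Register bands.

```python
def rate_risk(likelihood: int, impact: int) -> tuple[int, str]:
    """Combine 1-5 likelihood and impact ratings into a score and band.

    Band cut-offs here are illustrative assumptions; align them with
    the AIMS Risk Register before use.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on the 1-5 scale")
    score = likelihood * impact  # 1..25
    if score <= 6:
        band = "Low"
    elif score <= 14:
        band = "Medium"
    else:
        band = "High"
    return score, band
```

For example, a likelihood of 3 and an impact of 4 gives a score of 12, landing in the Medium band under these assumed thresholds.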

Risk taxonomy & categories

Zen AI Governance uses the following risk taxonomy for profiling AI systems:

  • Technical Risks: Bias, robustness, model drift, explainability failure.
  • Ethical Risks: Discrimination, manipulation, autonomy erosion.
  • Data Risks: Privacy, security, accuracy, provenance errors.
  • Operational Risks: Incorrect deployment, monitoring gaps, process dependency.
  • Legal & Regulatory Risks: Breach of GDPR, IP, sector-specific AI obligations.
  • Societal Risks: Disinformation, human harm, reputational damage.
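
The taxonomy above lends itself to a fixed enumeration so that profiles tag risks consistently. A minimal sketch (the enum and dictionary names are assumptions for illustration):

```python
from enum import Enum


class RiskCategory(Enum):
    """The six profiling categories from the Zen AI Governance taxonomy."""
    TECHNICAL = "Technical"
    ETHICAL = "Ethical"
    DATA = "Data"
    OPERATIONAL = "Operational"
    LEGAL_REGULATORY = "Legal & Regulatory"
    SOCIETAL = "Societal"


# Example tags per category, taken from the bullet list above (illustrative, not exhaustive)
EXAMPLE_RISKS: dict[RiskCategory, list[str]] = {
    RiskCategory.TECHNICAL: ["bias", "robustness", "model drift", "explainability failure"],
    RiskCategory.DATA: ["privacy", "security", "accuracy", "provenance errors"],
}
```

Using an enum rather than free-text category strings keeps the Risk Register, CAPA tracker, and audit logs queryable by a single shared vocabulary.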

Templates & data fields

AI Risk Profile Template (Standard Fields)
Profile ID: RPF-2025-001
Use Case: Customer Chatbot – Complaint Handling
Model Type: Large Language Model (LLM)
Purpose: Automate first-line complaint triage
Owner: AI Operations Lead
Regulatory Category: High-risk (EU AI Act Annex III, 5(b))
Key Risks:
 • Bias in response or escalation
 • Privacy leakage via prompt injection
 • Hallucinated resolutions or misinformation
Risk Controls:
 • Prompt filtering & RLHF tuning
 • Human oversight escalation
 • Audit logging & secure context memory
Residual Risk: Medium (score 12 of 25, per the 1–5 likelihood × impact scales)
Review Frequency: Quarterly
Evidence Linked: CAPA#014, Audit Report AIMS-AUD-23-04
Status: Active ✅
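
The standard fields above can also be held as a machine-readable record, which makes versioning and repository storage straightforward. A sketch, assuming hypothetical field names derived from the template labels:

```python
from dataclasses import dataclass, field


@dataclass
class AIRiskProfile:
    """Machine-readable sketch of the standard template fields (names assumed)."""
    profile_id: str
    use_case: str
    model_type: str
    purpose: str
    owner: str
    regulatory_category: str
    key_risks: list[str] = field(default_factory=list)
    risk_controls: list[str] = field(default_factory=list)
    residual_risk: str = ""
    review_frequency: str = "Quarterly"
    evidence_linked: list[str] = field(default_factory=list)
    status: str = "Draft"


# The worked example from the template above
profile = AIRiskProfile(
    profile_id="RPF-2025-001",
    use_case="Customer Chatbot – Complaint Handling",
    model_type="Large Language Model (LLM)",
    purpose="Automate first-line complaint triage",
    owner="AI Operations Lead",
    regulatory_category="High-risk (EU AI Act Annex III)",
    key_risks=[
        "Bias in response or escalation",
        "Privacy leakage via prompt injection",
        "Hallucinated resolutions or misinformation",
    ],
    status="Active",
)
```

A record like this serialises cleanly to JSON or YAML for the AIMS repository while keeping required fields explicit.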
  

Model type examples

  • Classification Models: Credit scoring, medical diagnostics — risks of bias, explainability gaps, false negatives.
  • Generative Models (LLMs): Hallucinations, misinformation, IP leakage, toxicity.
  • Recommendation Engines: Filter bubbles, manipulation of user behaviour, unfair exposure.
  • RAG Systems: Source citation risk, outdated retrieval content, broken provenance links.
  • Autonomous Agents: Escalation delay, human override limits, cascading effects.
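
The model-type examples above can seed new profiles automatically: a lookup from model type to its characteristic risks. The keys and lists below simply restate this page's examples and are not an exhaustive register.

```python
# Characteristic risks by model type, taken from the examples on this page
MODEL_TYPE_RISKS: dict[str, list[str]] = {
    "classification": ["bias", "explainability gaps", "false negatives"],
    "generative_llm": ["hallucinations", "misinformation", "IP leakage", "toxicity"],
    "recommendation": ["filter bubbles", "behaviour manipulation", "unfair exposure"],
    "rag": ["source citation risk", "outdated retrieval content", "broken provenance links"],
    "autonomous_agent": ["escalation delay", "human override limits", "cascading effects"],
}


def default_risks(model_type: str) -> list[str]:
    """Return the characteristic risks to pre-populate a new profile with."""
    return MODEL_TYPE_RISKS.get(model_type.lower(), [])
```

Pre-populating a draft profile this way ensures known risks for a model family are at least considered, while the profile author remains responsible for use-case-specific additions.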

Integration with ISO/NIST/EU AI Act

  • ISO/IEC 42001: Clause 6.1 (Actions to Address Risks and Opportunities), 8.2 (AI Risk Assessment), 9.1 (Monitoring, Measurement, Analysis and Evaluation).
  • NIST AI RMF: MAP and MANAGE functions – contextualisation, risk evaluation, mitigation tracking.
  • EU AI Act: Articles 9–15 (risk management and requirements for high-risk systems) and the Annex III high-risk classifications, linking profiles to post-market monitoring obligations.

Review & maintenance

  • Risk Profiles reviewed quarterly or after any major model retraining or policy change.
  • Updates approved by Compliance Lead and stored in AIMS repository.
  • Versioning required (v1.x, v2.x, etc.) to maintain traceability over time.
  • Profiles feed Management Review input data for continuous improvement.
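
Two of the maintenance rules above, the quarterly review cycle and the v1.x versioning convention, can be sketched as small helpers. Function names and the calendar-month arithmetic are illustrative assumptions:

```python
from datetime import date

_DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]


def next_review(last_review: date, months: int = 3) -> date:
    """Schedule the next review `months` calendar months after the last one."""
    month = last_review.month - 1 + months
    year = last_review.year + month // 12
    month = month % 12 + 1
    days = _DAYS_IN_MONTH[month - 1]
    if month == 2 and (year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)):
        days = 29  # leap-year February
    # Clamp the day for short months (e.g. 31 Jan + 3 months -> 30 Apr)
    return date(year, month, min(last_review.day, days))


def bump_version(version: str, major: bool = False) -> str:
    """Apply the v1.x / v2.x convention: minor bump by default, major on request."""
    maj, minor = version.lstrip("v").split(".")
    return f"v{int(maj) + 1}.0" if major else f"v{maj}.{int(minor) + 1}"
```

A routine update after retraining would bump v1.3 to v1.4, while a change of regulatory classification might warrant a major bump to v2.0 with a fresh review date.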

Common pitfalls & good practice

  • Too generic: Each Risk Profile must be specific to a system and version.
  • No linkage: Cross-reference with AIMS Risk Register, CAPA, and Audit Logs.
  • Unreviewed assumptions: Regularly test residual risk assumptions using evidence.
  • Missing transparency: Include a short lay summary for users and regulators.

Implementation checklist

  • AI Risk Profile Template integrated into project onboarding workflow.
  • Profiles created and reviewed per use case and model type.
  • All risks linked to AIMS Risk Register and CAPA tracker.
  • Profiles approved by Compliance Lead and version-controlled.
  • Quarterly updates fed into AI Governance Board reporting.

© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 11 Nov 2025 • This page is general guidance, not legal advice.