Master AI Policy — Purpose, Roles, Requirements & Enforcement
Governance & Policies · EU/UK Aligned
Key takeaways
- The Master AI Policy defines how AI systems are governed across their entire lifecycle.
- It enforces accountability, transparency, fairness, and oversight in all AI-related activities.
- It aligns with ISO/IEC 42001, NIST AI RMF, EU AI Act, and UK regulatory guidance (DSIT, ICO).
Purpose & alignment
The purpose of this policy is to ensure that all AI systems developed, deployed, or used by Zen AI Governance UK Ltd adhere to ethical, legal, and technical standards of trustworthiness and accountability.
This policy integrates and aligns with:
- ISO/IEC 42001: AI Management System (AIMS) governance requirements.
- NIST AI Risk Management Framework: Govern–Map–Measure–Manage functions.
- EU AI Act (Regulation (EU) 2024/1689): Obligations for providers and deployers of high-risk AI systems.
- UK DSIT AI Regulation Framework (2024): Principles of safety, transparency, accountability, fairness, and contestability.
Scope of application
- Applies to all AI systems, datasets, and algorithms designed, procured, or operated by Zen AI Governance.
- Covers internal tools, client-facing systems, and AI used by third-party suppliers.
- Includes models from classical ML to LLMs, agentic workflows, and autonomous components.
- Applies across design, training, validation, deployment, operation, and retirement phases.
Core governance principles
- Accountability: Clear lines of responsibility for every AI system and decision.
- Transparency: Documented purpose, logic, data sources, and decision boundaries.
- Fairness & non-discrimination: Bias detection and correction mechanisms must be in place.
- Human oversight: Meaningful intervention and escalation channels available at all times.
- Safety & robustness: Systems must be designed with defence-in-depth controls.
- Data governance: Data quality, minimisation, and lawful processing assured through DPO review.
- Continuous improvement: Governance, controls, and models refined over time through audit evidence.
Roles & responsibilities
- AI Governance Board: Approves policy, oversees risk, and reports to the Director.
- Authorising Officer: Accountable for AI compliance and ISO 42001 certification scope.
- Compliance Lead: Maintains policies, tracks audits, and monitors regulatory changes.
- AI Product Owners: Ensure adherence to policy in design and operation phases.
- Developers & Data Scientists: Apply fairness, interpretability, and explainability controls.
- Human Oversight Officers: Manage escalation and post-deployment supervision.
- All employees: Must complete AI ethics & compliance training annually.
Mandatory policy requirements
- All AI systems must undergo AI Impact Assessment (AIA) before deployment.
- Each system must have an AI System Record (AISR) capturing ownership, purpose, and risk rating.
- High-risk systems must have Post-Market Monitoring (PMM) dashboards operational before go-live.
- Each AI model must maintain a Model Card and Data Sheet for Datasets.
- Evidence must be captured via the Integrated Evidence Management System with an EV-ID per artefact.
- All changes require formal review and re-approval by the AI Governance Board.
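The AI System Record (AISR) and its link to deployment gating can be sketched as a simple data structure. This is an illustrative sketch only: the field names, the `RiskRating` levels, and the `deployment_ready` rule are assumptions for demonstration, not the organisation's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskRating(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AISystemRecord:
    """Illustrative AI System Record (AISR); fields are assumptions."""
    system_id: str
    owner: str                     # accountable AI Product Owner
    purpose: str                   # documented intended use
    risk_rating: RiskRating
    aia_completed: bool = False    # AI Impact Assessment done pre-deployment
    evidence_ids: list[str] = field(default_factory=list)  # EV-IDs per artefact

    def deployment_ready(self) -> bool:
        # Mirrors the requirements above: an AIA must be completed and at
        # least one evidence artefact captured before go-live.
        return self.aia_completed and len(self.evidence_ids) > 0
```

A record like this makes the mandatory requirements machine-checkable, so a CI/CD gate can refuse deployment when the AIA or evidence trail is missing.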
AI lifecycle governance process
1. PLAN — Define objectives, risks, and metrics (AIA + RMF “MAP”)
2. DESIGN — Select models, datasets, and oversight structures
3. BUILD — Implement controls and training with explainability
4. VALIDATE — Test for bias, robustness, and compliance
5. DEPLOY — Verify governance sign-off and system readiness
6. MONITOR — Operate PMM dashboards and human oversight
7. IMPROVE — Capture audit findings, update policy, re-train models
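The seven phases above form a repeating cycle, which can be sketched as a minimal state machine. The phase names mirror the list; the looping behaviour (IMPROVE feeding back into PLAN) is an assumption consistent with the continuous-improvement principle.

```python
# Ordered lifecycle phases from the governance process above.
LIFECYCLE = ["PLAN", "DESIGN", "BUILD", "VALIDATE", "DEPLOY", "MONITOR", "IMPROVE"]

def next_phase(current: str) -> str:
    """Return the phase that follows `current`; IMPROVE loops back to PLAN."""
    i = LIFECYCLE.index(current)  # raises ValueError for unknown phases
    return LIFECYCLE[(i + 1) % len(LIFECYCLE)]
```

Encoding the sequence this way lets tooling enforce that, for example, a system cannot move to DEPLOY without having passed through VALIDATE.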
Compliance & legal alignment
This policy is harmonised with major global regulatory frameworks:
- ISO/IEC 42001: Clauses 4–10, focusing on AI-specific management controls.
- NIST AI RMF: GOVERN and MANAGE functions.
- EU AI Act: Art. 9–15 and Annex IV (technical documentation).
- UK ICO Guidance: AI and Data Protection (lawful processing, fairness, explainability).
- OECD AI Principles: Transparency, robustness, and accountability for public trust.
Monitoring & review
- Quarterly internal audits assess policy compliance and evidence coverage.
- Annual management review evaluates KPIs, incidents, and emerging risks.
- All AI policies reviewed annually or upon regulatory change.
- Review findings captured in the Management Review Pack with CAPA assignments.
Breach management & enforcement
- Non-conformities logged in the CAPA tracker with assigned owners and deadlines.
- Repeated or critical breaches trigger escalation to the AI Governance Board.
- Disciplinary actions follow HR policy for negligence or deliberate non-compliance.
- Clients notified of material impacts under contract or regulatory obligations.
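A CAPA tracker entry and its escalation rule can be sketched as follows. The field names and the exact escalation threshold are illustrative assumptions; the rule itself reflects the policy above, where repeated or critical breaches go to the AI Governance Board.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CAPAEntry:
    """Illustrative CAPA tracker entry; fields are assumptions."""
    capa_id: str
    owner: str               # assigned owner accountable for remediation
    deadline: date           # remediation deadline
    severity: str            # e.g. "minor", "major", or "critical"
    recurrence_count: int = 1

    def requires_board_escalation(self) -> bool:
        # Policy rule: repeated or critical breaches trigger escalation
        # to the AI Governance Board.
        return self.severity == "critical" or self.recurrence_count > 1
```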
Implementation checklist
- AI Governance Board appointed and terms of reference approved.
- AI Impact Assessment template in use for all projects.
- Evidence system integrated with CI/CD and PMM tools.
- Quarterly policy compliance reports submitted to Director.
- Annual refresher training completed by all AI personnel.
© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 14 Nov 2025 • This page is general guidance, not legal advice.