Supplier & Third-Party Governance (ISO/IEC 42001:2023, EU/UK aligned)

Key takeaways
  • Third-party risk is a core ISO/IEC 42001 focus — suppliers must operate under controls equivalent to your AIMS.
  • Adopt a structured Supplier Governance Framework with due-diligence, contractual clauses, and continuous monitoring.
  • Maintain a live Supplier Register linking every vendor to risk ratings, evidence, and CAPA.

Overview & importance

External providers (model APIs, data vendors, annotation firms, cloud AI platforms) can directly influence AI safety, fairness, privacy and compliance. ISO/IEC 42001 expects you to specify how such suppliers are selected, controlled, and monitored. This article defines the policy, process and evidence model to achieve that in an EU/UK context.

Supplier policy & framework

  • Policy scope: any party whose service affects training data, model behaviour, evaluation, deployment, or user impact.
  • Governance: Procurement runs the workflow; AIMS Manager maintains the Supplier Register; Legal/Privacy approve contracts; Security validates technical controls.
  • Principles: transparency, proportionality (tiered control), auditability, and timely escalation.

Classification & risk tiers

  • Tier 1 — Critical: foundation-model APIs, dataset providers handling PII or special category data, managed inference/hosting; annual audits + quarterly reviews.
  • Tier 2 — Important: evaluation/monitoring vendors, guardrail services, vector DBs; semi-annual review.
  • Tier 3 — Low-impact: training vendors, advisory; biennial light review.
  • Criteria: data sensitivity, autonomy, regulatory exposure, sub-processor depth, availability dependency, geography (a tiering sketch follows this list).
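To make the criteria concrete, the sketch below maps per-criterion scores to a tier and its review cadence. It is illustrative only: the criterion names mirror the bullets above, but the weights, thresholds and the assign_tier helper are assumptions, not values prescribed by ISO/IEC 42001.

```python
# Illustrative only: weights and thresholds are assumptions, not values
# prescribed by ISO/IEC 42001.

CRITERIA = ["data_sensitivity", "autonomy", "regulatory_exposure",
            "subprocessor_depth", "availability_dependency", "geography"]

REVIEW_CADENCE = {1: "annual audit + quarterly review",
                  2: "semi-annual review",
                  3: "biennial light review"}

def assign_tier(scores: dict[str, int]) -> int:
    """Map per-criterion scores (1 = low risk, 5 = high risk) to a tier."""
    missing = set(CRITERIA) - scores.keys()
    if missing:
        raise ValueError(f"missing criteria: {missing}")
    avg = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
    # Any single maximum-risk criterion (e.g. special category data) forces Tier 1.
    if avg >= 3.5 or max(scores.values()) == 5:
        return 1
    if avg >= 2.0:
        return 2
    return 3

vendor = {"data_sensitivity": 5, "autonomy": 3, "regulatory_exposure": 4,
          "subprocessor_depth": 2, "availability_dependency": 4, "geography": 2}
tier = assign_tier(vendor)
print(tier, REVIEW_CADENCE[tier])   # -> 1 annual audit + quarterly review
```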

Due-diligence process

Perform due diligence (DD) before onboarding and again at each renewal. Record decisions and supporting evidence.

  • Questionnaire: AI safety/fairness controls, privacy/security posture, model cards, eval results, incident history, sub-processors, locations.
  • Evidence: ISO 27001/27701, SOC 2, AI governance statements, DPIA support, bias reports, penetration tests.
  • Scoring: rate 1–5 maturity per domain, calculate a composite risk score, and assign the tier and review cadence (a scoring sketch follows this list).
  • Approvals: Procurement → AIMS Manager → Legal/Privacy → Authorising Officer.
  • Conditions: CAPA or controls (e.g., data residency, key management) before go-live.
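A minimal sketch of the 1–5 maturity scoring and cadence assignment described above, assuming illustrative domain names, weights and risk thresholds; the composite_risk and review_cadence helpers are hypothetical, and the approval chain is shown only to indicate where the score feeds in.

```python
# Domain names, weights, and the risk thresholds below are illustrative
# assumptions; the approval chain mirrors the bullet above.

DOMAINS = {"ai_safety": 0.25, "privacy": 0.25, "security": 0.20,
           "fairness": 0.15, "incident_mgmt": 0.15}

APPROVAL_CHAIN = ["Procurement", "AIMS Manager", "Legal/Privacy", "Authorising Officer"]

def composite_risk(maturity: dict[str, int]) -> float:
    """Convert per-domain maturity (1 = weak, 5 = strong) to a 0-1 risk score."""
    weighted = sum(DOMAINS[d] * maturity[d] for d in DOMAINS)
    return round((5 - weighted) / 4, 2)   # high maturity -> low risk

def review_cadence(risk: float) -> str:
    if risk >= 0.6:
        return "Tier 1: quarterly reviews + annual audit"
    if risk >= 0.3:
        return "Tier 2: semi-annual review"
    return "Tier 3: biennial light review"

dd = {"ai_safety": 2, "privacy": 3, "security": 4, "fairness": 2, "incident_mgmt": 3}
risk = composite_risk(dd)
print(risk, review_cadence(risk))                        # 0.55 Tier 2: ...
print("Approvals required:", " -> ".join(APPROVAL_CHAIN))
```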

Contractual clauses

Contracts should embed the following obligations; a clause-checklist sketch follows this list.

  • AI obligations: cooperation with regulatory inquiries; model change notifications; evaluation data access; audit rights; incident notification within 72 hours.
  • Data protection: Article 28 processor clauses, SCCs/IDTA, DSR assistance, sub-processor transparency and consent.
  • Ethics & fairness: commitment to bias testing and human-oversight capability; harmful content safeguards.
  • Termination & exit: data return/destruction, transition support, escrow for critical artefacts.
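The sketch below shows one way to run a clause checklist at legal review. The clause identifiers paraphrase the bullets above and are not a legal template; adequacy of each clause remains a Legal/Privacy judgement.

```python
# Minimal clause checklist; clause identifiers are illustrative, not a legal
# template. Legal/Privacy review still decides adequacy of each clause.

REQUIRED_AI_CLAUSES = {
    "regulatory_cooperation", "model_change_notification", "eval_data_access",
    "audit_rights", "incident_notification_72h", "art28_processor_terms",
    "intl_transfer_mechanism", "subprocessor_transparency", "bias_testing",
    "human_oversight_support", "exit_and_data_return",
}

def missing_clauses(contract_clauses: set[str]) -> set[str]:
    """Return required clauses not yet present in the draft contract."""
    return REQUIRED_AI_CLAUSES - contract_clauses

draft = {"audit_rights", "art28_processor_terms", "incident_notification_72h"}
print(sorted(missing_clauses(draft)))
```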

Ongoing monitoring & assurance

  • Monthly or quarterly by tier: KPI review (availability, latency, harmful-content rate, fairness drift); confirm security posture and any sub-processor changes.
  • Attestations: refreshed ISO/SOC reports or supplier AI compliance statements at least annually.
  • Changes: suppliers must notify model, dataset, region, or sub-processor updates within 30 days; run an impact assessment on each notification.
  • Triggers: incidents, threshold breaches, or unresolved CAPA escalate to the Oversight Board / Release Board (see the escalation sketch below).
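A small escalation sketch, assuming illustrative KPI names and threshold values; the actual thresholds belong in each supplier's contract or SLA, and the breaches/escalate helpers are hypothetical.

```python
# Threshold values are placeholders; the escalation target mirrors the
# bullets above.

THRESHOLDS = {"availability_pct": 99.5, "p95_latency_ms": 800,
              "harmful_content_rate": 0.001, "fairness_drift": 0.05}

def breaches(kpis: dict[str, float]) -> list[str]:
    """Return KPI names outside their agreed thresholds."""
    out = []
    if kpis["availability_pct"] < THRESHOLDS["availability_pct"]:
        out.append("availability_pct")
    if kpis["p95_latency_ms"] > THRESHOLDS["p95_latency_ms"]:
        out.append("p95_latency_ms")
    if kpis["harmful_content_rate"] > THRESHOLDS["harmful_content_rate"]:
        out.append("harmful_content_rate")
    if kpis["fairness_drift"] > THRESHOLDS["fairness_drift"]:
        out.append("fairness_drift")
    return out

def escalate(kpis: dict[str, float], open_capa_overdue: bool) -> str:
    broken = breaches(kpis)
    if broken or open_capa_overdue:
        return f"Escalate to Oversight Board: {broken or 'overdue CAPA'}"
    return "Within tolerance: record evidence and continue tiered monitoring"

print(escalate({"availability_pct": 99.2, "p95_latency_ms": 620,
                "harmful_content_rate": 0.0004, "fairness_drift": 0.02},
               open_capa_overdue=False))
```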

Integration with AIMS & risk register

  • Every supplier has a Risk ID in the AI Risk Register with owner, residual score, and compensating controls.
  • Supplier status informs release gates; red/amber vendors require additional approvals or mitigations (a release-gate sketch follows this list).
  • Supplier issues feed CAPA and management review to adjust appetite or resourcing.
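One way to wire supplier status into a release gate is sketched below. The SupplierRiskEntry fields follow the bullets above; the RED/AMBER/GREEN statuses and the release_gate rule are assumptions about how an AIMS might encode them.

```python
# Status values and the gate rule are illustrative, not taken from the standard.

from dataclasses import dataclass

@dataclass
class SupplierRiskEntry:
    risk_id: str                     # ID in the AI Risk Register
    owner: str
    residual_score: int              # e.g. 1 (low) to 25 (high) on a 5x5 matrix
    status: str                      # "GREEN" | "AMBER" | "RED"
    compensating_controls: list[str]

def release_gate(entry: SupplierRiskEntry, extra_approval: bool) -> bool:
    """Block releases that depend on red/amber suppliers without extra approval."""
    if entry.status == "GREEN":
        return True
    return extra_approval and bool(entry.compensating_controls)

entry = SupplierRiskEntry("RISK-042", "AIMS Manager", 16, "AMBER",
                          ["model version pinning", "content filter"])
print(release_gate(entry, extra_approval=True))   # passes only with sign-off
```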

Evidence & record keeping

  • Supplier Register: ID, name, tier, service, geography, data categories, DD score, review dates (a record-layout sketch follows this list).
  • DD pack: questionnaire, artefacts, findings, approvals, conditions/CAPA.
  • Contract repo: version-controlled with clause checklist; renewal and expiry alerts.
  • Monitoring evidence: KPI exports, dashboards, incident tickets, meeting minutes with actions.
  • Retention: ≥3 years (or per policy); immutable snapshots for audits.
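A possible record layout for the Supplier Register, assuming the fields listed above; the SupplierRecord class, the tier encoding and the example data are illustrative only.

```python
# Field names follow the register bullet above; example values are made up.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class SupplierRecord:
    supplier_id: str
    name: str
    tier: int                      # 1 = critical, 2 = important, 3 = low-impact
    service: str
    geography: str
    data_categories: list[str]
    dd_score: float                # composite due-diligence risk score
    last_review: date
    next_review: date
    evidence_links: list[str] = field(default_factory=list)

record = SupplierRecord(
    supplier_id="SUP-0007", name="ExampleVector Ltd", tier=1,
    service="Managed vector store (RAG)", geography="EU (Ireland)",
    data_categories=["pseudonymised customer text"], dd_score=0.62,
    last_review=date(2025, 9, 1), next_review=date(2025, 12, 1),
)
print(record.supplier_id, record.next_review.isoformat())
```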

KPIs & indicators

  • DD completion rate (% of active suppliers with a valid, current DD; a computation sketch follows this list).
  • Median CAPA closure time (supplier-origin incidents).
  • % suppliers with up-to-date attestations (ISO/SOC/AI statements).
  • Number of supplier changes notified on time versus late.
  • Rate of incidents attributable to third parties, per quarter.
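To show how the first two KPIs might be computed from register and CAPA data, here is a sketch; the dictionary fields (active, dd_valid_until, opened, closed) are assumptions about the underlying record layout.

```python
# Record fields are illustrative and mirror the register sketch earlier.

from datetime import date
from statistics import median

def dd_completion_rate(suppliers: list[dict]) -> float:
    """% of active suppliers whose due diligence is still valid."""
    active = [s for s in suppliers if s["active"]]
    if not active:
        return 0.0
    valid = [s for s in active if s["dd_valid_until"] >= date.today()]
    return round(100 * len(valid) / len(active), 1)

def median_capa_closure_days(capas: list[dict]) -> float:
    """Median closure time for CAPAs raised against supplier-origin incidents."""
    closed = [(c["closed"] - c["opened"]).days for c in capas if c.get("closed")]
    return median(closed) if closed else 0.0

suppliers = [{"active": True, "dd_valid_until": date(2026, 3, 1)},
             {"active": True, "dd_valid_until": date(2025, 1, 1)}]
capas = [{"opened": date(2025, 5, 1), "closed": date(2025, 5, 20)},
         {"opened": date(2025, 6, 1), "closed": date(2025, 7, 15)}]
print(dd_completion_rate(suppliers), median_capa_closure_days(capas))
```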

Worked examples

Example 1 — Foundation Model API Supplier
  • Service: EU-hosted LLM API.
  • Risks: harmful content, PII leakage, unannounced model updates.
  • Controls: allowlist egress, content filter, eval-gated releases, model version pinning, incident SLA 24h.
  • Evidence: SOC 2 Type II, bias test summary, monthly KPI exports.
  • Status: Tier 1 critical — annual audit + quarterly reviews.
Example 2 — Annotation Vendor
  • Service: Human labelling for safety and fairness datasets.
  • Risks: labeller bias, data leakage, weak access controls.
  • Controls: de-identification, secure VDI, RBAC, QC sampling, inter-annotator agreement ≥0.8.
  • Evidence: training logs, QC reports, access reviews, DPIA addendum.
  • Status: Tier 2 important — semi-annual reviews + spot audits.
Example 3 — Vector DB as a Service
  • Service: Managed vector store powering RAG.
  • Risks: residency drift, inference leakage, index poisoning.
  • Controls: region locks, tenant keys, signed writes, ingestion validation, query-time PII filter.
  • Evidence: config screenshots, encryption proofs, attack simulation logs.
  • Status: Tier 1 critical (elevated from the default Tier 2 for vector DBs because of data sensitivity and availability dependency) — quarterly reviews + an annual red-team exercise.

Common pitfalls & mitigation

  • One-time DD only: no monitoring → use tiered cadences and KPI hooks into management review.
  • Missing AI clauses: contracts lack audit/change/incident terms → use an AI clause checklist at legal review.
  • Shadow suppliers: teams trial tools without DD → enforce procurement gates and discovery scans.
  • Evidence sprawl: artefacts scattered → central Supplier Register with links and immutable snapshots.

Implementation checklist

  • Supplier Policy published; roles & workflow defined.
  • Supplier Register active with tiering and review dates.
  • DD questionnaire + scoring model in use; approvals recorded.
  • Contracts include AI clauses, DPAs, audit rights, incident SLAs.
  • Monitoring cadence live; KPIs reported; CAPA tracked.
  • Immutable evidence snapshots retained for audits.

© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 07 Nov 2025 • This page is general guidance, not legal advice.