Vendor Due Diligence & Contracts — Foundations
EU AI Act Compliance Foundations • EU/UK aligned
Key takeaways
- Contract terms must reflect real operational controls; require evidence rights and incident SLAs.
- Treat models and datasets as critical suppliers with security & safety attestations.
Overview & risk tiers
- Tier vendors by impact: Critical (model providers, data brokers), High (hosting, analytics), Standard (commodity tooling).
- Map each vendor to AI Act roles (Provider/Deployer/Importer/Distributor) and GDPR roles (Controller/Processor).
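The tiering and role mapping above can be sketched as a small data model. This is an illustrative sketch only: the tier names, review cadences, and field names are hypothetical examples of how one organisation might record the mapping, not anything prescribed by the AI Act or GDPR.

```python
from dataclasses import dataclass

# Hypothetical review cadences per tier (months); tune to your risk appetite.
TIER_REVIEW_MONTHS = {"Critical": 6, "High": 12, "Standard": 24}

@dataclass
class Vendor:
    name: str
    tier: str         # "Critical" / "High" / "Standard"
    ai_act_role: str  # "Provider" / "Deployer" / "Importer" / "Distributor"
    gdpr_role: str    # "Controller" / "Processor"

    def review_cadence_months(self) -> int:
        # Higher-impact vendors get more frequent re-assessment.
        return TIER_REVIEW_MONTHS[self.tier]

# Example: a hosted model API is typically a Critical-tier Provider/Processor.
model_api = Vendor("ExampleModelCo", "Critical", "Provider", "Processor")
cadence = model_api.review_cadence_months()  # Critical tier -> review twice a year
```

Recording the AI Act and GDPR roles on the same record keeps the two mappings from drifting apart as vendors change scope.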
Due diligence questionnaire (DDQ)
- Model safety (evals, guardrails, jailbreak resistance), data governance, privacy engineering, logging/export.
- Change management, incident history, third-party audits, secure SDLC, content moderation pipeline.
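A DDQ can be tracked as structured data rather than a document, which makes gaps and scores auditable. A minimal sketch, assuming hypothetical domain and question names (these are not a standard schema):

```python
# Illustrative DDQ structure; domains/questions mirror the bullets above.
DDQ_DOMAINS = {
    "model_safety": ["evals", "guardrails", "jailbreak_resistance"],
    "data_governance": ["lineage", "retention", "export"],
    "privacy": ["dpia", "pseudonymisation", "dsr_support"],
    "sdlc": ["change_mgmt", "third_party_audit", "secure_sdlc"],
}

def ddq_score(answers: dict) -> float:
    """Fraction of questions with a satisfactory (True) answer."""
    questions = [q for qs in DDQ_DOMAINS.values() for q in qs]
    return sum(1 for q in questions if answers.get(q)) / len(questions)

# Example: everything satisfactory except jailbreak resistance evidence.
answers = {q: True for qs in DDQ_DOMAINS.values() for q in qs}
answers["jailbreak_resistance"] = False
score = ddq_score(answers)  # 11 of 12 questions satisfied
```

A flat score is a triage aid, not a decision: a single failed question in model safety may matter more than several in lower-risk domains.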
Evidence & attestations
- Model cards, security whitepapers, SOC 2 / ISO 27001 certificates, DPAs, bias & robustness reports, residual risk statements.
- Right to review redacted evals; sandbox access for validation; named technical contacts.
Security & privacy clauses
- Breach notification windows; encryption standards; secret management; logging & retention; data subject request (DSR) support.
- Prohibit training on your data without explicit consent; require pseudonymisation where feasible.
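These clause minimums can be checked mechanically against a vendor's offered terms. A minimal sketch, assuming hypothetical policy thresholds and field names (the 72-hour and 365-day figures are example internal minimums, not legal requirements):

```python
# Example internal minimums; adjust to your own policy baseline.
POLICY_MINIMUMS = {
    "breach_notification_hours": 72,  # vendor must notify within <= 72h
    "log_retention_days": 365,        # vendor must retain logs >= 365 days
}

def clause_gaps(offered: dict) -> list:
    """Return the clause names where the offered terms miss policy minimums."""
    gaps = []
    if offered.get("breach_notification_hours", float("inf")) > POLICY_MINIMUMS["breach_notification_hours"]:
        gaps.append("breach_notification_hours")
    if offered.get("log_retention_days", 0) < POLICY_MINIMUMS["log_retention_days"]:
        gaps.append("log_retention_days")
    # Training on customer data must be explicitly excluded, not merely unstated.
    if not offered.get("no_training_on_customer_data", False):
        gaps.append("no_training_on_customer_data")
    return gaps

# Example: a vendor offering 96h notification fails the 72h minimum.
gaps = clause_gaps({"breach_notification_hours": 96,
                    "log_retention_days": 400,
                    "no_training_on_customer_data": True})
```

Treating missing terms as failures (the defaults above) matches the negotiating stance in this section: silence on training rights is a gap, not an implied protection.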
AI-specific clauses
- Use constraints; minimum safety guardrails; provenance/watermarking; explicit terms on synthetic data use and any liability waivers.
SLAs & incident response
- Availability & latency; safety incident SLAs; emergency kill-switch; regulator communication support.
IP & licensing
- Ownership of outputs; indemnities for infringement; training data licences; open-source component usage.
Data rights & retention
- Data residency; backups; deletion SLAs; post-termination data return/extract; audit logs availability.
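Deletion SLAs are easier to enforce when the deadline is computed, not remembered. A minimal sketch, assuming a hypothetical 30-day negotiated deletion window (the figure is illustrative, not statutory):

```python
from datetime import date, timedelta

# Hypothetical negotiated SLA: vendor must delete within 30 days of termination.
DELETION_SLA_DAYS = 30

def deletion_deadline(termination: date) -> date:
    """Date by which the vendor must certify deletion of all customer data."""
    return termination + timedelta(days=DELETION_SLA_DAYS)

def is_overdue(termination: date, today: date) -> bool:
    return today > deletion_deadline(termination)

deadline = deletion_deadline(date(2025, 11, 5))  # -> 2025-12-05
```

The same calculation extends naturally to backup purge windows, which often run longer than primary deletion and should be tracked separately.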
Audit & oversight
- Right to audit or independent assurance; sampling rights; evidence snapshots per release.
Subprocessors & flow-downs
- Approve subprocessors; require equivalent protections; notify on changes; maintain an up-to-date list.
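Subprocessor change control depends on diffing the vendor's disclosed list against the last approved one, so additions can trigger the notification/objection clause. A minimal sketch with hypothetical vendor names:

```python
def subprocessor_changes(previous: set, current: set) -> dict:
    """Diff two disclosed subprocessor lists; additions need re-approval."""
    return {
        "added": sorted(current - previous),
        "removed": sorted(previous - current),
    }

# Example: vendor adds a new labeling subcontractor between disclosures.
approved = {"CloudHost Ltd", "AnalyticsCo"}
disclosed = {"CloudHost Ltd", "AnalyticsCo", "NewLabelingVendor"}
changes = subprocessor_changes(approved, disclosed)
```

Running this check against each periodic disclosure (or the vendor's change-notification feed) turns "maintain an up-to-date list" from a paperwork task into an automated control.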
Exit & reversibility
- Plan data/package exports; transition assistance; escrow for critical artifacts; staged cutover runbook.
Checklist & red flags
- No model safety evidence; train-on-your-data by default; weak incident SLAs; no audit rights; opaque subprocessors.
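The red flags above can be expressed as a screening pass over the DDQ and contract facts. A minimal sketch, assuming hypothetical field names (not a standard schema); note the defaults are deliberately pessimistic, so missing answers raise flags:

```python
# Each red flag is a predicate over the collected vendor facts.
RED_FLAGS = {
    "no_safety_evidence": lambda v: not v.get("safety_evals_provided"),
    "trains_on_customer_data_by_default": lambda v: v.get("trains_on_customer_data", True),
    "no_audit_rights": lambda v: not v.get("audit_rights"),
    "opaque_subprocessors": lambda v: not v.get("subprocessor_list_published"),
}

def screen(vendor_facts: dict) -> list:
    """Return the red flags raised by a vendor's DDQ/contract facts."""
    return [flag for flag, check in RED_FLAGS.items() if check(vendor_facts)]

# Example: strong on safety and audit, but no published subprocessor list.
flags = screen({"safety_evals_provided": True,
                "trains_on_customer_data": False,
                "audit_rights": True,
                "subprocessor_list_published": False})
```

Any raised flag should route the vendor to remediation or rejection rather than simply lowering a score.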
© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 • 05 Nov 2025 • This page is general guidance, not legal advice.
Related Articles
Implementation Checklists — Foundations
Provider vs Deployer — Responsibilities — Foundations
Governance, Evidence & Records — Foundations
Human Oversight Patterns — Foundations