Transparency, Records & Technical Documentation (EU AI Act aligned)

Zen AI Governance — Knowledge Base • EU/UK alignment • Updated 08 Nov 2025 • www.zenaigovernance.com

ISO/IEC 42001 – AIMS • Transparency & Records • EU/UK aligned
Key takeaways
  • Transparency is both user-facing (clear notices & rights) and regulatory-facing (technical documentation & evidence).
  • High-risk systems require structured technical documentation, records, and traceable logs that auditors can sample.
  • Documentation must remain in sync with risk register, human oversight design, and post-market monitoring.

Overview & principles

ISO/IEC 42001 requires organisations to maintain documented information that proves policies and controls are in place and effective. The EU AI Act adds prescriptive obligations for technical documentation, record-keeping, and transparency notices for users and authorities. This article defines what to publish, what to retain internally, and how to keep it audit-ready.

User-facing transparency

  • AI use notice: Clearly inform when users interact with or are subject to AI output; provide a plain-language summary of purpose, limitations, and human review options.
  • Rights & recourse: Instructions for complaints, human review requests (where applicable), and accessibility support.
  • Change notifications: When material model changes affect users (accuracy, fairness, scope), post release notes and update notices.
  • Channel placement: surface notices in the product UI, KB articles, the privacy notice, and onboarding flows; avoid dark patterns.

Technical documentation (AI Act)

Maintain a living dossier for each AI system. For high-risk systems, include:

  • System description: intended purpose, use scope, users, affected populations, operating context, limitations.
  • Architecture overview: components, data sources, training/evaluation pipelines, runtime controls, third-party services.
  • Training & evaluation: datasets (provenance, licences), data cleaning, split strategy, metrics, baseline comparisons.
  • Risk management alignment: identified risks, treatments, residuals, references to the AI Risk Register.
  • Human oversight design: oversight pattern (HITL/HOTL), thresholds, UI controls, escalation, rollback procedure.
  • Post-market monitoring (PMM): monitoring metrics, drift detectors, incident intake, trigger thresholds.
  • Security posture: threat model, hardening, prompt-injection protections, secrets and egress controls.
  • Change control: versioning, release gates, eval gates, rollback plans; date-stamped change log.
  • Compliance mapping: clause mapping to ISO 42001, AI Act obligations, GDPR/UK GDPR linkages (e.g., DPIA).
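The PMM trigger thresholds mentioned in the dossier can be as simple as a named-threshold check run against collected metrics. A minimal sketch follows; the metric names and limits are illustrative, not drawn from the AI Act:

```python
# Sketch: post-market monitoring trigger check (illustrative thresholds).
# Metric names and limits are hypothetical; real values come from the PMM plan.

PMM_THRESHOLDS = {
    "hallucination_rate": 0.05,   # alert if >5% of sampled outputs
    "fairness_delta": 1.10,       # alert if disparity ratio exceeds 1.10
    "drift_score": 0.20,          # alert if embedding drift exceeds 0.20
}

def pmm_alerts(metrics: dict) -> list:
    """Return names of metrics that breach their PMM trigger threshold."""
    return [
        name for name, limit in PMM_THRESHOLDS.items()
        if metrics.get(name, 0.0) > limit
    ]

alerts = pmm_alerts({"hallucination_rate": 0.032, "drift_score": 0.27})
print(alerts)  # only drift_score breaches its threshold
```

Breaches returned by a check like this would feed the incident intake and, where material, trigger an update to the technical documentation.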

Records & logs

  • Audit logs: inputs/outputs (appropriately minimised), decisions, overrides, timestamps, actors, versions.
  • Training records: who trained/approved, datasets used, licences, model cards, evaluation artefacts.
  • Operational records: incidents, CAPA, change approvals, rollback events, performance dashboards.
  • Supplier records: DD questionnaires, attestations, SLAs, sub-processor updates, security reports.
  • Retention: define legal/contractual minima (e.g., the AI Act requires providers to retain technical documentation for 10 years after placing a high-risk system on the market, and logs for at least six months) and secure destruction protocols.
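The audit-log fields above (timestamps, actors, versions, overrides, minimised I/O) are commonly captured as structured JSON lines so auditors can sample specific periods. A minimal sketch, with illustrative field names:

```python
import json
from datetime import datetime, timezone

def audit_record(actor: str, system_version: str, decision: str,
                 human_override: bool, io_digest: str) -> str:
    """Build one minimised, structured audit-log entry as a JSON line.

    io_digest stands in for the (minimised) input/output payload, e.g. a
    hash or redacted summary, so the log avoids storing raw personal data.
    """
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "system_version": system_version,
        "decision": decision,
        "human_override": human_override,
        "io_digest": io_digest,
    }
    return json.dumps(entry)

line = audit_record("agent-042", "2.3", "approved", False, "sha256:ab12cd")
print(line)
```

Keeping the schema stable across releases makes period-based sampling straightforward at audit time.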

Model Cards & System Cards

Adopt concise yet auditable artefacts to summarise model/system behaviour:

  • Model Card: purpose, data summary, key metrics (accuracy, calibration, fairness), risks/limitations, eval date, owner, version.
  • System Card: end-to-end view covering model(s), retrieval, policy, and oversight; user segments; safety controls; monitoring KPIs.
  • Publish redacted versions externally; keep full cards internally with evidence links.
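One way to keep a single internal source of truth while publishing redacted external copies is to hold the card as structured data. A sketch, with a field set mirroring the bullet above (names are illustrative):

```python
from dataclasses import dataclass, asdict, field

@dataclass
class ModelCard:
    """Internal model card; external copies drop the evidence links."""
    name: str
    version: str
    purpose: str
    key_metrics: dict
    risks_limitations: list
    eval_date: str
    owner: str
    evidence_links: list = field(default_factory=list)  # internal only

    def redacted(self) -> dict:
        """Public view: the full card minus internal evidence links."""
        card = asdict(self)
        card.pop("evidence_links")
        return card

card = ModelCard(
    name="ZEN-Chat-21", version="1.4",
    purpose="Assist agents with policy answers; not legal advice.",
    key_metrics={"accuracy_top1": 0.78, "hallucination": 0.032},
    risks_limitations=["May misinterpret edge-case policies"],
    eval_date="2025-09-02", owner="AI Product Owner",
    evidence_links=["/evidence/eval-run-118"],
)
print(card.redacted()["name"])
```

Generating the public card from the internal one guarantees the two never drift apart.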

Explainability statements

  • Plain-English description of how the system reaches outputs and what uncertainty means for the user.
  • Scope the limits: when the model may fail; when human review is required; what features drive outcomes (where appropriate).
  • Provide channel-appropriate depth: short UI hints, deeper KB article, and technical appendix in the dossier.

Data governance disclosures

  • Provenance & licensing for datasets; transformations; de-identification; minimisation & purpose limits.
  • Personal data handling: lawful basis, DPIA summary, DSR handling, cross-border safeguards.
  • Content moderation & safety datasets: curation, reviewer training, quality controls, harmful-content filters.

Traceability & evidence links

  • Each documentation section references the Risk Register item(s) and CAPA IDs.
  • Oversight UI screenshots and thresholds embedded with version tags.
  • PMM metrics (drift, bias, incident rates) linked into dashboards with export snapshots.

Storage, retention & access

  • Authoritative store: a version-controlled repository (e.g., WorkDrive/SharePoint + Git for tech docs).
  • Immutability: PDF snapshots for audits (hash or e-signature); keep editable sources alongside.
  • Access control: RBAC; read-only for most; approver workflow for release.
  • Index: a “Dossier Index” page listing artefacts, owners, and last review dates.
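The hash-based immutability mentioned above can be as simple as recording a SHA-256 digest per snapshot in the Dossier Index and recomputing it at audit time. A minimal sketch (the file contents are a stand-in):

```python
import hashlib

def snapshot_digest(data: bytes) -> str:
    """SHA-256 digest of a snapshot's bytes, recorded in the Dossier Index."""
    return hashlib.sha256(data).hexdigest()

def verify_snapshot(data: bytes, recorded: str) -> bool:
    """At audit time: recompute the digest and compare with the record."""
    return snapshot_digest(data) == recorded

pdf_bytes = b"%PDF-1.7 example snapshot contents"  # stand-in for a real file
digest = snapshot_digest(pdf_bytes)
print(verify_snapshot(pdf_bytes, digest))         # True if untampered
print(verify_snapshot(pdf_bytes + b"x", digest))  # False after modification
```

An e-signature adds attribution on top of this; the digest alone only proves the bytes have not changed.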

Templates & examples

Template — Technical Documentation Index
System: Customer Support Assistant (RAG + LLM)
Owner: AI Product Owner   AIMS ID: SYS-CSA-001   Version: 2.3 (2025-10-28)
1. Purpose & Scope
2. Architecture Diagram & Data Flows
3. Training & Evaluation (datasets, metrics, results, fairness audit)
4. Risk Register Mapping (IDs, scores, treatments)
5. Human Oversight Design (thresholds, UI, escalation)
6. Security & Threat Model (prompt-injection, egress controls)
7. PMM Plan (metrics, drift monitors, incident intake)
8. Change Log (releases, rollback tests, approvals)
9. Transparency Artefacts (user notice, explainability)
10. Evidence Index (links to logs, dashboards, snapshots)
  
Template — User-Facing AI Notice (short)
You’re interacting with an AI assistant. It may make mistakes.
Important decisions are reviewed by a human when needed.
Learn more: /kb/ai-assistant-transparency  |  Request human review: /support
  
Example — Model Card (excerpt)
Model: ZEN-Chat-21  |  Version: 1.4 (2025-09-02)
Purpose: Assist agents with policy answers; not for legal advice to end users.
Data: Internal policy corpus + licensed industry docs (see licenses in TD §3.1).
Key Metrics: Accuracy@Top1 78%, Hallucination 3.2%, Toxicity <0.5%, Fairness Δ ≤1.05.
Risks/Limitations: May misinterpret edge-case policies; mitigated via retrieval constraints + oversight.
  

Common pitfalls & mitigation

  • Static documents: treat docs as living artefacts; schedule reviews and link to change control.
  • Evidence gaps: keep export snapshots (PDF/CSV) with dates; auditors sample specific periods.
  • Unclear ownership: assign an owner per artefact and show on the Dossier Index.
  • Transparency ≠ marketing: notices must be candid about limitations and rights.

Implementation checklist

  • Technical documentation index created per AI system with owners and review dates.
  • User-facing transparency notices published and discoverable.
  • Logs/records retention policy implemented and enforced.
  • Model/System Cards completed; public vs internal versions defined.
  • Docs linked to Risk Register, Oversight design, and PMM dashboards.
  • Immutable snapshots archived for audits and surveillance.

© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 08 Nov 2025 • This page is general guidance, not legal advice.

    • Related Articles

    • Incident Management & Post-Market Monitoring (EU AI Act aligned)
    • Competence & Training Framework (roles, curricula, records, effectiveness)
    • Unified Risk Register Template (ISO + NIST + EU AI Act)
    • Human Oversight (EU/UK aligned)
    • Certification Preparation & Audit Readiness Guide (EU/UK aligned)