Logging & Traceability Requirements — Data Capture, Retrieval & Evidence Linkage
EU AI Act Compliance Traceability & Audit Logs
Key takeaways
- Every AI system must retain logs that enable complete traceability of data, models, and decisions.
- Logs must be structured, tamper-resistant, timestamped, and linked to evidence records (EV-IDs).
- Retention periods and retrieval formats must support audit requests from EU or UK regulators.
Purpose & objectives
Logging and traceability ensure that the behaviour of AI systems can be explained, audited, and reproduced. They provide the forensic foundation for incident response, bias analysis, and post-market reviews.
Logging architecture overview
- Data Ingestion Logs (EV-DIL): Capture source, timestamp, checksum, and DPIA link for every data batch.
- Model Version Ledger (EV-MVL): Records model ID, hash, training code commit, hyperparameters, and validation score.
- Decision Trace Logs (EV-DTL): Stores inputs, outputs, model version, confidence scores, and human override flag.
- System Audit Trail (EV-SAT): Tracks configuration changes, API calls, and administrative access events.
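The ingestion log described above can be sketched in a few lines. This is an illustrative example only, assuming the EV-DIL fields listed (source, timestamp, checksum, DPIA link); the function name and field keys are not a mandated format.

```python
# Illustrative Data Ingestion Log (EV-DIL) entry builder. The checksum
# covers the raw batch bytes so the same batch always yields the same hash.
import hashlib
from datetime import datetime, timezone

def ingestion_log_entry(source: str, batch: bytes, dpia_ref: str) -> dict:
    """Build one EV-DIL record for a data batch (field names are illustrative)."""
    return {
        "source": source,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "checksum": "sha256:" + hashlib.sha256(batch).hexdigest(),
        "dpia_link": dpia_ref,
    }

entry = ingestion_log_entry("crm-export", b"id,age\n1,42\n", "EV-DPIA-0007")
print(entry["checksum"][:7])  # sha256:
```

Storing the checksum alongside the DPIA reference lets an auditor confirm both the integrity and the lawful basis of each batch from a single record.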
Dataset & training traceability
| Trace Element | Description | Evidence Link |
|---|---|---|
| Dataset version | Dataset ID, composition, licence | EV-DAT-#### |
| Data source | Original provider, lawful basis, collection date | EV-DPIA-#### |
| Bias report | Pre/post training fairness audit | EV-FAI-#### |
| Training run | Script hash, library version, seed value | EV-TRN-#### |
Model versioning & evidence linkage
- Each model deployment is assigned a globally unique Model ID and commit hash.
- Version records include parent model, training dataset version, and evaluation summary.
- All evaluation reports link to Evidence Index entries in the TDF.
- A model lineage chain diagram is maintained to show evolution across releases.
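The lineage chain above can be represented as parent links between version records. A minimal sketch, assuming illustrative model IDs and field names drawn from the EV-MVL description (this is not a prescribed schema):

```python
# Minimal model version ledger entry with a parent link; walking the
# parent chain reconstructs the lineage across releases.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelVersion:
    model_id: str                              # globally unique Model ID
    commit_hash: str                           # training code commit
    dataset_version: str                       # e.g. an EV-DAT reference
    parent: Optional["ModelVersion"] = None    # previous release, if any

    def lineage(self) -> list[str]:
        """Return model IDs from this release back to the root."""
        node, chain = self, []
        while node is not None:
            chain.append(node.model_id)
            node = node.parent
        return chain

v1 = ModelVersion("M-CLF-01-2024", "a1b2c3", "EV-DAT-0001")
v2 = ModelVersion("M-CLF-01-2025", "d4e5f6", "EV-DAT-0002", parent=v1)
print(v2.lineage())  # ['M-CLF-01-2025', 'M-CLF-01-2024']
```

Because every record carries its dataset version and commit hash, the chain links each release back to its training data and code without a separate lookup.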
Runtime logging & observability
- Real-time capture of input/output pairs with user ID (pseudonymised) and session timestamp.
- Confidence scores and decision rationale saved to structured JSON logs.
- Log aggregation through central SIEM (Google Chronicle / Splunk) for alerting.
- Monthly sampling review to detect drift and bias shift.
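The runtime capture described above (pseudonymised user ID, input hash, confidence, structured JSON output) can be sketched as follows. The function names and the salt value are assumptions for illustration, not part of any mandated interface:

```python
# Sketch of structured decision-trace (EV-DTL) logging as JSON Lines.
import hashlib
import json
from datetime import datetime, timezone

def pseudonymise(user_id: str, salt: str = "example-salt") -> str:
    # One-way pseudonym so raw user identifiers never enter the log stream.
    return "anon-" + hashlib.sha256((salt + user_id).encode()).hexdigest()[:8]

def decision_log(model_id: str, inputs: dict, output: str,
                 confidence: float, user_id: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        # Hash the canonicalised input rather than storing it verbatim.
        "input_hash": "sha256:" + hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output_summary": f"decision={output};confidence={confidence:.2f}",
        "user_id": pseudonymise(user_id),
    }
    return json.dumps(record)  # one JSON object per line (JSON Lines)

print(decision_log("M-CLF-01-2025", {"age": 42}, "approve", 0.92, "u-123"))
```

One record per line keeps the stream append-friendly and easy to ship to a SIEM for aggregation and alerting.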
Security & data protection
- Logs encrypted at rest (AES-256) and in transit (TLS 1.3).
- Access via RBAC with least-privilege principle; admin actions double-logged.
- Retention policy based on risk level (5–10 years for high-risk systems).
- Integrity verified using SHA-256 hashes and append-only storage.
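One common way to combine SHA-256 hashing with append-only storage is a hash chain, where each record's hash covers the previous hash, so any in-place edit invalidates every later entry. The sketch below is illustrative; the function names and the all-zero genesis value are assumptions, not a mandated scheme:

```python
# Illustrative append-only integrity check via a SHA-256 hash chain.
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash the previous hash together with the canonicalised record."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(records: list[dict], hashes: list[str], genesis: str = "0" * 64) -> bool:
    """Recompute the chain and compare against the stored hashes."""
    prev = genesis
    for rec, h in zip(records, hashes):
        if chain_hash(prev, rec) != h:
            return False
        prev = h
    return True

records = [{"log_id": "EV-DTL-1"}, {"log_id": "EV-DTL-2"}]
hashes, prev = [], "0" * 64
for rec in records:
    prev = chain_hash(prev, rec)
    hashes.append(prev)

print(verify(records, hashes))       # True
records[0]["log_id"] = "tampered"    # edit a stored record in place
print(verify(records, hashes))       # False — the chain no longer matches
```

A monthly integrity run can simply re-verify the chain and file the result as evidence.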
Retrieval & audit export
- Standard export format: JSON Lines + CSV summary.
- Each record includes link to corresponding EV-ID and TDF section.
- Audit API enables filtered exports (e.g., date range, user segment, incident type).
- Retrieval requests tracked in access log with DPO approval stamp.
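A date-range export over JSON Lines records, as the Audit API filtering above describes, can be sketched like this. The sample records and function name are illustrative; only the field names follow the Log Record Schema:

```python
# Sketch of a filtered audit export over JSON Lines records.
import json

RECORDS = [
    '{"log_id": "EV-DTL-2025-0001", "timestamp": "2025-10-01T09:00:00Z", "risk_level": "low"}',
    '{"log_id": "EV-DTL-2025-0456", "timestamp": "2025-11-17T12:34:56Z", "risk_level": "medium"}',
]

def export(lines, start: str, end: str):
    """Yield parsed records whose timestamp falls within [start, end]."""
    for line in lines:
        rec = json.loads(line)
        # ISO 8601 UTC timestamps sort correctly as plain strings.
        if start <= rec["timestamp"] <= end:
            yield rec

hits = list(export(RECORDS, "2025-11-01T00:00:00Z", "2025-11-30T23:59:59Z"))
print([r["log_id"] for r in hits])  # ['EV-DTL-2025-0456']
```

The same filter pattern extends to user segment or incident type by matching additional fields.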
Templates & schemas
A) Log Record Schema (JSON)
{
  "log_id": "EV-DTL-2025-0456",
  "timestamp": "2025-11-17T12:34:56Z",
  "model_id": "M-CLF-01-2025",
  "input_hash": "sha256:ab34...",
  "output_summary": "decision=approve;confidence=0.92",
  "user_id": "anon-9482",
  "oversight_flag": false,
  "risk_level": "medium",
  "linked_evidence": ["EV-TRN-052", "EV-RMS-011"]
}
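Before a record is written, it can be checked against the schema's required keys. A minimal sketch, with the key list taken from the template above (the validation function itself is illustrative):

```python
# Minimal presence check against the Log Record Schema's fields.
REQUIRED = {
    "log_id", "timestamp", "model_id", "input_hash",
    "output_summary", "user_id", "oversight_flag",
    "risk_level", "linked_evidence",
}

def validate(record: dict) -> bool:
    """Raise ValueError if any required schema field is missing."""
    missing = REQUIRED - record.keys()
    if missing:
        raise ValueError(f"log record missing fields: {sorted(missing)}")
    return True

record = {
    "log_id": "EV-DTL-2025-0456",
    "timestamp": "2025-11-17T12:34:56Z",
    "model_id": "M-CLF-01-2025",
    "input_hash": "sha256:ab34...",
    "output_summary": "decision=approve;confidence=0.92",
    "user_id": "anon-9482",
    "oversight_flag": False,
    "risk_level": "medium",
    "linked_evidence": ["EV-TRN-052", "EV-RMS-011"],
}
print(validate(record))  # True
```

Rejecting malformed records at write time is cheaper than discovering gaps during an audit export.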
B) Log Retention Register (CSV headers)
System_ID,Log_Type,Storage_Location,Encryption,Retention_Years,Owner,Next_Review,EV_ID
Framework alignment
| Framework | Reference | Relevance |
|---|---|---|
| EU AI Act | Article 12 & Annex IV §2.8 | Defines logging and traceability requirements for high-risk AI. |
| ISO/IEC 42001 | §9.1 & §10.2 | Monitoring, measurement and non-conformity record-keeping. |
| NIST AI RMF | Measure & Manage | Operational traceability and record integrity controls. |
| UK DSIT Framework | Principle 5 | Accountability & auditability for AI decisions. |
Implementation checklist
- Logging architecture documented and approved by CISO & Compliance Lead.
- EV-ID cross-linking enabled across data, model, decision, and incident logs.
- Audit API deployed with export filtering and retention alerts.
- Monthly integrity verification reports stored in Evidence Repository.
- Traceability tested end-to-end during ISO 42001 internal audit.
© Zen AI Governance UK Ltd • Regulatory Knowledge • v1 17 Nov 2025 • This page is general guidance, not legal advice.
Related Articles
Internal Audit & Evidence Management (EU/UK aligned)
AI Audit & Evidence Management Policy
Master AI Policy — Purpose, Roles, Requirements & Enforcement
Obligations for High-Risk AI Systems — Lifecycle Overview & Requirements
Evidence Index Structure (SharePoint / Drive / Confluence)
Zen AI Governance — Knowledge Base • Templates & Toolkits • Updated 20 Nov 2025 www.zenaigovernance.com ↗ Evidence Index Structure — AIMS / ISO 42001 / EU AI Act Evidence Repository Template ISO 42001 / EU AI Act Alignment + On this page On this page ...