DATE POSTED: August 11, 2025
EU AI Act Articles 18, 19 and 20: Understanding Document Keeping, Automatically Generated Logs, and Corrective Actions and the Duty of Information

The European Union (EU) Artificial Intelligence (AI) Act (Regulation (EU) 2024/1689) is the first comprehensive regulation of AI. It establishes a proactive framework for regulating AI systems and minimising the risks they pose to the health, safety, and fundamental rights of end users. Transparency and accountability are core requirements of this framework, and it imposes specific obligations on providers of high-risk AI systems covering documentation keeping (Article 18), automatically generated logs (Article 19), and corrective actions and the duty of information (Article 20).


Conceptually, Articles 18 and 19 create a framework for long-term traceability:

  • Documentation keeping (Article 18) emphasizes preserving static records of the AI system’s design, development, and certification processes. This acts as a “paper trail” for retrospective evaluation.
  • Automatically generated logs (Article 19) focus on dynamic operational data, enabling real-time monitoring and post-deployment analysis to detect anomalies or ensure ongoing compliance.

Article 20 enforces a “fail-safe” mechanism:

  • It requires providers to act swiftly to rectify issues, preventing or minimizing potential harms to individuals, society, or fundamental rights (e.g., discrimination, safety risks).
  • It emphasizes collaboration and information sharing among stakeholders like deployers, distributors, authorities, and notified bodies, aligning with the Act’s ecosystem-wide accountability approach.
  • The provisions are triggered by the provider’s awareness or reasonable suspicion of issues, promoting self-regulation while enabling regulatory intervention.
  • Risks are defined broadly under Article 79(1), including health, safety, fundamental rights, or environmental impacts, with ties to enforcement mechanisms like market surveillance.

Article 18: Documentation Keeping

This article mandates that providers maintain comprehensive technical and compliance-related documentation for an extended period after the AI system enters the market. The goal is to facilitate regulatory inspections and to ensure the system’s lifecycle is auditable in case of disputes or harms. It underscores the EU’s emphasis on “traceable AI” by requiring records that detail how the system was built, tested, and certified. For providers facing dissolution (e.g., bankruptcy), Member States must establish mechanisms to preserve access, preventing loss of critical information. Special rules apply to financial institutions to align with their sector-specific governance obligations, promoting efficiency.

Key Elements:

  • Scope and Applicability: Applies to providers of high-risk AI systems. Documentation must be kept “at the disposal” of national competent authorities, meaning it should be readily accessible upon request but not automatically submitted.
  • Retention Period: 10 years from the date the high-risk AI system is placed on the market or put into service. This long duration reflects the potential long-term impacts of AI systems.
  • Documents to Retain: Providers must keep the categories of documentation listed in Article 18(1): the technical documentation (Article 11), documentation concerning the quality management system (Article 17), documentation on changes approved by notified bodies (where applicable), decisions and other documents issued by notified bodies (where applicable), and the EU declaration of conformity (Article 47). A minimal retention-registry sketch follows this list.
  • Contingency for Provider Dissolution: If a provider or its authorized representative goes bankrupt or ceases operations before the 10-year period ends, each EU Member State must define conditions (e.g., transfer to a custodian or digital archiving) to ensure the documentation remains available to authorities.
  • Special Provisions for Financial Institutions: If the provider is a financial institution (e.g., banks regulated under EU laws like CRD IV or MiFID II), the technical documentation must be integrated into their existing internal governance records, avoiding duplicate storage systems.
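
To make the retention obligation concrete, below is a minimal Python sketch of how a provider might register Article 18 documentation and compute the ten-year retention deadline. The class, field, and helper names are hypothetical illustrations, not part of the Act or of any official tooling.

```python
from dataclasses import dataclass
from datetime import date

# Document categories named in Article 18(1) (paraphrased).
ARTICLE_18_CATEGORIES = (
    "technical documentation (Article 11)",
    "quality management system documentation (Article 17)",
    "changes approved by notified bodies, where applicable",
    "decisions and other documents issued by notified bodies, where applicable",
    "EU declaration of conformity (Article 47)",
)

RETENTION_YEARS = 10  # Article 18(1): 10 years after placing on the market / putting into service


def add_years(d: date, years: int) -> date:
    """Add whole years, clamping 29 February to 28 February when needed."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, day=28)


@dataclass
class DocumentationRecord:
    system_id: str
    category: str              # one of ARTICLE_18_CATEGORIES
    placed_on_market: date     # or the date the system was put into service
    storage_location: str      # archive path or document-management reference

    def retain_until(self) -> date:
        return add_years(self.placed_on_market, RETENTION_YEARS)

    def must_be_available(self, today: date) -> bool:
        # Documentation stays "at the disposal" of national competent
        # authorities until the retention deadline has passed.
        return today <= self.retain_until()


# Example: documentation for a system placed on the market on 1 March 2025
# must remain available until 1 March 2035.
record = DocumentationRecord("acme-hr-screener", ARTICLE_18_CATEGORIES[0],
                             date(2025, 3, 1), "dms://article-11/acme-hr-screener")
assert record.retain_until() == date(2035, 3, 1)
```

In practice such a registry could sit inside the provider’s quality management system (Article 17) rather than stand alone, which also helps financial institutions fold it into their existing governance records.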

Article 19: Automatically Generated Logs

This article requires providers to retain operational logs automatically produced by high-risk AI systems, focusing on data that captures the system’s real-world performance. Logs serve as a “black box recorder” for AI, helping to reconstruct events, identify biases or failures, and demonstrate compliance during audits. The retention period is flexible but has a minimum threshold, balancing utility with data minimization principles (especially under GDPR). Providers only need to keep logs “under their control,” acknowledging that some data might be managed by deployers or third parties. Like Article 18, it includes adaptations for financial institutions to harmonize with sector regulations.

Key Elements:

  • Scope and Applicability: Applies to providers of high-risk AI systems. Logs must be those automatically generated (not manually created) and only to the extent the provider controls them (e.g., if a deployer manages the system post-sale, the provider’s obligation may be limited).
  • Type of Logs: Refers specifically to logs outlined in Article 12(1), which include data on inputs, outputs, decisions, errors, and performance metrics to enable monitoring and traceability.
  • Retention Period: Must be “appropriate to the intended purpose” of the AI system, with a minimum of 6 months. This can be extended or shortened by other Union or national laws, particularly those on personal data protection (e.g., GDPR’s storage limitation principle). The flexibility allows tailoring to risk levels (e.g., longer for AI in healthcare than in HR screening). A minimal retention check is sketched after this list.
  • Special Provisions for Financial Institutions: Logs must be maintained as part of the institution’s existing documentation under Union financial services law (e.g., integrating with audit trails required by Basel III or PSD2).
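
As a rough illustration of the flexible-but-floored retention rule, here is a minimal Python sketch of a log record and a retention check with a six-month floor. The purpose-to-period mapping is a hypothetical example; actual periods must be justified per system and may be overridden by Union or national law such as the GDPR.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Article 19: logs are kept for a period appropriate to the intended purpose,
# and for at least six months unless other Union or national law provides otherwise.
MINIMUM_RETENTION = timedelta(days=183)   # rough six-month floor


@dataclass
class LogRecord:
    """One automatically generated event, in the spirit of Article 12(1)."""
    system_id: str
    timestamp: datetime
    event_type: str      # e.g. "input_received", "output_produced", "error"
    payload_ref: str     # pointer to the stored input/output, not the data itself


def retention_period(intended_purpose: str) -> timedelta:
    """Return a retention period appropriate to the intended purpose.

    The mapping below is a hypothetical example, not a regulatory default.
    """
    per_purpose = {
        "medical_triage": timedelta(days=3 * 365),
        "cv_screening": timedelta(days=365),
    }
    return max(per_purpose.get(intended_purpose, MINIMUM_RETENTION), MINIMUM_RETENTION)


def may_delete(record: LogRecord, intended_purpose: str, now: datetime) -> bool:
    """A provider only deletes logs under its control once the period has elapsed."""
    return now - record.timestamp > retention_period(intended_purpose)
```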

Article 20(1): Corrective Actions for Non-Conformity

This subsection mandates immediate remedial steps when a provider identifies or suspects that their high-risk AI system does not comply with the Regulation’s requirements (e.g., risk management under Article 9, data governance under Article 10, or transparency under Article 13). The goal is to restore conformity or remove the system from circulation, minimizing exposure. It also imposes a duty to notify downstream parties, ensuring the supply chain is informed to halt further distribution or use. This reflects a precautionary principle, prioritizing public protection over commercial interests.

Key Elements:

  • Trigger: Applies when the provider “considers or has reason to consider” non-conformity. This is a low threshold — based on internal assessments, user reports, or audits — encouraging vigilance without requiring proven harm.
  • Actions Required: Providers must take “necessary corrective actions” immediately. Options are graduated based on severity, allowing flexibility.
  • Notification Duty: Inform relevant parties to enable coordinated response.
  • Scope: Limited to providers of high-risk AI systems already on the market or in service.

Under Article 20(1), the corrective options are graduated: bring the system into conformity, withdraw it from the market, disable it, or recall it, as appropriate. The provider must also inform the distributors of the system concerned and, where applicable, the deployers, the authorised representative and the importers. A minimal corrective-action record is sketched below:
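
The following minimal Python sketch models such a case record; the enumerated actions and the parties to inform follow Article 20(1), while the class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import List


class CorrectiveAction(Enum):
    """The graduated options named in Article 20(1)."""
    BRING_INTO_CONFORMITY = "bring into conformity"
    WITHDRAW = "withdraw from the market"
    DISABLE = "disable"
    RECALL = "recall"


# Parties the provider must inform "accordingly" under Article 20(1).
PARTIES_TO_INFORM = (
    "distributors",
    "deployers (where applicable)",
    "authorised representative (where applicable)",
    "importers (where applicable)",
)


@dataclass
class NonConformityCase:
    system_id: str
    suspected_issue: str                       # e.g. failed data-governance check (Article 10)
    detected_at: datetime
    actions: List[CorrectiveAction] = field(default_factory=list)
    parties_notified: List[str] = field(default_factory=list)

    def take_corrective_action(self, action: CorrectiveAction) -> None:
        # Trigger is "considers or has reason to consider": action is expected
        # immediately, without waiting for proven harm.
        self.actions.append(action)

    def notify_supply_chain(self) -> None:
        self.parties_notified = list(PARTIES_TO_INFORM)
```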

Article 20(2): Investigation and Information for Risks

This builds on 20(1) by addressing scenarios where the non-conformity presents a “risk” as defined in Article 79(1), which could involve serious adverse effects. Providers must investigate root causes collaboratively and report to authorities, facilitating oversight and potential EU-wide alerts. This duty enhances post-market surveillance (linked to Article 72), ensuring risks are not siloed but shared for systemic improvements. It also involves notified bodies if certification was involved, closing the loop on pre-market approvals under Article 44.

Key Elements:

  • Trigger: The system presents a risk under Article 79(1), and the provider becomes aware (e.g., via monitoring, complaints, or logs from Article 19).
  • Investigation: Immediate, in collaboration with the reporting deployer if applicable, to identify causes like design flaws, data biases, or misuse.
  • Notification Duty: Inform competent authorities and, where relevant, the certifying notified body. Details must include the non-compliance nature and corrective actions taken.
  • Scope: Complements 20(1) but focuses on higher-stakes risks; aligns with broader reporting obligations in the Act.

The process under Article 20(2) runs as follows: once aware of the risk, the provider immediately investigates the causes, in collaboration with the reporting deployer where applicable, and then informs the market surveillance authorities of the Member States in which the system was made available and, where applicable, the notified body that issued a certificate for it, describing the nature of the non-compliance and any corrective action taken. A minimal notification sketch follows:
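
Here is a minimal Python sketch of how such a notification might be assembled; the recipients and required content follow Article 20(2), while the JSON-style structure is a hypothetical serialisation, not an official reporting format.

```python
import json
from datetime import datetime, timezone
from typing import List, Optional


def build_risk_notification(system_id: str,
                            risk_description: str,
                            nature_of_non_compliance: str,
                            corrective_actions: List[str],
                            notified_body_id: Optional[str] = None) -> str:
    """Assemble the information a provider would share under Article 20(2)."""
    recipients = ["market surveillance authorities of the Member States concerned"]
    if notified_body_id:
        recipients.append(f"notified body {notified_body_id} (issued the certificate)")
    payload = {
        "system_id": system_id,
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "risk_under_article_79_1": risk_description,
        "nature_of_non_compliance": nature_of_non_compliance,
        "corrective_actions_taken": corrective_actions,
        "recipients": recipients,
    }
    return json.dumps(payload, indent=2)


# Hypothetical usage: a bias risk surfaced through Article 19 logs.
print(build_risk_notification(
    "acme-hr-screener",
    "disparate error rates across demographic groups detected in Article 19 logs",
    "non-conformity with the data governance requirements of Article 10",
    ["disabled affected model version", "retraining with corrected dataset"],
    notified_body_id="NB-1234",
))
```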

Building on Articles 18 and 19, Article 20 integrates record-keeping (e.g., logs for investigations) into active risk mitigation, creating a continuous compliance cycle. Providers should establish internal protocols for monitoring, escalation, and documentation of these actions to demonstrate due diligence. For instance, logs retained under Article 19 could be key evidence in investigations here. If your high-risk AI system involves specific sectors (e.g., biometrics or education), cross-check with Annex III for tailored risks. Further context from related articles such as Article 79 (risk definitions) or Article 44 (certification) can deepen understanding.

EU AI Act Article 18, 19 and 20: Understanding Document Keeping, Automatically Generated Logs and… was originally published in Coinmonks on Medium.