Compliance

EU AI Act Compliance Checklist

A comprehensive EU AI Act compliance checklist covering all phases from AI system inventory to conformity assessment. Prepare for the August 2026 enforcement deadline with actionable steps.

12 min read · Updated 2025-07-20

What Is an EU AI Act Compliance Checklist?

An EU AI Act compliance checklist is a structured, step-by-step document that organizations use to systematically verify their AI systems meet every obligation set out in Regulation (EU) 2024/1689 — the European Union Artificial Intelligence Act. Rather than reading through over 400 pages of legal text and attempting to extract requirements ad hoc, a compliance checklist translates the regulation into concrete, verifiable actions organized by phase and priority.

The EU AI Act is the world's first binding legal framework dedicated entirely to artificial intelligence. It introduces a risk-based classification system that categorizes AI systems into four tiers: prohibited, high-risk, limited-risk, and minimal-risk. Each tier carries different obligations, but the bulk of the regulatory burden falls on providers and deployers of high-risk AI systems.

A well-constructed EU AI Act compliance checklist typically covers six core phases: inventorying your AI systems, classifying them by risk level, screening for prohibited practices under Article 5, implementing the detailed high-risk requirements found in Articles 8 through 15, preparing the technical documentation mandated by Article 11 and Annex IV, and readying your organization for conformity assessment.

Without a compliance checklist, organizations risk overlooking critical requirements. The regulation contains 35 individual obligations for high-risk systems alone, spread across multiple articles and annexes. A checklist consolidates these into a single actionable document that compliance officers, legal teams, and engineering departments can work through together. It transforms a complex regulatory challenge into a manageable project plan with clear milestones and deliverables.

Why You Need a Compliance Checklist Before August 2026

The high-risk AI system requirements under the EU AI Act become enforceable on August 2, 2026. Organizations that deploy or provide high-risk AI systems without proper compliance documentation face severe financial penalties: up to €35 million or 7% of global annual turnover for violations involving prohibited AI practices, and up to €15 million or 3% of global turnover for failing to meet high-risk system obligations.

These are not hypothetical figures. The EU AI Act establishes the European AI Office and empowers national market surveillance authorities to conduct inspections, request documentation, and issue sanctions. The enforcement infrastructure is already being built. National competent authorities across EU member states are staffing up and developing their supervisory frameworks.

The compliance timeline is tighter than it appears. While August 2026 seems distant, achieving full compliance for a high-risk AI system requires months of sustained effort. Consider what is involved: cataloguing every AI system in your organization, determining which fall under the regulation's scope, classifying each by risk level, implementing technical requirements for data governance, risk management, transparency, and human oversight, producing detailed technical documentation covering all 12 sections of Annex IV, and preparing for third-party conformity assessment where applicable.

Organizations that start their EU AI Act compliance checklist process early gain several advantages. They have time to address gaps without rushing. They can allocate budgets across fiscal periods rather than absorbing the full cost in a single quarter. They can train staff incrementally. And they can engage with notified bodies for conformity assessment before appointment backlogs develop closer to the deadline.

The cost of non-compliance extends beyond fines. The EU AI Act also empowers authorities to order the withdrawal of non-compliant AI systems from the market and to require providers to take corrective action. For organizations whose products or services depend on AI, a forced market withdrawal represents an existential business risk that dwarfs the financial penalties.

Starting your EU AI Act compliance checklist now is not just prudent regulatory planning — it is a strategic business decision. Organizations that achieve compliance early can market their AI systems as regulation-ready, creating competitive advantage in a market where buyers are increasingly asking about AI governance credentials.

Automate Your Compliance Documentation

AuditDraft generates Article 11-compliant model cards and tracks all 35 high-risk requirements.

Complete EU AI Act Compliance Checklist

Below is a comprehensive EU AI Act compliance checklist organized into six phases. Work through each phase sequentially, as later phases build on the outputs of earlier ones.

Phase 1: AI System Inventory

  • [ ] Identify all AI systems developed, deployed, or procured by your organization
  • [ ] Document the intended purpose and operational context for each system
  • [ ] Record the provider, deployer, and any third-party components involved
  • [ ] Map data flows including training data sources and operational inputs
  • [ ] Identify the EU market(s) where each system is placed or put into service
  • [ ] Assign an internal owner or responsible person for each AI system (a machine-readable record format is sketched below)
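
Because every later phase builds on this inventory, it helps to keep it in a machine-readable format from the start. Below is a minimal sketch in Python; the field names are illustrative choices, not terms drawn from the regulation:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of the Phase 1 inventory (field names are illustrative)."""
    name: str
    intended_purpose: str                  # what the system is meant to do
    operational_context: str               # where and how it is actually used
    provider: str                          # who built or supplies the system
    deployer: str                          # who operates it in practice
    third_party_components: list[str] = field(default_factory=list)
    training_data_sources: list[str] = field(default_factory=list)
    eu_markets: list[str] = field(default_factory=list)   # member states served
    internal_owner: str = ""               # accountable person for this record

inventory = [
    AISystemRecord(
        name="resume-screening-v2",
        intended_purpose="Rank job applications for recruiter review",
        operational_context="HR department, EU-wide hiring",
        provider="Example vendor",
        deployer="Our organization",
        third_party_components=["hosted LLM API"],
        training_data_sources=["historical applications 2019-2024"],
        eu_markets=["DE", "FR"],
        internal_owner="hr-lead@example.com",
    ),
]
```

A structured record like this can still be exported to the spreadsheet formats auditors expect, while remaining easy to validate programmatically.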

Phase 2: Risk Classification Under Article 6

  • [ ] Review Article 6 and Annex III to determine high-risk classification criteria
  • [ ] Assess whether each system falls within a regulated product category (Annex I)
  • [ ] Evaluate whether each system is used in a high-risk area listed in Annex III (biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)
  • [ ] Document your classification rationale with supporting evidence (the underlying decision logic is sketched after this list)
  • [ ] Identify any systems that qualify for the Article 6(3) derogation (Annex III systems that do not pose a significant risk of harm to health, safety, or fundamental rights)
  • [ ] Have the classification reviewed by legal counsel or a qualified compliance professional
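
The classification assessment reduces to a small decision tree over Article 6 and Annex III. The sketch below paraphrases that logic in Python; the area names and the significant-risk test are simplifications for illustration, not a substitute for legal review:

```python
# Simplified paraphrase of the Article 6 decision logic (illustrative only).
ANNEX_III_AREAS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "administration of justice",
}

def classify(annex_i_safety_component: bool, annex_iii_area: str | None,
             poses_significant_risk: bool) -> str:
    """Return a draft risk classification for one AI system."""
    if annex_i_safety_component:
        # Safety component of a regulated product (Article 6(1)).
        return "high-risk"
    if annex_iii_area in ANNEX_III_AREAS:
        if not poses_significant_risk:
            # Article 6(3) derogation: document the rationale before relying on it.
            return "not high-risk (Article 6(3) derogation; document rationale)"
        return "high-risk"
    return "not high-risk (check limited-risk transparency duties separately)"

print(classify(False, "employment", True))  # -> high-risk
```

Whatever the outcome, record the inputs to each decision: the rationale, not just the label, is what a reviewer will ask for.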

Phase 3: Article 5 Prohibition Screening

  • [ ] Screen for subliminal manipulation techniques that cause harm (Article 5(1)(a))
  • [ ] Screen for exploitation of vulnerabilities due to age, disability, or a specific social or economic situation (Article 5(1)(b))
  • [ ] Screen for social scoring that leads to detrimental or unfavorable treatment (Article 5(1)(c))
  • [ ] Screen for individual criminal offense risk assessment based solely on profiling (Article 5(1)(d))
  • [ ] Screen for untargeted scraping of facial images for recognition databases (Article 5(1)(e))
  • [ ] Screen for emotion recognition in workplaces and education except for safety/medical reasons (Article 5(1)(f))
  • [ ] Screen for biometric categorization to infer sensitive attributes (Article 5(1)(g))
  • [ ] Screen for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, outside the narrowly defined exceptions (Article 5(1)(h))
  • [ ] Document screening results and maintain evidence of non-applicability

Phase 4: High-Risk System Compliance (Articles 8–15)

  • [ ] Risk Management (Article 9): Establish a risk management system that operates throughout the AI system's lifecycle
  • [ ] Risk Management: Identify and analyze known and foreseeable risks, estimate and evaluate residual risks
  • [ ] Risk Management: Implement risk mitigation measures and test their effectiveness
  • [ ] Data Governance (Article 10): Ensure training, validation, and testing datasets meet quality criteria
  • [ ] Data Governance: Document data collection processes, preparation methods, and any assumptions made
  • [ ] Data Governance: Address potential biases in datasets, especially when processing special categories of personal data
  • [ ] Technical Documentation (Article 11): Prepare documentation covering all 12 sections of Annex IV
  • [ ] Record-Keeping (Article 12): Implement automatic logging of system events (a logging sketch follows this checklist)
  • [ ] Record-Keeping: Ensure logs capture operation periods, reference databases, input data, and identification of involved natural persons
  • [ ] Transparency (Article 13): Design the system to enable deployers to interpret outputs
  • [ ] Transparency: Provide clear instructions for use including system capabilities and limitations
  • [ ] Human Oversight (Article 14): Build in measures allowing human oversight during operation
  • [ ] Human Oversight: Enable human operators to understand system capabilities and recognize automation bias
  • [ ] Human Oversight: Implement the ability for operators to override, reverse, or stop the system
  • [ ] Accuracy, Robustness, Cybersecurity (Article 15): Achieve and declare appropriate levels of accuracy
  • [ ] Accuracy, Robustness, Cybersecurity: Ensure resilience against errors, faults, and inconsistencies
  • [ ] Accuracy, Robustness, Cybersecurity: Protect against unauthorized third-party manipulation
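
Of these Phase 4 items, the Article 12 record-keeping duty translates most directly into code. The sketch below shows one way to emit structured, automatically generated event records using Python's standard logging module; the exact fields your system must capture depend on its type, so treat these as placeholders:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system.audit")

def log_inference_event(system_id: str, input_ref: str,
                        output_summary: str, operator_id: str) -> None:
    """Append one automatically generated event record (placeholder fields)."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,        # pointer to the input data, not the data itself
        "output_summary": output_summary,
        "operator_id": operator_id,    # natural person involved, where applicable
    }
    logger.info(json.dumps(event))

log_inference_event("resume-screening-v2", "application-8841",
                    "ranked 14 of 120", "recruiter-07")
```

In production these records would go to tamper-evident, retention-managed storage rather than standard output; the point is that logging is designed in from the start, not bolted on.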

Phase 5: Technical Documentation (Annex IV)

  • [ ] General description of the AI system and its intended purpose
  • [ ] Detailed description of system development, including design specifications and architecture
  • [ ] Information about monitoring, functioning, and control mechanisms
  • [ ] Description of the risk management system
  • [ ] Description of changes made throughout the system lifecycle
  • [ ] Performance metrics and accuracy levels declared
  • [ ] Description of training, validation, and testing data and methodologies
  • [ ] Information about cybersecurity measures
  • [ ] Description of computational resources used for development and operation
  • [ ] Description of the quality management system in place
  • [ ] Detailed instructions for use provided to deployers
  • [ ] Information about foreseeable misuse and additional risks (a version-controlled documentation skeleton is sketched below)
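
Because this documentation must be kept current, many teams maintain it as a structured template under version control and render it to a document on demand. A minimal sketch, with section keys that paraphrase the twelve items above:

```python
# Annex IV documentation skeleton; keys paraphrase the twelve items above.
ANNEX_IV_TEMPLATE: dict[str, str] = {
    "general_description": "",
    "development_and_architecture": "",
    "monitoring_and_control": "",
    "risk_management_system": "",
    "lifecycle_changes": "",
    "performance_metrics": "",
    "data_and_methodologies": "",
    "cybersecurity_measures": "",
    "computational_resources": "",
    "quality_management_system": "",
    "instructions_for_use": "",
    "foreseeable_misuse_and_risks": "",
}

def missing_sections(doc: dict[str, str]) -> list[str]:
    """List the sections still empty, so nothing required ships blank."""
    return [key for key, value in doc.items() if not value.strip()]

draft = dict(ANNEX_IV_TEMPLATE, general_description="Resume screening system ...")
print(missing_sections(draft))  # every key except general_description
```

A completeness check like missing_sections is a cheap guard against the most common documentation failure: a section that was planned but never written.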

Phase 6: Conformity Assessment Preparation

  • [ ] Determine whether self-assessment or third-party assessment is required (Article 43)
  • [ ] If third-party assessment: identify and engage a notified body
  • [ ] Prepare the quality management system documentation (Article 17)
  • [ ] Compile all technical documentation for review
  • [ ] Prepare the EU declaration of conformity (Article 47)
  • [ ] Plan for CE marking affixation (Article 48)
  • [ ] Establish post-market monitoring procedures (Article 72)
  • [ ] Define and document your serious incident reporting process (Article 73)

Common Compliance Mistakes to Avoid

Even organizations that take EU AI Act compliance seriously can stumble on common pitfalls. Identifying these mistakes early saves time, resources, and potential regulatory exposure.

1. Misclassifying the risk level. This is the single most consequential error an organization can make. Classifying a high-risk system as limited or minimal risk means you skip the entire Articles 8–15 compliance track, leaving your organization exposed to the full penalty regime. The classification criteria in Article 6 and Annex III require careful analysis of the system's intended purpose and operational context, not just its technical characteristics. When in doubt, classify upward.

2. Treating Annex IV documentation as a one-time task. Article 11 requires technical documentation to be drawn up before the system is placed on the market and kept up to date throughout its lifecycle. Many organizations produce initial documentation during development but fail to establish processes for updating it when the system is retrained, fine-tuned, or its operational context changes. The documentation must reflect the system as it exists today, not as it existed at launch.

3. Overlooking deployer obligations. Organizations that procure and deploy AI systems built by third parties are not exempt from the EU AI Act. Deployers of high-risk systems have specific obligations under Articles 26 and 27, including conducting fundamental rights impact assessments, using the system according to its instructions for use, and monitoring operations. Many organizations assume compliance is solely the provider's responsibility.

4. Ignoring the human oversight requirement. Article 14 is among the most technically challenging requirements. It mandates that high-risk AI systems include measures allowing human oversight, and that operators be able to understand, monitor, and intervene in system behavior. Simply adding a manual override button is insufficient. The regulation requires that operators can genuinely interpret the system's output and recognize when the system is not performing as intended.
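
One common implementation pattern is a gate in the serving path: the system returns its output together with enough context for an operator to judge it, and cases the system is unsure about are held for a human before anything takes effect. A minimal sketch, assuming a hypothetical predict function and confidence threshold; this illustrates one pattern, not the full scope of Article 14:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    confidence: float
    explanation: str   # context the operator needs to interpret the output

def predict(case_id: str) -> Decision:
    # Hypothetical model call; replace with your system's inference.
    return Decision("approve", 0.62, "score driven mainly by features A and B")

def decide_with_oversight(case_id: str, review_below: float = 0.8) -> str:
    """Hold low-confidence outputs for human review instead of acting on them.

    Operators must also be able to override, reverse, or stop the system
    at any time; this gate covers only the routing part of oversight.
    """
    decision = predict(case_id)
    if decision.confidence < review_below:
        return f"HELD for human review: {decision.explanation}"
    return f"AUTO: {decision.outcome}"

print(decide_with_oversight("case-123"))  # -> HELD for human review: ...
```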

5. Failing to document bias testing and mitigation. Article 10 requires data governance measures that address potential biases in training and testing data. Organizations that cannot demonstrate they have assessed their datasets for bias and taken steps to mitigate identified issues will face difficulties during conformity assessment. This is particularly critical for systems processing special categories of personal data under Article 10(5).
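
Even a simple, documented check is far better than none. The sketch below compares selection rates across groups, a common starting point for bias assessment; the metric and any threshold you apply to it are illustrative choices, not values mandated by the Act:

```python
from collections import defaultdict

def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the selection rate per group from (group, was_selected) pairs."""
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}

# Toy data: group label and whether the system selected the person.
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]

rates = selection_rates(data)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"ratio={ratio:.2f}")  # a ratio far below 1.0 warrants review
```

Keep the results alongside your Article 10 records: what was tested, what was found, and what mitigation followed.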

6. Not establishing a post-market monitoring plan. Article 72 requires providers of high-risk AI systems to establish a post-market monitoring system. This is not optional, and it must be proportionate to the nature of the system and its risks. Organizations that focus exclusively on pre-market compliance overlook this ongoing obligation, creating a compliance gap that widens over time.

Download: EU AI Act Compliance Checklist

Download a printable version of the complete EU AI Act compliance checklist covering all six phases.

Tools and Templates for EU AI Act Compliance

Completing your EU AI Act compliance checklist requires a combination of legal analysis, technical assessment, and documentation. Organizations have two primary approaches: manual compliance and software-assisted compliance.

Manual compliance involves working through the regulation's requirements using spreadsheets, word processors, and internal project management tools. This approach works for organizations with a small number of AI systems and access to legal expertise, but it scales poorly. Each high-risk system requires documentation across all 12 Annex IV sections, ongoing risk management records, data governance documentation, and post-market monitoring reports. For organizations with multiple systems, the documentation burden becomes substantial.

Software-assisted compliance uses specialized platforms designed for the EU AI Act to streamline the process. Key capabilities to look for include risk classification wizards that walk you through Article 6 criteria, template-based documentation that covers all Annex IV requirements, compliance tracking dashboards that monitor your progress against Articles 8–15, and export functionality that produces audit-ready documents.

AuditDraft is purpose-built for EU AI Act compliance. The platform includes an Article 6 risk classification wizard that determines your system's risk level through 8 criteria-based questions derived directly from the regulation. It generates Annex IV-compliant model cards with AI-assisted drafting and quality scoring to ensure no required section is left incomplete. The compliance tracker monitors all 35 individual high-risk requirements, allowing teams to assign owners, attach evidence, and track progress in real time.

For organizations beginning their compliance journey, the most practical approach is to use the checklist in the previous section as your roadmap and leverage software to handle the documentation-intensive phases. Whether you choose manual methods or a compliance platform, the critical factor is starting early, before the August 2026 deadline creates time pressure that compromises the quality of your compliance work.

Frequently Asked Questions

What is the EU AI Act compliance checklist?

An EU AI Act compliance checklist is a structured document that guides organizations through every obligation in Regulation (EU) 2024/1689. It covers AI system inventory, risk classification under Article 6, prohibition screening under Article 5, high-risk requirements from Articles 8–15, Annex IV technical documentation, and conformity assessment preparation.

How do I classify my AI system's risk level under the EU AI Act?

Risk classification is determined by your AI system's intended purpose and operational context, following Article 6 and Annex III criteria. Systems used in areas like biometric identification, critical infrastructure, employment, education, law enforcement, or migration are typically classified as high-risk. AuditDraft's classification wizard guides you through the official criteria in 8 questions.

What are the EU AI Act compliance deadlines?

The EU AI Act has a phased enforcement timeline. Prohibited practices became enforceable on February 2, 2025. General-purpose AI model rules apply from August 2, 2025. High-risk AI system requirements take effect August 2, 2026. The final provisions, covering high-risk systems embedded in regulated products under Annex I, apply from August 2, 2027.

What are the penalties for failing the EU AI Act compliance checklist?

Penalties are tiered by violation severity: up to €35 million or 7% of global annual turnover for prohibited AI practices, up to €15 million or 3% for high-risk system violations, and up to €7.5 million or 1% for providing incorrect information to authorities. Authorities can also order the withdrawal of non-compliant systems from the market.

Do I need a compliance checklist if my AI system is low-risk?

While minimal-risk AI systems face fewer obligations, all AI systems placed on the EU market must comply with certain transparency requirements. A compliance checklist helps you verify classification, screen for prohibited practices, and document your assessment. Even if your system is minimal-risk, having documentation of your risk assessment protects against future reclassification challenges.

Start Documenting Your AI Systems Today

High-risk AI system requirements take effect August 2, 2026. Begin your compliance journey now.
