EU AI Act Compliance Checklist for CTOs and CIOs
Actionable compliance checklist for technology leaders — assess your AI systems, understand requirements, and build a roadmap to EU AI Act compliance before the 2026 deadline.
The EU AI Act (Regulation 2024/1689) is now law, and its obligations are phasing in on a strict timeline. As a CTO or CIO, you are the person most likely to own the technical side of compliance — and the one who will need to answer when the board asks, "Are we ready?"
This article provides a structured, actionable checklist to help you assess your organisation's position, identify gaps, and build a compliance roadmap. It is designed for technology leaders at companies that develop, deploy, or procure AI systems that touch the EU market.
Prohibited AI practices have been enforceable since February 2, 2025. General-purpose AI model obligations take effect August 2, 2025. Full enforcement for high-risk AI systems begins August 2, 2026. The time to act is now.
Phase 1: Discovery and Inventory
Before you can comply, you need to know what you have. Most organisations significantly underestimate the number of AI systems they operate.
Checklist: AI System Inventory
- Identify all AI systems across the organisation — not just the ones labelled "AI." The regulation's definition (Article 3(1)) is broad: any machine-based system designed to operate with varying levels of autonomy that generates outputs such as predictions, recommendations, decisions, or content.
- Include third-party AI — systems you procure, embed, or access via API. As a deployer, you have obligations even for systems you did not build.
- Map AI systems to business functions — HR, customer service, fraud detection, content moderation, supply chain, marketing, product features, internal tools.
- Document the purpose and scope of each system — what decisions it influences, what data it processes, who is affected by its outputs.
- Identify the provider for each system — is it built in-house, procured from a vendor, or open-source? Your obligations differ depending on your role in the value chain.
The definition of "AI system" under the regulation is intentionally broad. It encompasses machine learning models, rule-based expert systems, statistical approaches, and hybrid systems. When in doubt, include a system in your inventory — it is far better to over-classify and then exclude than to miss a system that turns out to be in scope.
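The inventory items above can be captured in a lightweight AI register. Below is a minimal sketch of one register entry; the field names and the example system are illustrative assumptions, not anything prescribed by the regulation.

```python
from dataclasses import dataclass, field

# Sketch of a single AI-register entry; field names are illustrative,
# not mandated by the AI Act.
@dataclass
class AISystemRecord:
    name: str
    business_function: str        # e.g. "HR", "fraud detection"
    purpose: str                  # what decisions it influences
    affected_parties: str         # who is impacted by its outputs
    provider: str                 # "in-house", vendor name, or "open-source"
    role: str                     # your role: "provider", "deployer", ...
    third_party: bool = False     # procured, embedded, or API-accessed
    notes: list[str] = field(default_factory=list)

# A register is then simply a list of these records, reviewed regularly.
register: list[AISystemRecord] = [
    AISystemRecord(
        name="cv-screening-model",
        business_function="HR",
        purpose="ranks incoming job applications",
        affected_parties="job applicants",
        provider="VendorX",
        role="deployer",
        third_party=True,
    ),
]
```

Even a structure this simple forces the questions the checklist asks: who built it, who is affected, and what role you hold for it.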
Checklist: Role Identification
The AI Act assigns different obligations based on your role. You may hold multiple roles simultaneously:
- Provider (Article 3(3)) — you develop an AI system or have one developed on your behalf and place it on the market or put it into service under your own name or trademark
- Deployer (Article 3(4)) — you use an AI system under your authority (even if you did not build it)
- Importer — you place on the EU market an AI system from a provider established outside the EU
- Distributor — you make an AI system available on the EU market without being a provider or importer
- Authorised representative — you are mandated by a non-EU provider to act on their behalf
Phase 2: Risk Classification
The heart of the AI Act is its risk-based approach. Your obligations depend entirely on which risk category each AI system falls into.
Checklist: Prohibited Practices Screen (Article 5)
- Review each AI system against the eight prohibited practices in Article 5
- Flag any system that involves: subliminal or manipulative techniques, exploitation of vulnerable groups, social scoring, individual predictive policing, untargeted facial image scraping, emotion recognition in workplaces/education, biometric categorisation for sensitive attributes, or real-time remote biometric identification in public spaces
- For flagged systems: determine immediately whether the system must be discontinued, redesigned, or falls within a narrow exception
- Document your analysis for each system — including the rationale for why systems near the boundary are not prohibited
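A first-pass screen over the register can be automated. This sketch flags systems whose declared capabilities match an Article 5 category; the category labels are shorthand tags of my own, not the regulation's exact wording, and a hit means "escalate for legal analysis", not a final determination.

```python
# Illustrative Article 5 screen; category tags are shorthand, not legal text.
PROHIBITED = {
    "subliminal_manipulation",
    "exploitation_of_vulnerable_groups",
    "social_scoring",
    "individual_predictive_policing",
    "untargeted_facial_scraping",
    "workplace_emotion_recognition",
    "sensitive_biometric_categorisation",
    "realtime_remote_biometric_id",
}

def screen(system: dict) -> dict:
    """Flag prohibited-practice matches and prompt the follow-up decision."""
    hits = PROHIBITED & set(system.get("capabilities", []))
    return {
        "system": system["name"],
        "flagged": sorted(hits),
        "action": ("escalate: discontinue, redesign, or verify exception"
                   if hits else "none"),
    }

result = screen({"name": "hr-sentiment",
                 "capabilities": ["workplace_emotion_recognition"]})
```

The `action` string is the point: a flagged system needs an immediate documented decision, exactly as the checklist item above requires.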
Checklist: High-Risk Classification (Articles 6 and Annex III)
- Check Annex I — does your AI system serve as a safety component of, or is it itself, a product covered by EU harmonisation legislation (medical devices, machinery, toys, vehicles, aviation, etc.)?
- Check Annex III — does your AI system fall into one of the eight high-risk areas?
- Biometric identification and categorisation
- Management and operation of critical infrastructure
- Education and vocational training (admissions, assessments)
- Employment, worker management, and access to self-employment (recruitment, task allocation, monitoring, evaluation)
- Access to essential private and public services (credit scoring, insurance pricing, emergency services)
- Law enforcement (risk assessment, polygraphs, evidence analysis)
- Migration, asylum, and border control
- Administration of justice and democratic processes
- Apply the exception in Article 6(3) — even if listed in Annex III, a system is not high-risk if it does not pose a significant risk of harm. Document this assessment carefully if you rely on it.
- Document the classification rationale for every AI system — this will be a key artefact in any regulatory inquiry
Checklist: Limited-Risk and Minimal-Risk Systems
- Identify systems with transparency obligations (Article 50) — chatbots, deepfakes, emotion recognition, biometric categorisation
- Classify remaining systems as minimal risk — voluntary codes of conduct encouraged but no mandatory obligations
- Document all classifications in your AI register
Phase 3: Gap Analysis for High-Risk Systems
If you have identified high-risk AI systems, this is where the substantive work begins. Articles 8 through 15 define the requirements.
Checklist: Data Governance (Article 10)
- Document training, validation, and testing datasets — including their provenance, relevance, representativeness, and any known gaps or biases
- Implement data quality criteria — examine data for errors, incompleteness, and biases before training and on an ongoing basis
- Ensure appropriate data governance — including data collection processes, labelling, cleaning, and enrichment operations
- Address bias in datasets — especially concerning protected characteristics (gender, ethnicity, age, disability)
- For personal data processing: confirm GDPR compliance, including legal basis, data minimisation, and purpose limitation
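One concrete data-governance check is comparing group shares in a training set against a reference population. This is a minimal sketch; the attribute names, reference shares, and tolerance are all assumptions you would set per system.

```python
from collections import Counter

# Minimal representativeness check: flag groups whose share in the data
# deviates from a reference distribution by more than a tolerance.
def representation_gaps(records: list[dict], attribute: str,
                        reference: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

data = [{"gender": "f"}] * 20 + [{"gender": "m"}] * 80
print(representation_gaps(data, "gender", {"f": 0.5, "m": 0.5}))
# → {'f': -0.3, 'm': 0.3}
```

Checks like this belong in the "ongoing basis" part of the checklist: run them before training and periodically after deployment, and record the results.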
Checklist: Technical Documentation (Article 11)
- Prepare technical documentation before a system is placed on the market — this is not optional and must be kept up to date
- Include all elements specified in Annex IV: general description, development process, monitoring and control, risk management, changes, and standards applied
- Ensure documentation is comprehensive enough for authorities to assess compliance — vague or superficial documentation will not suffice
Checklist: Record-Keeping and Logging (Article 12)
- Implement automatic logging of events during the AI system's operation — the regulation requires traceability
- Ensure logs capture: the period of each use, the reference database against which input data was checked, the input data for which the search led to a match, and the identification of persons involved in human oversight
- Define retention periods for logs — at minimum for the period appropriate to the intended purpose of the system, and no less than six months unless otherwise specified
- Ensure logs are accessible to deployers and available to authorities upon request
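An append-only, structured log is the simplest way to satisfy these points. The sketch below mirrors the Article 12 elements listed above; the field names, file format, and retention figure are illustrative choices, not specified by the regulation.

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # illustrative: at least six months

# Sketch of an automatic event-log entry covering the Article 12 elements.
def log_event(system_id: str, reference_db: str,
              matched_input: str, overseer: str) -> dict:
    entry = {
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),  # period of use
        "reference_database": reference_db,   # database input was checked against
        "matched_input": matched_input,       # input data that led to a match
        "human_overseer": overseer,           # person involved in oversight
    }
    # Append-only JSON lines keep the trail tamper-evident and easy
    # to hand to deployers or authorities on request.
    with open("ai_audit.log", "a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry
```

Whatever storage you choose, the essential properties are the same: automatic capture, structured fields, defined retention, and access for deployers and authorities.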
Checklist: Transparency and Information (Article 13)
- Design systems to be transparent — deployers must be able to understand and use the system's output appropriately
- Provide clear instructions for use covering: intended purpose, level of accuracy and robustness, known limitations, foreseeable misuse risks, human oversight measures, and expected input data specifications
- Where applicable: inform individuals that they are subject to a decision made by a high-risk AI system
Checklist: Human Oversight (Article 14)
- Design systems for effective human oversight — this means real, meaningful oversight, not a rubber stamp
- Ensure the human overseer can: fully understand the system's capabilities and limitations, correctly interpret outputs, decide not to use the system or override its output, and intervene or halt the system
- Implement "human-on-the-loop" or "human-in-the-loop" mechanisms appropriate to the risk level and context
- Train the individuals responsible for human oversight — they must have the competence, training, and authority to exercise their role
Checklist: Accuracy, Robustness, and Cybersecurity (Article 15)
- Define and declare accuracy metrics — the system must achieve the level of accuracy appropriate to its intended purpose
- Test for robustness — the system should perform consistently under expected conditions and handle errors or inconsistencies gracefully
- Implement cybersecurity measures — protect against unauthorised third-party manipulation of training data, inputs, or the model itself (data poisoning, adversarial examples, model extraction)
- Document all testing results and include them in the technical documentation
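Declaring an accuracy level and probing robustness can both be wired into your test suite. The toy classifier, threshold, and noise level below are stand-ins; the pattern is what matters.

```python
import random

DECLARED_ACCURACY = 0.90   # the level you declare in the documentation

def accuracy(model, cases: list[tuple[float, int]]) -> float:
    return sum(model(x) == y for x, y in cases) / len(cases)

# Simple robustness probe: perturb inputs and check accuracy holds up.
def robustness_probe(model, cases, noise=0.01, seed=0):
    rng = random.Random(seed)
    perturbed = [(x + rng.uniform(-noise, noise), y) for x, y in cases]
    return accuracy(model, perturbed)

model = lambda x: int(x > 0.5)                        # toy stand-in classifier
cases = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
assert accuracy(model, cases) >= DECLARED_ACCURACY            # nominal
assert robustness_probe(model, cases) >= DECLARED_ACCURACY    # under noise
```

Failing assertions like these before release, and archiving the results in the technical documentation, is exactly the evidence trail Article 15 anticipates.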
Phase 4: Organisational Readiness
Technical compliance is necessary but not sufficient. You also need organisational structures and processes.
Checklist: Quality Management System (Article 17)
- Establish a quality management system proportionate to your organisation's size — documented policies and procedures for AI system development, deployment, and monitoring
- Include compliance strategy, resource allocation, and accountability in the QMS
- Implement a post-market monitoring system (Article 72) — systematic processes to collect and analyse data on the performance of your AI systems after deployment
- Define a serious incident reporting process (Article 73) — you must report to authorities within 15 days of becoming aware of a serious incident
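Deadline tracking for incident reports is trivial to automate and easy to get wrong under pressure. A sketch, assuming the general 15-day rule (Article 73 sets shorter deadlines in some cases, e.g. death or widespread infringement):

```python
from datetime import date, timedelta

# General Article 73 rule: report no later than 15 days after awareness.
# Shorter deadlines apply to certain serious incidents.
def reporting_deadline(aware_on: date, days: int = 15) -> date:
    return aware_on + timedelta(days=days)

print(reporting_deadline(date(2026, 9, 1)))  # → 2026-09-16
```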
Checklist: Conformity Assessment (Articles 43-44)
- Determine which conformity assessment procedure applies to each high-risk system — internal control (Annex VI) or third-party assessment (Annex VII)
- Identify whether a notified body is required — this is mandatory for certain biometric and critical infrastructure AI systems
- Prepare the EU declaration of conformity (Article 47) — a formal statement that the system meets all applicable requirements
- Affix the CE marking (Article 48) before placing the system on the market
- Register the system in the EU database (Article 49)
Checklist: Deployer Obligations (Article 26)
If you deploy (use) high-risk AI systems built by others:
- Use the system in accordance with the provider's instructions for use
- Assign human oversight to individuals with the necessary competence, training, and authority
- Ensure input data is relevant and sufficiently representative for the system's intended purpose
- Monitor the system's operation and report any serious incidents to the provider and relevant authority
- Conduct a fundamental rights impact assessment (Article 27) if you are a body governed by public law or a private entity providing public services
- Inform individuals that they are subject to a high-risk AI system decision, where required
Deployer obligations apply even if you purchased an AI system from a vendor that claims to be "EU AI Act compliant." Compliance is a shared responsibility. You cannot outsource your deployer obligations to your provider.
Phase 5: Timeline and Roadmap
Build a concrete roadmap with milestones aligned to the regulation's phased enforcement:
- February 2, 2025 — prohibitions on the Article 5 practices apply; any flagged systems must already be discontinued or redesigned
- August 2, 2025 — general-purpose AI model obligations (Articles 51-55) apply
- August 2, 2026 — the full high-risk regime (Articles 8-15, conformity assessment, registration) becomes enforceable
- August 2, 2027 — extended transition period ends for high-risk AI embedded in Annex I regulated products
Work backwards from each date: inventory and classification first, then gap analysis, then remediation and conformity assessment, with buffer time for notified-body involvement where required.
Phase 6: Ongoing Compliance
Compliance is not a one-time project. The AI Act requires continuous monitoring and adaptation.
Checklist: Continuous Obligations
- Monitor AI system performance post-deployment — track accuracy, bias, and reliability metrics over time
- Update technical documentation whenever the system is substantially modified (Article 43(4))
- Report serious incidents to the relevant market surveillance authority within the required timeframe
- Stay current with regulatory guidance — the AI Office, AI Board, and national authorities will issue implementing acts, standards, and codes of practice throughout 2025-2027
- Conduct periodic internal audits of your AI systems against the regulation's requirements
- Retrain your teams as the regulatory landscape evolves
- Maintain your AI register — keep it current as systems are added, modified, or retired
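Post-deployment monitoring of the kind listed above can start as simply as a rolling accuracy window compared against your declared level. The margin and window size in this sketch are assumptions to tune per system.

```python
from collections import deque

# Sketch of drift watching: alert when rolling accuracy falls below
# the declared level minus a margin, triggering review and doc updates.
class DriftMonitor:
    def __init__(self, declared: float, margin: float = 0.05, window: int = 100):
        self.declared, self.margin = declared, margin
        self.outcomes: deque[int] = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.outcomes.append(int(correct))

    def degraded(self) -> bool:
        if not self.outcomes:
            return False
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate < self.declared - self.margin

mon = DriftMonitor(declared=0.90)
for _ in range(50):
    mon.record(True)
for _ in range(50):
    mon.record(False)       # performance collapses in production
assert mon.degraded()       # triggers review and documentation update
```

A `degraded()` alert is the cue to re-run your bias and robustness checks, update the technical documentation, and, if the change is substantial, revisit conformity under Article 43(4).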
Organisations that treat AI Act compliance as an ongoing programme — not a one-time checkbox exercise — will find themselves better positioned not just for regulatory compliance, but for building AI systems that are genuinely trustworthy. The requirements for data governance, transparency, human oversight, and robustness are simply good engineering practices codified into law.
Common Pitfalls for Technology Leaders
Based on the regulation's requirements and early enforcement signals, here are the mistakes CTOs and CIOs should actively avoid:
Underestimating scope. The AI Act's definition of "AI system" is broader than many expect. If you only look at systems explicitly labelled as "AI" or "ML," you will miss rule-based systems, statistical models, and embedded AI in third-party tools.
Treating compliance as legal-only. AI Act compliance requires deep technical work — data governance, logging, testing, documentation. Legal teams cannot do this alone. Engineering leadership must be directly involved.
Ignoring deployer obligations. Many CTOs assume that if they buy an AI system from a compliant vendor, they are covered. They are not. Deployers have independent obligations for human oversight, monitoring, and input data quality.
Delaying action. With prohibited practices already enforceable and high-risk obligations taking effect in August 2026, organisations that have not started their compliance programme are already behind.
Overlooking GPAI model obligations. If your organisation provides or fine-tunes a general-purpose AI model, additional requirements under Articles 51-55 apply, with obligations taking effect from August 2, 2025.
Conclusion
EU AI Act compliance is a technical and organisational challenge that sits squarely in the CTO's and CIO's domain. The checklist above provides a structured path from discovery through ongoing compliance, aligned with the regulation's phased enforcement timeline.
The organisations that will navigate this transition most smoothly are those that start now, invest in systematic inventory and classification, and treat compliance not as a burden but as an opportunity to build AI systems that are transparent, robust, and worthy of trust.
The deadline is clear. The requirements are defined. The question is whether your organisation will be ready.