EU AI Act Compliance Guide: 10 Steps to Get Ready
Compliance with the EU AI Act (Regulation 2024/1689) is not a single action — it is a structured programme that touches technology, governance, legal, and operational functions across your organisation. This guide walks through ten essential steps to build a comprehensive compliance programme, from initial discovery through ongoing monitoring.
The obligations are phasing in now. Prohibited practices and AI literacy requirements have been enforceable since February 2, 2025. General-purpose AI model obligations take effect on August 2, 2025. Most remaining obligations, including the high-risk requirements for Annex III systems, apply from August 2, 2026, with an extended transition to August 2, 2027 for high-risk AI embedded in products regulated under Annex I. Regardless of where you are in the process, the time to start is now.
This guide is designed for organisations that develop AI systems (providers), use AI systems (deployers), or both. Your specific obligations depend on your role in the AI value chain — but every organisation using AI in a professional capacity has at least some obligations under the AI Act.
Step 1: Map Your AI Systems
You cannot comply with a regulation if you do not know what it applies to. The first step is creating a comprehensive inventory of every AI system in your organisation.
What Counts as an AI System
The AI Act's definition (Article 3(1)) is broad: a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions. This captures:
- Machine learning models (supervised, unsupervised, reinforcement learning)
- Deep learning systems (neural networks, transformers, diffusion models)
- Statistical and Bayesian approaches used for inference
- Logic-based and knowledge-based systems
- Hybrid systems combining multiple approaches
How to Conduct an AI Inventory
Survey every department. AI systems are often adopted departmentally — marketing, HR, finance, customer service, product, operations. Each department may have AI tools that central IT is unaware of.
Include third-party AI. Systems you access via API, embed in your products, or use through SaaS platforms are in scope. As a deployer, you have obligations even for AI systems you did not build.
Check embedded AI. Many software products contain AI components that are not prominently labelled. Your CRM, ERP, or productivity suite may include AI features that fall within scope.
Document each system. For every AI system identified, record: its name and provider, its purpose and function, what data it processes, what outputs it generates, who is affected by those outputs, and which department uses it.
Most organisations significantly underestimate the number of AI systems they operate. A thorough inventory typically reveals two to five times more AI systems than initially expected. Be systematic and inclusive — it is far better to over-include and then narrow down than to miss a system that turns out to be high-risk.
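To keep the inventory systematic, it helps to capture each system as a structured record. Below is a minimal sketch in Python; the record fields mirror the list above, and every name is illustrative rather than mandated by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the organisation-wide AI inventory (illustrative fields)."""
    name: str                  # system name
    provider: str              # internal team or external vendor
    purpose: str               # what the system does and why
    data_processed: list[str]  # categories of input data
    outputs: list[str]         # predictions, content, recommendations, decisions
    affected_persons: str      # who is affected by the outputs
    department: str            # which department uses it

inventory = [
    AISystemRecord(
        name="CV screening assistant",
        provider="external HR-SaaS vendor",
        purpose="rank incoming job applications",
        data_processed=["CVs", "cover letters"],
        outputs=["candidate ranking"],
        affected_persons="job applicants",
        department="HR",
    ),
]
```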
Step 2: Classify Risk Levels
With your inventory complete, classify each AI system according to the AI Act's risk categories.
Screen for Prohibited Practices (Article 5)
First, check whether any of your AI systems fall under the eight prohibited practices. These include subliminal manipulation, exploitation of vulnerable groups, social scoring, individual predictive policing based solely on profiling, untargeted scraping of facial images, emotion recognition in workplaces and educational institutions (with medical and safety exceptions), biometric categorisation for sensitive attributes, and real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions).
If any system is prohibited, it must be discontinued or fundamentally redesigned. There is no transition period — prohibited practices have been enforceable since February 2, 2025.
Assess for High-Risk Classification (Article 6, Annexes I and III)
Check each system against two pathways to high-risk classification:
Annex I pathway — Is the AI system a safety component of, or is it itself, a product regulated under EU harmonisation legislation? This includes medical devices, machinery, toys, lifts, equipment for explosive atmospheres, radio equipment, pressure equipment, recreational craft, cableway installations, appliances burning gaseous fuels, personal protective equipment, motor vehicles, civil aviation, marine equipment, and rail system interoperability.
Annex III pathway — Does the AI system fall into one of the eight high-risk domains? Biometrics, critical infrastructure, education and vocational training, employment, essential private and public services (including credit scoring and insurance), law enforcement, migration, asylum and border control, or the administration of justice and democratic processes.
Apply the exception — Under Article 6(3), an AI system listed in Annex III is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. This exception is narrow, never applies where the system performs profiling of natural persons, and must be carefully documented.
Identify Transparency Obligations (Article 50)
Certain systems carry transparency requirements regardless of risk level: systems that interact directly with people (such as chatbots), systems that generate synthetic audio, image, video, or text content (including deepfakes), emotion recognition systems, and biometric categorisation systems.
Classify Remaining Systems as Minimal Risk
Systems that do not fall into any of the above categories are minimal risk. No mandatory obligations apply, though voluntary codes of conduct are encouraged.
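As a rough screening aid, and not a substitute for legal analysis, the classification cascade above can be sketched as a simple decision function. The flag names are hypothetical inputs you would populate during screening interviews; note that in reality Article 50 transparency duties can also attach to high-risk systems.

```python
def classify_risk(system: dict) -> str:
    """Illustrative sketch of the Article 5 / Article 6 / Article 50 cascade."""
    if system.get("prohibited_practice"):            # Article 5 screen
        return "prohibited: discontinue or redesign"
    if system.get("annex_i_safety_component") or (
        system.get("annex_iii_domain") and not system.get("article_6_3_exception")
    ):
        return "high-risk: full requirements apply"  # Articles 8-15
    if system.get("transparency_trigger"):           # Article 50
        return "limited risk: transparency obligations"
    return "minimal risk: voluntary codes of conduct"

print(classify_risk({"annex_iii_domain": "employment"}))  # high-risk
```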
Step 3: Identify Your Role
Your obligations under the AI Act depend on which role you play for each AI system.
Provider — You develop an AI system (or have it developed on your behalf) and place it on the market or put it into service under your own name or trademark. Providers bear the heaviest obligations, including conformity assessment, technical documentation, and post-market monitoring.
Deployer — You use an AI system under your authority. Deployers must use systems according to instructions, ensure human oversight, monitor performance, and report incidents.
Importer — You place an AI system from a non-EU provider on the EU market. Importers must verify that the provider has completed conformity assessment and prepared required documentation.
Distributor — You make an AI system available on the EU market without being a provider or importer. Distributors must verify that the system bears the required CE marking and is accompanied by required documentation.
Note: You can be both a provider and a deployer simultaneously. For instance, if you build an AI system for internal use, you are the provider (you developed it) and the deployer (you use it). If you build an AI system and sell it, you are the provider. If a customer then uses it, they are the deployer.
If you substantially modify an AI system that was built by another provider — beyond what the original provider intended — you may become the new provider of that system under Article 25. This carries the full set of provider obligations, including conformity assessment.
Step 4: Conduct Gap Analysis
For each high-risk AI system, compare your current practices against the AI Act's requirements (Articles 8-15). Identify where you already comply and where gaps exist.
Key Areas to Assess
Risk management (Article 9) — Do you have a systematic risk management process for each high-risk AI system? Is it documented, iterative, and maintained throughout the system's lifecycle?
Data governance (Article 10) — Have you documented your training, validation, and testing datasets? Have you assessed them for biases? Are data quality controls in place?
Technical documentation (Article 11) — Do you have comprehensive technical documentation that meets the requirements of Annex IV? Is it kept up to date?
Logging (Article 12) — Do your AI systems automatically log their operations? Do logs capture the information required by the regulation?
Transparency (Article 13) — Can deployers understand and appropriately use your system's outputs? Have you provided adequate instructions for use?
Human oversight (Article 14) — Is the system designed for effective human oversight? Can humans understand, interpret, override, and halt the system?
Accuracy and robustness (Article 15) — Have you declared accuracy levels? Is the system robust against errors and adversarial manipulation?
Document each gap with its severity, the effort required to close it, and a proposed timeline.
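A gap register can be as simple as one structured record per finding. A minimal sketch, with illustrative field names:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class ComplianceGap:
    system: str        # which high-risk AI system
    requirement: str   # e.g. "Article 12: logging"
    finding: str       # what is missing today
    severity: Severity
    effort_days: int   # rough remediation effort
    target_date: str   # proposed closure date (ISO format)

gap = ComplianceGap(
    system="credit scoring model",
    requirement="Article 12: logging",
    finding="no automatic logging of inference inputs and outputs",
    severity=Severity.HIGH,
    effort_days=20,
    target_date="2026-03-31",
)
```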
Automate your compliance documentation
Ctrl AI generates execution traces, trust-tagged outputs, and audit-ready documentation for every AI decision — closing the gap between your current practices and what the EU AI Act requires.
Learn About Ctrl AI
Step 5: Implement Required Controls
Based on your gap analysis, implement the necessary technical and organisational controls.
Risk Management System (Article 9)
Establish a risk management system that:
- Identifies and analyses known and foreseeable risks to health, safety, and fundamental rights
- Estimates and evaluates risks under both intended use and reasonably foreseeable misuse
- Adopts appropriate management measures — technical design choices, training processes, testing protocols, operational constraints
- Tests the system with appropriate data to identify the most effective risk mitigation measures
- Documents everything and iterates as new risks emerge or circumstances change
Data Governance (Article 10)
Implement data governance practices ensuring that:
- Training, validation, and testing data are subject to appropriate governance and management practices
- Data collection processes, data sources, and the original purpose of collection are documented
- Datasets are examined for biases, errors, gaps, and relevance to the intended purpose
- GDPR requirements are met concurrently wherever personal data is processed
- Bias detection and mitigation are performed, with special attention to protected characteristics (see the sketch below)
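One concrete way to start the bias examination is a selection-rate disparity check across a protected attribute. The sketch below compares positive-outcome rates between groups; the 0.8 threshold is a common rule of thumb from fairness practice, not an AI Act requirement.

```python
def selection_rate_ratios(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """Rate of positive (1) outcomes per group, relative to the best-off group."""
    rates = {group: sum(vals) / len(vals) for group, vals in outcomes.items() if vals}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

ratios = selection_rate_ratios({
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% positive
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% positive
})
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # {'group_b': 0.5}
```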
Technical Documentation (Article 11)
Prepare technical documentation containing all elements specified in Annex IV:
- General description of the AI system
- Detailed description of the elements and development process
- Monitoring, functioning, and control of the system
- Information about training, testing, and validation data
- Performance metrics and accuracy information
- Description of risk management measures
- Changes made throughout the system's lifecycle
Logging Capabilities (Article 12)
Implement automatic logging that captures:
- The period and duration of each use
- Input data or reference databases used
- Results and outputs generated
- Identification of persons involved in human oversight
- Any anomalies or incidents during operation
Define appropriate retention periods: at least six months (Article 19), or longer where other Union or national law requires it.
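A minimal sketch of what one such log record might contain, using only the Python standard library. The field names map to the bullets above and are illustrative; Article 12 and your system's documentation define the authoritative content.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_system_audit")

def log_use(input_ref: str, output: str, overseer: str, anomaly: str = "") -> None:
    """Emit one audit record per system use (illustrative Article 12-style fields)."""
    audit_log.info(json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),  # period of use
        "input_reference": input_ref,   # input data or reference database used
        "output": output,               # result generated
        "human_overseer": overseer,     # person involved in oversight
        "anomaly": anomaly,             # incidents during operation, if any
    }))

RETENTION_DAYS = 183  # at least six months (Article 19); longer if other law requires
```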
Transparency Provisions (Article 13)
Design the system to be sufficiently transparent and provide instructions for use covering:
- The system's intended purpose and the decisions it supports
- Accuracy levels, including known variations across population groups
- Known limitations and foreseeable misuse scenarios
- Human oversight measures and how to exercise them
- Input data specifications and how data quality affects outputs
Human Oversight Design (Article 14)
Implement human oversight mechanisms appropriate to the system's risk level:
- Human-in-the-loop (human approves each decision)
- Human-on-the-loop (human monitors and can intervene)
- Human-in-command (human can override or shut down the system)
Ensure that overseer interfaces are designed for comprehension, not just notification. The human must genuinely understand the system's output and have the practical ability to act on it.
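A human-in-the-loop arrangement can be sketched as an approval gate placed in front of the system's effect. Everything below is illustrative: `approve` stands in for whatever review interface presents the recommendation and its context to a trained overseer.

```python
from typing import Callable

def apply_decision(recommendation: str, confidence: float,
                   approve: Callable[[str, float], str]) -> str:
    """Gate an AI recommendation behind human review (human-in-the-loop sketch)."""
    verdict = approve(recommendation, confidence)  # overseer sees output + context
    if verdict == "accept":
        return recommendation                # human confirms the AI's output
    if verdict == "override":
        return "routed to manual handling"   # human substitutes their own judgment
    return "halted pending review"           # human stops the system entirely
```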
Step 6: Prepare for Conformity Assessment
High-risk AI systems must undergo conformity assessment before being placed on the market or put into service.
Internal Conformity Assessment (Annex VI)
Most high-risk AI systems can be assessed through internal conformity assessment, where the provider evaluates compliance with the regulation's requirements using their own quality management system. This requires:
- A quality management system meeting Article 17 requirements
- Technical documentation meeting Annex IV requirements
- Documented evidence of compliance with each applicable requirement (Articles 8-15)
- EU Declaration of Conformity (Article 47)
Third-Party Conformity Assessment (Annex VII)
Certain high-risk AI systems require assessment by a notified body (an independent third-party assessment organisation designated by a Member State). Under Article 43, this currently applies to the biometric systems in Annex III, point 1, where harmonised standards or common specifications have not been applied in full; high-risk systems covered by Annex I follow the conformity assessment procedures of their sectoral legislation. The notified-body route involves:
- Submitting an application to a notified body
- Providing access to technical documentation, quality management system, and the AI system itself
- Examination of the documentation and testing of the system by the notified body, which issues a certificate if the requirements are met
CE Marking and Registration
After successful conformity assessment:
- Affix the CE marking to the AI system (Article 48)
- Register the system in the EU database (Article 49)
- Prepare and sign the EU Declaration of Conformity
Step 7: Set Up Monitoring and Post-Market Surveillance
Compliance does not end at market entry. Providers must establish a post-market monitoring system (Article 72).
Post-Market Monitoring System
Design and document a system to:
- Actively and systematically collect, analyse, and evaluate data on performance and compliance throughout the system's lifecycle
- Identify emerging risks or compliance issues
- Feed findings back into the risk management system
- Trigger corrective actions when necessary
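The feedback loop can be sketched as a periodic check that compares live performance against the accuracy declared under Article 15 and opens a corrective action when drift exceeds a tolerance. Thresholds and names below are illustrative.

```python
def check_drift(declared_accuracy: float, observed_accuracy: float,
                tolerance: float = 0.05) -> str | None:
    """Flag a corrective action if live accuracy drifts below the declared level."""
    if observed_accuracy < declared_accuracy - tolerance:
        return (f"corrective action: observed accuracy {observed_accuracy:.1%} is "
                f"below declared {declared_accuracy:.1%}; update risk management")
    return None

alert = check_drift(declared_accuracy=0.92, observed_accuracy=0.85)  # triggers
```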
Serious Incident Reporting (Article 73)
Establish a process to report serious incidents to the relevant market surveillance authority. A serious incident is one that directly or indirectly leads to, or is realistically likely to lead to:
- The death of a person or serious harm to a person's health
- Serious and irreversible disruption of the management or operation of critical infrastructure
- Infringement of obligations under Union law intended to protect fundamental rights
- Serious harm to property or the environment
Reports must be submitted within strict timeframes: no later than 15 days after becoming aware of the incident, shortened to 10 days where a person has died and to 2 days in the event of a widespread infringement or serious and irreversible disruption of critical infrastructure.
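Because the reporting window depends on the incident type, it is worth encoding the deadlines directly in your incident process. A minimal sketch of the Article 73 windows described above (the category names are illustrative):

```python
from datetime import date, timedelta

# Maximum days after becoming aware of the incident (Article 73).
REPORTING_WINDOW_DAYS = {
    "death": 10,
    "critical_infrastructure_disruption": 2,
    "widespread_infringement": 2,
    "other_serious_incident": 15,
}

def reporting_deadline(aware_on: date, incident_type: str) -> date:
    """Latest date to notify the market surveillance authority."""
    return aware_on + timedelta(days=REPORTING_WINDOW_DAYS[incident_type])

print(reporting_deadline(date(2026, 9, 1), "death"))  # 2026-09-11
```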
Deployer Monitoring Obligations
Deployers must monitor AI system performance based on the provider's instructions, inform the provider and relevant authority of serious incidents or malfunctions, and suspend use if the system presents an unexpected risk.
Step 8: Train Your Team (AI Literacy)
Article 4 of the AI Act requires providers and deployers to ensure a sufficient level of AI literacy among their staff. This obligation has been in force since February 2, 2025.
What AI Literacy Covers
- Understanding how AI systems work at a level appropriate to each role
- Knowing the organisation's obligations under the AI Act
- Recognising risks, limitations, and potential biases of AI systems
- Understanding when and how to exercise human oversight
- Awareness of fundamental rights implications
Implementation Approach
- Assess current literacy levels across the organisation
- Define role-specific learning objectives (general staff, AI users, developers, leadership)
- Deliver training through appropriate channels (workshops, e-learning, expert sessions)
- Test comprehension and document training records
- Update training as systems and regulations evolve
AI literacy is not a one-time training event. It is an ongoing obligation that must be maintained as new AI systems are deployed, existing systems are updated, and regulatory guidance matures.
Step 9: Document Everything
Documentation is the backbone of AI Act compliance. Without it, you cannot demonstrate compliance regardless of how good your actual practices are.
Key Documentation Artefacts
- AI system inventory — complete register of all AI systems with classifications and roles
- Risk classifications — documented rationale for each system's risk classification, including any reliance on the Article 6(3) exception
- Risk management system records — risk assessments, mitigation measures, testing results, and iteration history
- Data governance records — dataset documentation, bias assessments, data quality reports
- Technical documentation — comprehensive Annex IV documentation for each high-risk system
- Conformity assessment records — evidence of assessment, EU declarations of conformity, certificates from notified bodies
- Logging and monitoring records — system logs, performance monitoring data, incident reports
- Training records — AI literacy training plans, attendance, assessment results
- Quality management system documentation — policies, procedures, and audit records
- Post-market monitoring records — surveillance data, analysis reports, corrective actions
Documentation Best Practices
Keep it current. Documentation that does not reflect the current state of your AI systems is worse than useless — it can demonstrate non-compliance.
Make it accessible. Documentation must be available to market surveillance authorities upon request. Organise it so that relevant information can be located and provided promptly.
Version control. Maintain version histories for all documentation, particularly technical documentation and risk assessments, to demonstrate how your compliance programme has evolved.
Proportionality. The depth and complexity of documentation should be proportionate to the risk level and complexity of the AI system. A minimal-risk chatbot does not need the same documentation as a high-risk credit scoring system.
Step 10: Plan for Ongoing Compliance
AI Act compliance is not a project with a completion date — it is an ongoing programme.
Continuous Activities
Monitor regulatory developments. The AI Act framework will continue to evolve through implementing acts, delegated acts, harmonised standards, codes of practice, and regulatory guidance. Assign responsibility for tracking and integrating these developments.
Review and update risk assessments. As your AI systems evolve, as deployment contexts change, and as new risks emerge, your risk assessments must be updated.
Audit regularly. Conduct periodic internal audits of your AI systems against the regulation's requirements. Identify and address compliance drift before regulators do.
Manage the supply chain. If you rely on third-party AI systems, monitor your providers' compliance status. Request updated documentation, test reports, and certificates as they become available.
Adapt to enforcement. As national authorities begin enforcement actions and precedents emerge, adjust your compliance programme accordingly. Early enforcement actions will provide practical guidance on regulatory expectations.
Plan for system changes. Any substantial modification to a high-risk AI system may trigger the need for a new conformity assessment (Article 43(4)). Build compliance review into your change management process.
Simplify ongoing AI Act compliance
Ctrl AI provides continuous, automated documentation of AI decision-making — execution traces, trust-tagged outputs, and audit-ready records that keep your compliance programme current.
Learn About Ctrl AI
Compliance Timeline Overview
- February 2, 2025: prohibited practices banned; AI literacy obligations in force
- August 2, 2025: general-purpose AI model obligations and governance provisions apply
- August 2, 2026: high-risk requirements for Annex III systems and most remaining obligations apply
- August 2, 2027: end of the extended transition for high-risk AI embedded in products regulated under Annex I
Where to Start If You Are Behind
If your organisation has not yet begun its AI Act compliance programme, here is a pragmatic prioritisation:
Immediate (this month):
- Conduct a rapid AI system inventory
- Screen for prohibited practices — discontinue any that qualify
- Launch a basic AI literacy awareness programme
Short-term (next quarter):
- Complete risk classification for all inventoried systems
- Identify your role (provider/deployer) for each system
- Begin gap analysis for high-risk systems
Medium-term (next six months):
- Implement required controls for high-risk systems (risk management, data governance, documentation, human oversight, logging)
- Establish quality management system
- Prepare for conformity assessment
Before August 2026:
- Complete conformity assessments for all high-risk systems
- Establish post-market monitoring systems
- Ensure all documentation is complete and current
- Test your serious incident reporting process
The organisations that start now, even if imperfectly, will be in a far stronger position than those that wait for perfect clarity. The regulation is law. The deadlines are fixed. The only variable is how well-prepared your organisation will be when enforcement begins.
Make Your AI Auditable and Compliant
Ctrl AI provides expert-verified reasoning units with full execution traces — the infrastructure you need for EU AI Act compliance.
Explore Ctrl AI