EU AI Act FAQ: Answers to 30+ Common Questions
Practical answers to the most common EU AI Act questions — scope, timelines, fines, classification, GPAI, GDPR overlap, and what companies need to do to comply with Regulation 2024/1689.
The EU AI Act (Regulation 2024/1689) is the most comprehensive AI law in the world, and it raises a lot of practical questions for the companies that have to comply with it. This FAQ collects the most frequently asked questions about the regulation — from basic scope and timelines to specific obligations for providers, deployers, and general-purpose AI model developers — and gives concise, citation-backed answers.
If you need deeper material on a topic, every answer links to a dedicated article.
Scope and Applicability
Does the EU AI Act apply to my company?
Almost certainly yes if any of the following is true:
- You place an AI system on the EU market, regardless of where you are established (Article 2(1)(a))
- You put an AI system into service in the EU (Article 2(1)(a))
- You are a deployer of an AI system established or located in the EU (Article 2(1)(b))
- You are a provider or deployer outside the EU whose AI system's output is used in the EU (Article 2(1)(c))
- You are an importer or distributor of AI systems that end up on the EU market (Article 2(1)(d)–(e))
The "output used in the EU" trigger is broad. A US company running an AI hiring tool that screens candidates in Germany falls within the regulation even if it has no EU office.
Are there any exemptions from the EU AI Act?
Article 2 lists several specific exemptions:
- National security and defence: AI systems used exclusively for military, defence, or national-security purposes are excluded (Article 2(3))
- Research and development: AI systems developed solely for scientific research and development are excluded, but only until they are placed on the market or put into service (Article 2(6))
- Personal non-professional activities: AI used by individuals for purely personal activities is excluded (Article 2(10))
- Free and open-source AI components: AI components released under free and open-source licences are partially exempt, except when they are placed on the market or put into service as high-risk systems, prohibited practices, or limited-risk systems with transparency obligations (Article 2(12))
There is no exemption for small companies. SMEs and startups are fully subject to the regulation, although Article 99(6) provides that the lower of two fine amounts (absolute or percentage) applies to them.
Does the EU AI Act apply to AI systems already on the market before August 2024?
Mostly yes, but with transitional periods:
- GPAI models placed on the market before 2 August 2025 have until 2 August 2027 to comply
- High-risk AI systems placed on the market before 2 August 2026 generally do not have to comply with the regulation, unless they undergo significant changes in their design after that date (Article 111)
- Operators of high-risk AI systems intended for use by public authorities must bring legacy systems into compliance by 2 August 2030 (Article 111(2))
In practice, organisations should not rely on grandfathering. Most legacy systems are updated frequently enough that they will trigger compliance obligations at some point.
Timelines and Deadlines
When does the EU AI Act take effect?
The regulation follows a phased timeline:
- 1 August 2024: Entry into force
- 2 February 2025: Prohibitions on unacceptable-risk practices apply
- 2 August 2025: GPAI obligations, governance provisions, and penalty provisions apply
- 2 August 2026: Most provisions apply, including all standalone high-risk AI system requirements under Annex III
- 2 August 2027: Full application, including high-risk systems that are safety components of products under Annex I
How much time do I have to prepare for August 2026?
If you are reading this in May 2026, you have less than three months before the bulk of the regulation applies. For a high-risk AI system, that timeline is tight. A minimum-viable compliance programme — risk management, technical documentation, data governance, human oversight design, conformity assessment — takes most organisations six to twelve months to set up.
If you have not yet started, prioritise: (1) inventorying your AI systems, (2) classifying each one, (3) addressing any prohibited practices immediately, and (4) building a compliance roadmap for high-risk systems. The compliance checklist for CTOs and CIOs provides a practical starting point.
Risk Classification
How do I know if my AI system is high-risk?
A system is high-risk if it meets either of two tests:
- Annex I test: The system is a safety component of, or itself constitutes, a product covered by Union harmonisation legislation listed in Annex I (medical devices, machinery, toys, automotive, aviation, marine, etc.) and that product is required to undergo a third-party conformity assessment.
- Annex III test: The system is a standalone AI system used in one of eight sensitive areas: biometrics; critical infrastructure; education and vocational training; employment, workers management, and access to self-employment; access to essential private and public services; law enforcement; migration, asylum, and border control; or administration of justice and democratic processes.
Article 6(3) introduced an important carve-out: even if a system would otherwise qualify under Annex III, it is not high-risk if it does not pose a significant risk of harm — for example, when it performs a narrow procedural task, improves the result of a previously completed human activity, or detects decision-making patterns without replacing human decision-making.
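The two tests and the Article 6(3) carve-out can be sketched as a simple decision helper. This is an illustrative simplification only: the field names and boolean inputs are hypothetical, and real classification requires legal analysis of the actual system, not a flag check.

```python
# Hypothetical sketch of the high-risk classification logic described above.
# All attribute names are illustrative, not official terminology.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    name: str
    annex_i_safety_component: bool       # safety component of an Annex I product
    third_party_assessment_required: bool
    annex_iii_area: Optional[str]        # e.g. "employment", "biometrics", or None
    narrow_procedural_task_only: bool    # one of the Article 6(3) carve-out conditions

def is_high_risk(system: AISystem) -> bool:
    # Annex I test: safety component of a product that itself needs
    # third-party conformity assessment
    if system.annex_i_safety_component and system.third_party_assessment_required:
        return True
    # Annex III test, subject to the Article 6(3) significant-risk carve-out
    if system.annex_iii_area is not None and not system.narrow_procedural_task_only:
        return True
    return False

hiring_tool = AISystem("cv-screener", False, False, "employment", False)
print(is_high_risk(hiring_tool))  # True: Annex III, point 4 (employment)
```

A real assessment would also have to check the other Article 6(3) conditions (improving a completed human activity, detecting decision patterns without replacing human judgment), which are collapsed into a single flag here.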
Are recommendation systems high-risk?
In most cases, no. Standard product, music, or content recommendation systems are typically classified as minimal-risk or limited-risk. They become high-risk only in specific scenarios — for example, when used to allocate access to essential services, or when integrated into a high-risk use case like education or employment.
Note that very large online platforms (VLOPs) and very large online search engines (VLOSEs) face additional, separate obligations under the Digital Services Act for their recommender systems, including transparency and user-control requirements.
Are chatbots high-risk?
Chatbots are usually limited-risk, not high-risk. Article 50(1) requires that natural persons interacting with a chatbot be informed that they are dealing with an AI, unless this is obvious from context. A chatbot becomes high-risk only when it is deployed in a high-risk use case — for instance, a chatbot that screens job applications would fall under Annex III, point 4 (employment) and be subject to the full high-risk regime.
Specific Obligations
What documentation do I need for a high-risk AI system?
Article 11 and Annex IV specify the technical documentation requirements. The documentation must include, among other elements:
- A general description of the AI system, its intended purpose, and previous versions
- A detailed description of the system's components, including the algorithms, datasets, training methodology, and model design choices
- Information about the data, including provenance, scope, and characteristics
- A risk management plan compliant with Article 9
- A description of the human oversight measures
- Performance metrics and accuracy specifications
- Cybersecurity measures
- A copy of the EU declaration of conformity
The documentation must be kept up to date for the lifetime of the system and made available to national competent authorities on request for at least ten years after the system is placed on the market.
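One simple way to track the elements above is a completeness checklist. The item names below paraphrase this article's list, not the official Annex IV wording, and the whole structure is a hypothetical sketch.

```python
# Hypothetical checklist for the Annex IV documentation elements listed above.
# Item names paraphrase this article, not the official regulatory text.
ANNEX_IV_ITEMS = [
    "general description and intended purpose",
    "system components, algorithms, and training methodology",
    "data provenance, scope, and characteristics",
    "risk management plan (Article 9)",
    "human oversight measures",
    "performance metrics and accuracy specifications",
    "cybersecurity measures",
    "EU declaration of conformity",
]

def missing_items(completed: set[str]) -> list[str]:
    """Return the documentation elements not yet completed."""
    return [item for item in ANNEX_IV_ITEMS if item not in completed]

done = {"general description and intended purpose", "cybersecurity measures"}
print(len(missing_items(done)))  # 6 items still outstanding
```

In practice each item would link to evidence (documents, test reports, dataset descriptions) rather than a bare completion flag.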
What is the conformity assessment procedure?
Conformity assessment is the procedure by which a provider verifies that a high-risk AI system meets the requirements in Articles 8–15. For most Annex III high-risk systems, the procedure is internal control (Annex VI) — the provider performs the assessment itself. For certain biometric systems and for high-risk AI systems that are part of products under Annex I, third-party assessment by a notified body is required.
After successful conformity assessment, the provider issues an EU declaration of conformity, affixes the CE marking, and (for Annex III systems) registers the system in the EU database before placing it on the market.
Do I need to appoint a person responsible for AI compliance?
The regulation does not explicitly require a "Chief AI Officer," but it imposes a number of obligations that typically require named accountability:
- Article 4 requires providers and deployers to ensure their staff have sufficient AI literacy
- Article 22 requires non-EU providers to designate an authorised representative in the EU
- Article 26 imposes deployer obligations that need a clear owner
- Article 17 requires providers to establish a quality management system
In practice, most organisations subject to the AI Act create a designated AI compliance function, often combined with the existing data protection officer (DPO) role under the GDPR.
General-Purpose AI (GPAI)
What is a general-purpose AI model?
Article 3(63) defines a GPAI model as one that displays significant generality, is capable of competently performing a wide range of distinct tasks regardless of how it is placed on the market, and that can be integrated into a variety of downstream systems. Large language models, multimodal models, and other foundation models all fall within this definition.
What is a GPAI model with systemic risk?
Article 51(1)(a) presumes systemic risk if the cumulative training compute exceeds 10^25 floating-point operations (FLOPs). The Commission can also designate a model as having systemic risk based on other criteria — capability benchmarks, number of users, ecosystem impact, etc. — under Article 51(1)(b).
Models with systemic risk face additional obligations under Article 55: model evaluation including adversarial testing, systemic-risk mitigation, serious-incident reporting to the AI Office, and cybersecurity measures.
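The Article 51(1)(a) presumption is a bright-line compute threshold, which makes it easy to express as a check. The training-compute figures below are hypothetical; estimating cumulative training compute for a real model is itself a non-trivial exercise.

```python
# Sketch of the Article 51(1)(a) systemic-risk presumption: cumulative
# training compute greater than 10^25 FLOPs. Example figures are hypothetical.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    # The presumption applies when compute exceeds the threshold.
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3.8e25))  # True: above the threshold
print(presumed_systemic_risk(2.0e24))  # False: below the threshold
```

Note that clearing the compute threshold is not the end of the analysis: the Commission can still designate a below-threshold model as systemic-risk under Article 51(1)(b).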
Are open-source GPAI models exempt from the AI Act?
Partially. Article 53(2) exempts providers of free and open-source GPAI models from some transparency obligations — but only where the model is genuinely open (weights, architecture, and usage information all publicly available) and is not placed on the market for a fee. The exemption does not extend to GPAI models with systemic risk, which remain fully regulated regardless of licensing terms.
The open-source AI exemption is narrower than many developers assume; fine-tuning, hosting for a fee, or bundling with commercial services can all defeat the exemption.
GDPR and Other Regulations
How does the EU AI Act interact with the GDPR?
The two regulations apply in parallel. The GDPR governs the lawful processing of personal data; the AI Act governs the design and deployment of AI systems. When an AI system processes personal data — and most do — both regimes apply simultaneously.
Many concepts overlap but are not identical. Both require risk management; both require documentation; both impose transparency obligations. But the AI Act's risk management is system-centric, while the GDPR's is data-centric. The AI Act also adds requirements (such as data governance under Article 10 and human oversight under Article 14) that go beyond GDPR.
Does the EU AI Act override the Medical Devices Regulation?
No. For AI systems that qualify as medical devices, both regimes apply. Article 43(3) of the AI Act allows the conformity assessment to be integrated with the MDR/IVDR assessment, but the substantive requirements of both regulations must still be met. The result in practice is a coordinated but not consolidated compliance programme.
What is the relationship between the EU AI Act and the Digital Services Act?
The DSA imposes systemic-risk and content-moderation obligations on intermediary services. Where an AI system is used by a very large online platform (VLOP) or very large online search engine (VLOSE) to power its recommendation systems, content moderation, or ad targeting, both the DSA and the AI Act may apply.
Enforcement and Penalties
Who enforces the EU AI Act?
Enforcement is split between EU and national authorities:
- The European Commission, through the AI Office, has exclusive jurisdiction over GPAI model providers
- National market surveillance authorities in each Member State enforce the regulation against providers and deployers of AI systems within their territory
- National notifying authorities designate and supervise notified bodies that conduct conformity assessments
- The European Artificial Intelligence Board coordinates implementation across Member States
How much have companies actually been fined so far?
Information about specific fines is limited because public enforcement is just beginning. Article 5 prohibitions became enforceable in February 2025, GPAI obligations in August 2025, and most high-risk obligations are not yet in force. As of mid-2026, several Member States have published guidance and reportedly opened investigations, but most enforcement remains at the guidance and warning stage. Significant fines are expected from 2027 onward as the high-risk regime takes full effect.
Can I be fined under both the AI Act and the GDPR for the same conduct?
In principle, yes — if the conduct violates both regulations. The two regimes protect different interests (AI safety vs. data protection) and have different competent authorities. Member States and the Commission are expected to coordinate to avoid ne bis in idem (double-punishment) issues, but the regulation does not exclude parallel enforcement.
Practical Next Steps
What should I do first if my company has AI?
A practical four-step starting plan:
1. Inventory every AI system your organisation develops, deploys, or uses. Include third-party tools, embedded AI features in SaaS products, and internal models.
2. Classify each system against the risk framework. Pay special attention to any system touching biometrics, employment, education, essential services, law enforcement, or critical infrastructure.
3. Address prohibited practices immediately. They have been enforceable since February 2025; any in-scope system should be stopped, modified, or replaced.
4. Build a compliance roadmap for high-risk and GPAI systems, working backwards from the 2 August 2026 deadline.
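The inventory-and-classify steps above can be sketched as a minimal data structure with a coarse risk tag per entry. All entries, vendors, and category labels here are hypothetical examples.

```python
# Illustrative sketch of an AI-system inventory with a coarse risk tag.
# Entries and category labels are hypothetical examples only.
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    name: str
    vendor: str       # "internal" or a third-party supplier
    use_case: str
    risk_class: str   # "prohibited" | "high" | "limited" | "minimal" | "unclassified"

inventory = [
    InventoryEntry("resume-ranker", "internal", "employment screening", "high"),
    InventoryEntry("support-chatbot", "SaaS vendor", "customer support", "limited"),
    InventoryEntry("sales-forecast", "internal", "internal analytics", "minimal"),
]

# Surface anything needing immediate or near-term action (steps 3 and 4)
urgent = [e.name for e in inventory if e.risk_class in ("prohibited", "high")]
print(urgent)  # ['resume-ranker']
```

Even a spreadsheet with these four columns is a workable starting point; the key is that every AI touchpoint, including embedded SaaS features, appears as a row.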
Who can help me comply with the EU AI Act?
You will likely need a combination of legal counsel (for regulatory interpretation), in-house engineering and product teams (for technical implementation), and possibly a third-party platform that supports AI governance — for example by providing audit-ready documentation, execution traces, and trust-tagged outputs. The EU also encourages regulatory sandboxes (Article 57) that Member States must establish by August 2026 to support innovation under supervision.
Conclusion
The EU AI Act is broad, technical, and consequential — but it is also navigable. Most organisations subject to it can build a workable compliance programme by understanding the risk framework, mapping their AI systems against it, prioritising prohibited and high-risk obligations, and putting durable governance in place. The questions in this FAQ are the ones companies ask most often; the linked deep-dive articles answer them in much more depth.
If a specific question is not covered here, our complete EU AI Act overview is the best starting point for the full regulatory picture.
Related Articles
EU AI Act: Complete Overview of Europe's AI Regulation
Everything you need to know about the EU AI Act (Regulation 2024/1689) — the world's first comprehensive AI law covering risk classification, compliance requirements, and enforcement timeline.
EU AI Act Glossary: 50+ Key Terms Defined
Plain-language definitions of the most important EU AI Act terms — AI system, provider, deployer, GPAI, high-risk, conformity assessment, and more, with article references.
EU AI Act Penalties: Fines Up to €35 Million Explained
Complete breakdown of EU AI Act penalties and fines — from €35 million for prohibited practices to €7.5 million for incorrect information. Understand the enforcement regime and how to avoid penalties.