
EU AI Act FAQ: Answers to 30+ Common Questions

Practical answers to the most common EU AI Act questions — scope, timelines, fines, classification, GPAI, GDPR overlap, and what companies need to do to comply with Regulation 2024/1689.

May 12, 2026 · 12 min read

The EU AI Act (Regulation 2024/1689) is the most comprehensive AI law in the world, and it raises a lot of practical questions for the companies that have to comply with it. This FAQ collects the most frequently asked questions about the regulation — from basic scope and timelines to specific obligations for providers, deployers, and general-purpose AI model developers — and gives concise, citation-backed answers.

If you need deeper material on a topic, every answer links to a dedicated article.

Scope and Applicability

Does the EU AI Act apply to my company?

Almost certainly yes if any of the following is true:

  • You place an AI system on the EU market, regardless of where you are established (Article 2(1)(a))
  • You put an AI system into service in the EU (Article 2(1)(a))
  • You are a deployer of an AI system established or located in the EU (Article 2(1)(b))
  • You are a provider or deployer outside the EU whose AI system's output is used in the EU (Article 2(1)(c))
  • You are an importer or distributor of AI systems that end up on the EU market (Article 2(1)(d))

The "output used in the EU" trigger is broad. A US company running an AI hiring tool that screens candidates in Germany falls within the regulation even if it has no EU office.

Are there any exemptions from the EU AI Act?

Article 2 lists several specific exemptions:

  • National security and defence: AI systems used exclusively for military, defence, or national-security purposes are excluded (Article 2(3))
  • Research and development: AI systems developed solely for scientific research and development are excluded, but only until they are placed on the market or put into service (Article 2(6))
  • Personal non-professional activities: AI used by individuals for purely personal activities is excluded (Article 2(10))
  • Free and open-source AI components: AI components released under free and open-source licences are partially exempt, except when they are placed on the market or put into service as high-risk systems, prohibited practices, or limited-risk systems with transparency obligations (Article 2(12))

There is no exemption for small companies. SMEs and startups are fully subject to the regulation, although Article 99(6) provides that the lower of two fine amounts (absolute or percentage) applies to them.

Does the EU AI Act apply to AI systems already on the market before August 2024?

Mostly yes, but with transitional periods:

  • GPAI models placed on the market before 2 August 2025 have until 2 August 2027 to comply
  • High-risk AI systems placed on the market before 2 August 2026 generally do not have to comply with the regulation, unless they undergo significant changes in their design after that date (Article 111)
  • Operators of high-risk AI systems intended for use by public authorities must bring legacy systems into compliance by 2 August 2030 (Article 111(2))

In practice, organisations should not rely on grandfathering. Most legacy systems are updated frequently enough that they will trigger compliance obligations at some point.

Timelines and Deadlines

When does the EU AI Act take effect?

The regulation follows a phased timeline:

  • 1 August 2024: entry into force, twenty days after publication in the Official Journal
  • 2 February 2025: prohibited practices (Article 5) and AI-literacy obligations (Article 4) apply
  • 2 August 2025: obligations for GPAI model providers, governance provisions, and most penalty rules apply
  • 2 August 2026: the bulk of the regulation applies, including the full high-risk regime for standalone Annex III systems
  • 2 August 2027: rules for high-risk systems embedded in Annex I products apply, and legacy GPAI models must be brought into compliance

How much time do I have to prepare for August 2026?

If you are reading this in May 2026, you have less than three months before the bulk of the regulation applies. For a high-risk AI system, that timeline is tight. A minimum-viable compliance programme — risk management, technical documentation, data governance, human oversight design, conformity assessment — takes most organisations six to twelve months to set up.

If you have not yet started, prioritise: (1) inventorying your AI systems, (2) classifying each one, (3) addressing any prohibited practices immediately, and (4) building a compliance roadmap for high-risk systems. The compliance checklist for CTOs and CIOs provides a practical starting point.

Risk Classification

How do I know if my AI system is high-risk?

A system is high-risk if it meets either of two tests:

  1. Annex I test: The system is a safety component of, or itself constitutes, a product covered by Union harmonisation legislation listed in Annex I (medical devices, machinery, toys, automotive, aviation, marine, etc.) and that product is required to undergo a third-party conformity assessment.

  2. Annex III test: The system is a standalone AI system used in one of eight sensitive areas: biometrics; critical infrastructure; education and vocational training; employment, workers' management, and access to self-employment; access to essential private and public services; law enforcement; migration, asylum, and border control; or administration of justice and democratic processes.

Article 6(3) introduced an important carve-out: even if a system would otherwise qualify under Annex III, it is not high-risk if it does not pose a significant risk of harm — for example, when it performs a narrow procedural task, improves the result of a previously completed human activity, or detects decision-making patterns without replacing human decision-making.
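
To make the two tests concrete, here is a minimal classification sketch in Python. It is illustrative only, not legal advice: the field names and flags are our own simplifications of Articles 6–7, and note that the Article 6(3) carve-out never applies where the system profiles natural persons.

```python
from dataclasses import dataclass
from typing import Optional

# Annex III areas, abbreviated to illustrative tags (not the Act's wording).
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class AISystem:
    name: str
    is_annex_i_safety_component: bool     # safety component of an Annex I product
    requires_third_party_assessment: bool
    annex_iii_area: Optional[str]         # one of ANNEX_III_AREAS, or None
    narrow_procedural_task: bool          # Article 6(3) carve-out indicators
    improves_prior_human_work: bool
    detects_patterns_only: bool
    performs_profiling: bool = False

def is_high_risk(s: AISystem) -> bool:
    # Test 1 (Article 6(1)): safety component of an Annex I product that
    # must undergo third-party conformity assessment.
    if s.is_annex_i_safety_component and s.requires_third_party_assessment:
        return True
    # Test 2 (Article 6(2)): standalone system in an Annex III area.
    if s.annex_iii_area in ANNEX_III_AREAS:
        # Article 6(3): the carve-out never applies to profiling.
        if s.performs_profiling:
            return True
        carve_out = (s.narrow_procedural_task
                     or s.improves_prior_human_work
                     or s.detects_patterns_only)
        return not carve_out
    return False

cv_screener = AISystem("CV screener", False, False, "employment",
                       narrow_procedural_task=False,
                       improves_prior_human_work=False,
                       detects_patterns_only=False,
                       performs_profiling=True)
print(is_high_risk(cv_screener))  # True: employment area, profiling, no carve-out
```

A real classification needs legal review of the system's intended purpose, but even a rough decision function like this is useful for triaging a large inventory.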

Are recommendation systems high-risk?

In most cases, no. Standard product, music, or content recommendation systems are typically classified as minimal-risk or limited-risk. They become high-risk only in specific scenarios — for example, when used to allocate access to essential services, or when integrated into a high-risk use case like education or employment.

Note that very large online platforms (VLOPs) and very large online search engines (VLOSEs) face additional, separate obligations under the Digital Services Act for their recommender systems, including transparency and user-control requirements.

Are chatbots high-risk?

Chatbots are usually limited-risk, not high-risk. Article 50(1) requires that natural persons interacting with a chatbot be informed that they are dealing with an AI, unless this is obvious from context. A chatbot becomes high-risk only when it is deployed in a high-risk use case — for instance, a chatbot that screens job applications would fall under Annex III, point 4 (employment) and be subject to the full high-risk regime.

Need auditable AI for compliance?

Ctrl AI provides full execution traces, expert verification, and trust-tagged outputs for every AI decision.

Learn About Ctrl AI

Specific Obligations

What documentation do I need for a high-risk AI system?

Article 11 and Annex IV specify the technical documentation requirements. The documentation must include, among other elements:

  • A general description of the AI system, its intended purpose, and previous versions
  • A detailed description of the system's components, including the algorithms, datasets, training methodology, and model design choices
  • Information about the data, including provenance, scope, and characteristics
  • A risk management plan compliant with Article 9
  • A description of the human oversight measures
  • Performance metrics and accuracy specifications
  • Cybersecurity measures
  • A copy of the EU declaration of conformity

The documentation must be kept up to date for the lifetime of the system and made available to national competent authorities on request for at least ten years after the system is placed on the market or put into service.
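
Because the documentation must stay current, some teams track Annex IV coverage programmatically. The sketch below is a hypothetical starting point: the section keys are our own shorthand, not the regulation's headings.

```python
# Hypothetical shorthand for a simplified Annex IV outline.
ANNEX_IV_SECTIONS = [
    "general_description", "system_components", "data_information",
    "risk_management_plan", "human_oversight_measures",
    "performance_metrics", "cybersecurity_measures",
    "declaration_of_conformity",
]

def missing_documentation(doc: dict) -> list[str]:
    """Return the Annex IV sections that are absent or empty."""
    return [section for section in ANNEX_IV_SECTIONS if not doc.get(section)]

tech_doc = {
    "general_description": "CV screening model v3, intended purpose ...",
    "risk_management_plan": "see RMP-2026-04",
}
print(missing_documentation(tech_doc))
# ['system_components', 'data_information', 'human_oversight_measures', ...]
```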

What is the conformity assessment procedure?

Conformity assessment is the procedure by which a provider verifies that a high-risk AI system meets the requirements in Articles 8–15. For most Annex III high-risk systems, the procedure is internal control (Annex VI) — the provider performs the assessment itself. For certain biometric systems and for high-risk AI systems that are part of products under Annex I, third-party assessment by a notified body is required.

After successful conformity assessment, the provider issues an EU declaration of conformity, affixes the CE marking, and (for Annex III systems) registers the system in the EU database before placing it on the market.

Do I need to appoint a person responsible for AI compliance?

The regulation does not explicitly require a "Chief AI Officer," but it imposes a number of obligations that typically require named accountability:

  • Article 4 requires providers and deployers to ensure their staff have sufficient AI literacy
  • Article 22 requires non-EU providers to designate an authorised representative in the EU
  • Article 26 imposes deployer obligations that need a clear owner
  • Article 17 requires providers to establish a quality management system

In practice, most organisations subject to the AI Act create a designated AI compliance function, often combined with the existing data protection officer (DPO) role under the GDPR.

General-Purpose AI (GPAI)

What is a general-purpose AI model?

Article 3(63) defines a GPAI model as one that displays significant generality, is capable of competently performing a wide range of distinct tasks regardless of how it is placed on the market, and that can be integrated into a variety of downstream systems. Large language models, multimodal models, and other foundation models all fall within this definition.

What is a GPAI model with systemic risk?

Article 51(2) presumes systemic risk when the cumulative compute used to train a model exceeds 10^25 floating-point operations (FLOPs). The Commission can also designate a model as having systemic risk based on other criteria (capability benchmarks, number of users, ecosystem impact, and the like) under Article 51(1)(b).

Models with systemic risk face additional obligations under Article 55: model evaluation including adversarial testing, systemic-risk mitigation, serious-incident reporting to the AI Office, and cybersecurity measures.
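
For a rough sense of where the 10^25 FLOP line falls, a widely used heuristic estimates training compute for dense transformers as roughly 6 × parameters × training tokens. The sketch below applies that approximation; it is only an estimate, and the regulation counts the actual cumulative compute used for training, not an after-the-fact guess.

```python
# Back-of-the-envelope check against the Article 51 presumption threshold.
# Uses the common ~6 * N_params * N_tokens heuristic for dense transformers;
# this is an approximation, not the Act's measurement method.
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flop = estimated_training_flop(70e9, 15e12)
print(f"{flop:.1e} FLOP, presumed systemic risk: "
      f"{flop > SYSTEMIC_RISK_THRESHOLD_FLOP}")
# 6.3e+24 FLOP, presumed systemic risk: False
```

At this scale the model sits just under the presumption; doubling either the parameter count or the token budget would push it over 10^25.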

Are open-source GPAI models exempt from the AI Act?

Partially. Article 53(2) exempts providers of free and open-source GPAI models from some transparency obligations — but only where the model is genuinely open (weights, architecture, and usage information all publicly available) and is not placed on the market for a fee. The exemption does not extend to GPAI models with systemic risk, which remain fully regulated regardless of licensing terms.

The open-source AI exemption is narrower than many developers assume; fine-tuning, hosting for a fee, or bundling with commercial services can all defeat the exemption.

GDPR and Other Regulations

How does the EU AI Act interact with the GDPR?

The two regulations apply in parallel. The GDPR governs the lawful processing of personal data; the AI Act governs the design and deployment of AI systems. When an AI system processes personal data — and most do — both regimes apply simultaneously.

Many concepts overlap but are not identical. Both require risk management; both require documentation; both impose transparency obligations. But the AI Act's risk management is system-centric, while the GDPR's is data-centric. The AI Act also adds requirements (such as data governance under Article 10 and human oversight under Article 14) that go beyond the GDPR.

Does the EU AI Act override the Medical Devices Regulation?

No. For AI systems that qualify as medical devices, both regimes apply. Article 43(3) of the AI Act allows the conformity assessment to be integrated with the MDR/IVDR assessment, but the substantive requirements of both regulations must still be met. The result in practice is a coordinated but not consolidated compliance programme.

What is the relationship between the EU AI Act and the Digital Services Act?

The DSA imposes systemic-risk and content-moderation obligations on intermediary services. Where an AI system is used by a very large online platform (VLOP) or very large online search engine (VLOSE) to power its recommendation systems, content moderation, or ad targeting, both the DSA and the AI Act may apply.

Enforcement and Penalties

Who enforces the EU AI Act?

Enforcement is split between EU and national authorities:

  • The European Commission, through the AI Office, has exclusive jurisdiction over GPAI model providers
  • National market surveillance authorities in each Member State enforce the regulation against providers and deployers of AI systems within their territory
  • National notifying authorities designate and supervise notified bodies that conduct conformity assessments
  • The European Artificial Intelligence Board coordinates implementation across Member States

How much have companies actually been fined so far?

Information about specific fines is limited because public enforcement is just beginning. Article 5 prohibitions became enforceable in February 2025, GPAI obligations in August 2025, and most high-risk obligations are not yet in force. As of mid-2026, several Member States have published guidance and reportedly opened investigations, but most enforcement remains at the guidance and warning stage. Significant fines are expected from 2027 onward as the high-risk regime takes full effect.

Can I be fined under both the AI Act and the GDPR for the same conduct?

In principle, yes — if the conduct violates both regulations. The two regimes protect different interests (AI safety vs. data protection) and have different competent authorities. Member States and the Commission are expected to coordinate to avoid ne bis in idem (double-punishment) issues, but the regulation does not exclude parallel enforcement.

Practical Next Steps

What should I do first if my company has AI?

A practical four-step starting plan:

  1. Inventory every AI system your organisation develops, deploys, or uses. Include third-party tools, embedded AI features in SaaS products, and internal models.
  2. Classify each system against the risk framework. Pay special attention to any system touching biometrics, employment, education, essential services, law enforcement, or critical infrastructure.
  3. Address prohibited practices immediately. They have been enforceable since February 2025; any in-scope system should be stopped, modified, or replaced.
  4. Build a compliance roadmap for high-risk and GPAI systems, working backwards from the 2 August 2026 deadline.
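
If it helps to make steps 1 and 2 concrete, here is a minimal inventory-and-triage sketch in Python. The fields and tier labels are illustrative shorthand, not terms defined by the regulation.

```python
# A minimal, hypothetical inventory record for steps 1-2. The fields and
# tiers are illustrative shorthand, not terms from the regulation.
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    name: str
    owner: str     # accountable team or person
    source: str    # "in-house", "third-party", "saas-embedded"
    risk_tier: str # "prohibited", "high", "limited", "minimal", "tbd"

def triage(inventory: list[InventoryEntry]) -> list[InventoryEntry]:
    # Prohibited practices first (enforceable since February 2025),
    # then high-risk systems (deadline 2 August 2026), then the rest.
    order = {"prohibited": 0, "high": 1, "tbd": 2, "limited": 3, "minimal": 4}
    return sorted(inventory, key=lambda e: order[e.risk_tier])

systems = [
    InventoryEntry("CV screener", "HR Tech", "third-party", "high"),
    InventoryEntry("Support chatbot", "CX", "saas-embedded", "limited"),
    InventoryEntry("Demand forecaster", "Ops", "in-house", "minimal"),
]
for entry in triage(systems):
    print(entry.risk_tier, "-", entry.name)
```

Unclassified ("tbd") systems deliberately sort near the top: an unknown risk tier is itself a compliance risk.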

Who can help me comply with the EU AI Act?

You will likely need a combination of legal counsel (for regulatory interpretation), in-house engineering and product teams (for technical implementation), and possibly a third-party platform that supports AI governance — for example by providing audit-ready documentation, execution traces, and trust-tagged outputs. The EU also encourages regulatory sandboxes (Article 57) that Member States must establish by August 2026 to support innovation under supervision.

Conclusion

The EU AI Act is broad, technical, and consequential — but it is also navigable. Most organisations subject to it can build a workable compliance programme by understanding the risk framework, mapping their AI systems against it, prioritising prohibited and high-risk obligations, and putting durable governance in place. The questions in this FAQ are the ones companies ask most often; the linked deep-dive articles answer them in much more depth.

If a specific question is not covered here, our complete EU AI Act overview is the best starting point for the full regulatory picture.

Frequently Asked Questions

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It establishes harmonised rules for the development, market placement, deployment, and use of AI systems across the European Union, using a risk-based approach with four tiers: unacceptable, high, limited, and minimal risk.

When did the EU AI Act enter into force?

The EU AI Act entered into force on 1 August 2024, twenty days after its publication in the Official Journal on 12 July 2024. Different provisions apply at different dates between February 2025 and August 2027.

Does the EU AI Act apply to companies outside the European Union?

Yes. Article 2 establishes a broad territorial scope: the regulation applies to providers placing AI systems on the EU market regardless of where they are established, to deployers in the EU, and to providers and deployers outside the EU whenever the output produced by their AI system is used inside the Union.

What is the maximum fine under the EU AI Act?

Article 99 sets the maximum administrative fine at €35 million or 7% of total worldwide annual turnover, whichever is higher, for violations of the prohibited AI practices listed in Article 5. Lower tiers apply to other violations: €15 million / 3% for high-risk non-compliance and €7.5 million / 1% for incorrect information to authorities.

What are the four risk levels under the EU AI Act?

Unacceptable risk (prohibited under Article 5), high risk (Articles 6–7 and Annexes I and III), limited risk (transparency obligations under Article 50), and minimal risk (no specific obligations beyond existing law).

When do high-risk AI system obligations apply?

Most high-risk obligations apply from 2 August 2026, including all standalone high-risk systems listed in Annex III. High-risk systems that are safety components of products covered by existing EU harmonisation legislation under Annex I follow a longer timeline, with full application from 2 August 2027.

Does the EU AI Act apply to ChatGPT, Claude, and other large language models?

Yes. Large language models fall under the General-Purpose AI Model regime in Chapter V. All GPAI providers must maintain technical documentation, publish a summary of training content, and respect copyright law. Models trained with more than 10^25 FLOPs are presumed to have systemic risk and face additional obligations including model evaluations, adversarial testing, and incident reporting.

What is the difference between a provider and a deployer?

A provider (Article 3(3)) develops an AI system or GPAI model and places it on the market or puts it into service under its own name or trademark. A deployer (Article 3(4)) uses an AI system under its authority in the course of a professional activity. Providers bear most of the substantive obligations for high-risk systems; deployers have a narrower set focused on operating the system correctly, monitoring it, and (for public bodies and certain private deployers) conducting a fundamental rights impact assessment.

Is the EU AI Act the same as the GDPR?

No. The GDPR (Regulation 2016/679) governs the processing of personal data. The EU AI Act governs the design, market placement, and use of AI systems regardless of whether they process personal data. Many AI systems are subject to both regimes simultaneously: the AI Act governs the system's classification and safety, while the GDPR governs any personal-data processing.

Do I need to register my AI system with the EU?

Providers of high-risk AI systems listed in Annex III (with limited exceptions) must register the system in the EU database before placing it on the market or putting it into service, under Article 49. Deployers that are public authorities or Union institutions also have registration obligations. Lower-risk systems do not require registration.

Make Your AI Auditable and Compliant

Ctrl AI provides expert-verified reasoning units with full execution traces — the infrastructure you need for EU AI Act compliance.

Explore Ctrl AI
