EU AI Act Glossary: 50+ Key Terms Defined
Plain-language definitions of the most important EU AI Act terms — AI system, provider, deployer, GPAI, high-risk, conformity assessment, and more, with article references.
The EU AI Act introduces a substantial new vocabulary. Many terms — provider, deployer, substantial modification, intended purpose, general-purpose AI model — have precise legal meanings that differ from their everyday usage. Misreading a definition can mean misclassifying a system or missing an obligation.
This glossary collects the most important terms in the regulation. Definitions are paraphrased for clarity but are referenced back to Article 3 of Regulation (EU) 2024/1689 (the definitions article) or to the specific provision that introduces the term. For deeper context on any concept, follow the links to the dedicated articles.
A
AI System
Article 3(1). A machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The definition deliberately tracks the OECD definition to support international alignment.
AI Office
Article 64. A body within the European Commission with exclusive supervisory and enforcement powers over providers of GPAI models and a central role in coordinating the application of the regulation. The AI Office oversees evaluations of systemic-risk models and serves as the central point of contact for GPAI model providers.
AI Literacy
Article 3(56). Skills, knowledge, and understanding that allow providers, deployers, and affected persons, taking into account their respective rights and obligations, to make an informed deployment of AI systems and to gain awareness about the opportunities, risks, and possible harms that AI can cause. Article 4 requires providers and deployers to ensure a sufficient level of AI literacy among their staff.
Annex I
Lists the Union harmonisation legislation that triggers high-risk classification when an AI system is a safety component of, or itself constitutes, a product covered by that legislation. Examples include the Machinery Directive, the Medical Devices Regulation, the In Vitro Diagnostic Regulation, the Toy Safety Directive, and the Radio Equipment Directive.
Annex III
Lists eight categories of standalone AI systems classified as high-risk under Article 6(2): (1) biometrics; (2) critical infrastructure; (3) education and vocational training; (4) employment, workers' management, and access to self-employment; (5) access to and enjoyment of essential private and public services; (6) law enforcement; (7) migration, asylum, and border control; (8) administration of justice and democratic processes.
Annex IV
Lists the contents of the technical documentation a provider must prepare for a high-risk AI system. See technical documentation requirements.
Article 5
The provision listing prohibited AI practices, applicable since 2 February 2025. Violations attract the highest fines under the regulation: up to €35 million or 7% of worldwide annual turnover, whichever is higher.
Authorised Representative
Article 3(5) and Article 22. A natural or legal person established in the Union who, by written mandate, undertakes to act on behalf of a non-EU provider in relation to the obligations and procedures established by the regulation. Non-EU providers placing high-risk AI systems on the EU market must appoint one before doing so.
B
Biometric Categorisation
Article 3(40). Assigning natural persons to specific categories on the basis of their biometric data. When such categorisation is used to infer sensitive characteristics (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, sexual orientation), it is prohibited under Article 5(1)(g).
Biometric Data
Article 3(34), aligned with GDPR. Personal data resulting from specific technical processing relating to the physical, physiological, or behavioural characteristics of a natural person, allowing or confirming the unique identification of that person.
Biometric Identification
Article 3(35). The automated recognition of a person's identity by comparing their biometric data with biometric data of individuals stored in a database.
Biometric Verification
Article 3(36). The automated, one-to-one verification of a person's identity by comparing their biometric data with previously provided biometric data.
C
CE Marking
Article 48 requires providers of high-risk AI systems to affix the CE marking before placing the system on the market. The marking indicates that the provider declares the system to comply with applicable Union harmonisation legislation, including the AI Act. For AI systems integrated into products under Annex I, the CE marking covers both the AI Act and the underlying product legislation.
Common Specifications
Article 41. Technical specifications that the Commission may adopt by implementing act when harmonised standards do not exist or are inadequate. Compliance with common specifications provides a presumption of conformity with the relevant requirements.
Conformity Assessment
Article 3(20). The process of demonstrating whether the requirements set out in Chapter III, Section 2 of the regulation, relating to a high-risk AI system, have been fulfilled. See conformity assessment procedures.
Critical Infrastructure
Annex III, point 2 includes AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity. Such systems are classified as high-risk.
D
Deployer
Article 3(4). A natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. Deployers have obligations under Article 26, including operating the system in accordance with instructions for use, monitoring its operation, ensuring human oversight, and (for some deployers) conducting a fundamental rights impact assessment.
Distributor
Article 3(7). A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market. Article 24 imposes verification and documentation obligations on distributors.
E
Emotion Recognition System
Article 3(39). An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. Emotion recognition is prohibited in workplaces and educational institutions under Article 5(1)(f) (with limited exceptions for medical and safety purposes); elsewhere it is subject to Article 50 transparency obligations.
EU Database
Article 71. A central database operated by the Commission in which providers of high-risk AI systems listed in Annex III, as well as deployers that are public authorities or bodies, must register their systems (Article 49) before placing them on the market or putting them into service.
European Artificial Intelligence Board
Article 65. A body composed of representatives from each Member State that advises the Commission and facilitates consistent application of the regulation across the EU.
F
Foundation Model
Not formally defined in the regulation. Commonly used synonymously with general-purpose AI model (Article 3(63)).
Fundamental Rights Impact Assessment (FRIA)
Article 27. An assessment that certain deployers of high-risk AI systems must conduct before first use: bodies governed by public law, private entities providing public services, and deployers of the credit-scoring and insurance-pricing systems in points 5(b) and (c) of Annex III. Systems used as safety components of critical infrastructure (point 2 of Annex III) are exempt. The FRIA identifies the risks to fundamental rights posed by the deployment and the measures taken to mitigate them.
G
General-Purpose AI Model (GPAI)
Article 3(63). An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. See GPAI obligations.
General-Purpose AI System
Article 3(66). An AI system based on a GPAI model that has the capability to serve a variety of purposes, both for direct use and for integration in other AI systems.
GPAI Model with Systemic Risk
Article 51. A GPAI model with high-impact capabilities, presumed when the cumulative compute used for training exceeds 10^25 FLOPs, or designated as such by the Commission. Subject to additional obligations under Article 55: model evaluation, adversarial testing, serious-incident reporting, and cybersecurity protection.
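Because the Article 51 presumption is a numeric threshold, a back-of-the-envelope check is possible. The sketch below is illustrative only: it assumes the widely used approximation of roughly 6 FLOPs per parameter per training token for dense transformer models, which the regulation itself does not prescribe, and the model figures are hypothetical.

```python
# Illustrative sketch: comparing an estimated training-compute figure
# against the Article 51 presumption threshold of 10^25 FLOPs.
# The 6 * parameters * tokens rule of thumb is an assumption from ML
# practice, not a method defined in the regulation.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption

def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated cumulative training compute: {flops:.2e} FLOPs")
if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
    print("Presumed to have high-impact capabilities (Article 51).")
else:
    print("Below the 10^25 FLOPs presumption.")
```

The threshold is a presumption, not a bright line: the Commission can also designate a model as systemic-risk on other criteria, and providers may present arguments against the classification.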
H
Harmonised Standard
A European standard adopted under Regulation (EU) 1025/2012. Compliance with a harmonised standard provides a presumption of conformity with the corresponding legal requirements. The Commission has mandated CEN-CENELEC to develop harmonised standards supporting the AI Act.
High-Risk AI System
Articles 6 and 7. An AI system that falls into one of two categories: (a) a safety component of, or itself a product covered by, Union harmonisation legislation listed in Annex I and required to undergo third-party conformity assessment; or (b) a standalone AI system used in one of the areas listed in Annex III. Subject to the substantive requirements of Articles 8–15. See high-risk AI systems: complete requirements.
Human Oversight
Article 14. The requirement that high-risk AI systems be designed and developed to allow effective oversight by natural persons during the period in which the AI system is in use. See human oversight requirements.
I
Importer
Article 3(6). A natural or legal person located or established in the Union that places on the market an AI system bearing the name or trademark of a natural or legal person established outside the Union.
Intended Purpose
Article 3(12). The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.
L
Limited-Risk AI System
Not formally defined, but used colloquially to describe AI systems subject to transparency obligations under Article 50 without being classified as high-risk. Examples include chatbots, deepfake-generating systems, and AI-generated content tools.
M
Making Available on the Market
Article 3(10). Any supply of an AI system or a GPAI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.
Market Surveillance Authority
Article 3(26) and Article 74. A national authority designated by each Member State under Article 70 to monitor compliance with the regulation, investigate suspected violations, and impose corrective measures or penalties.
N
National Competent Authority
Article 3(48) and Article 70. The notifying authority and the market surveillance authority designated by each Member State.
Notified Body
Article 3(22) and Articles 28–39. A conformity assessment body designated by a Member State's notifying authority to conduct third-party conformity assessments of high-risk AI systems where required.
O
Operator
Article 3(8). A collective term covering providers, product manufacturers, deployers, authorised representatives, importers, and distributors.
P
Performance
Article 3(18). The ability of an AI system to achieve its intended purpose.
Placing on the Market
Article 3(9). The first making available of an AI system or a GPAI model on the Union market.
Post-Market Monitoring
Article 72. The obligation on providers of high-risk AI systems to collect, document, and analyse relevant data on the system's performance throughout its lifetime after it has been placed on the market or put into service.
Provider
Article 3(3). A natural or legal person, public authority, agency, or other body that develops an AI system or a GPAI model or that has an AI system or a GPAI model developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
Putting into Service
Article 3(11). The supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.
R
Real-Time Remote Biometric Identification
Article 3(42). A remote biometric identification system whereby the capturing of biometric data, the comparison, and the identification all occur without significant delay. Use in publicly accessible spaces for law enforcement is prohibited under Article 5(1)(h), subject to narrow exceptions.
Recital
Numbered explanatory paragraphs at the beginning of an EU regulation that set out the legislative reasoning. Recitals do not have binding effect by themselves but are used by courts and authorities to interpret the operative provisions. The AI Act has 180 recitals.
Regulatory Sandbox
Article 57. A controlled environment established by a Member State to facilitate the development, training, testing, and validation of innovative AI systems for a limited time before their placement on the market or putting into service, under regulatory supervision. Each Member State must establish at least one sandbox by 2 August 2026.
Remote Biometric Identification System
Article 3(41). An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person's biometric data with the biometric data contained in a reference database.
Risk
Article 3(2). The combination of the probability of an occurrence of harm and the severity of that harm.
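The Act defines risk as a combination of probability and severity but prescribes no formula for combining them. The sketch below is a minimal illustration using a conventional risk matrix; the scales, scores, and threshold are assumptions from general risk-management practice, not from the regulation.

```python
# Illustrative only: Article 3(2) defines risk as the combination of the
# probability of harm and its severity, but prescribes no scoring method.
# A conventional 5x5 risk matrix is one common way to operationalise it.

PROBABILITY = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}

def risk_score(probability: str, severity: str) -> int:
    """Combine probability and severity into a single ordinal score."""
    return PROBABILITY[probability] * SEVERITY[severity]

# Hypothetical entry in a high-risk system's Article 9 risk register.
score = risk_score("possible", "major")  # 3 * 4 = 12
action = "mitigate" if score >= 10 else "accept and monitor"
print(f"Risk score {score}: {action}")
```

A risk management system under Article 9 would apply an assessment like this iteratively, re-scoring each hazard after mitigation measures are in place.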
Risk Management System
Article 9. A continuous, iterative process planned and run throughout the entire lifecycle of a high-risk AI system, identifying, evaluating, and managing risks to health, safety, and fundamental rights.
S
Safety Component
Article 3(14). A component of a product or of an AI system that fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property.
Serious Incident
Article 3(49). An incident or malfunctioning of an AI system that directly or indirectly leads to (a) death or serious damage to health; (b) serious and irreversible disruption of management or operation of critical infrastructure; (c) infringement of obligations under Union law intended to protect fundamental rights; or (d) serious damage to property or environment. Providers of high-risk AI systems must report serious incidents under Article 73.
Subliminal Techniques
Referenced in Article 5(1)(a). Techniques operating below the threshold of human consciousness, used to materially distort behaviour. Prohibited when they cause or are reasonably likely to cause significant harm.
Substantial Modification
Article 3(23). A change to an AI system after its placing on the market or putting into service that was not foreseen or planned in the initial conformity assessment and that either affects the system's compliance with the requirements of Chapter III, Section 2 or changes its intended purpose.
Systemic Risk
Article 3(65). A risk specific to high-impact capabilities of GPAI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, security, fundamental rights, or society as a whole.
T
Testing in Real World Conditions
Article 3(57) and Articles 60–61. The temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable data and assessing the system's conformity. Subject to specific safeguards, including a real-world testing plan and the informed consent of participants.
Trained Model
Not formally defined. Used in the regulation to describe the output of training: a model that has learned parameters from training data.
Training Data
Article 3(29). Data used for training an AI system, fitting its learnable parameters.
Transparency Obligations
Article 50. Obligations on providers and deployers of certain AI systems to inform natural persons that they are interacting with an AI (chatbots), that content has been generated or manipulated by AI (deepfakes), or that emotion recognition or biometric categorisation is being applied.
V
Validation Data
Article 3(30). Data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process.
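The data-governance vocabulary in Articles 3(29) and 3(30) maps directly onto standard machine-learning practice. The sketch below shows the split the definitions describe; the 80/20 ratio and the shuffling method are illustrative assumptions, as the regulation does not mandate any particular split.

```python
# Illustrative mapping of the Act's data definitions to ML practice.
# Split ratio and method are assumptions; the regulation defines only
# the roles the datasets play, not how to construct them.
import random

records = list(range(1_000))  # stand-in for a labelled dataset
random.seed(42)
random.shuffle(records)

cut = int(0.8 * len(records))
training_data = records[:cut]    # Article 3(29): used to fit the model's
                                 # learnable parameters (weights)
validation_data = records[cut:]  # Article 3(30): used to evaluate the
                                 # trained model and tune non-learnable
                                 # parameters, e.g. hyperparameters
print(len(training_data), len(validation_data))  # 800 200
```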
How to Use This Glossary
Each term is defined in the way it appears in the regulation, not in the way it is sometimes used informally in the AI industry. When in doubt about scope or obligations, the operative article and the recitals (which provide interpretive context) are the authoritative source.
For a structured overview of how these terms fit together, start with the complete EU AI Act overview or the risk classification system article. For practical compliance, the compliance checklist for CTOs and CIOs translates the vocabulary into action.