
Prohibited AI Practices Under the EU AI Act

Complete list of AI practices banned by the EU AI Act — social scoring, manipulative AI, real-time biometric surveillance, and more. Understand what's prohibited and why.

February 10, 2025 · 13 min read

Article 5 of the EU AI Act (Regulation 2024/1689) draws a hard line. Certain uses of artificial intelligence are considered so fundamentally incompatible with EU values — human dignity, non-discrimination, democracy, and the rule of law — that they are banned outright, regardless of any safeguards a provider or deployer might put in place.

These prohibitions took effect on February 2, 2025, making them the first provisions of the regulation to become enforceable. Violations carry the highest penalty tier: up to €35 million or 7% of global annual turnover.
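
To make that ceiling concrete: the fine is capped at the higher of the two figures, not the lower. A minimal sketch in Python (the function name and example turnover are illustrative):

```python
def article5_penalty_cap(worldwide_annual_turnover_eur: float) -> float:
    """Maximum administrative fine for an Article 5 violation:
    the higher of EUR 35 million and 7% of total worldwide
    annual turnover for the preceding financial year."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# An undertaking with EUR 2 billion turnover faces a cap of
# EUR 140 million, since 7% exceeds the EUR 35 million floor.
print(article5_penalty_cap(2_000_000_000))  # 140000000.0
```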


This article examines each prohibited practice in detail, explains the rationale behind the ban, and identifies the narrow exceptions where they exist.

1. Manipulative and Deceptive AI Techniques

Article 5(1)(a) prohibits the placing on the market, putting into service, or use of AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting a person's behaviour in a manner that causes or is reasonably likely to cause significant harm.

What This Covers

This prohibition targets AI systems designed to manipulate human decision-making in ways people cannot perceive or resist. Examples include:

  • AI-driven interfaces that use dark patterns enhanced by personalisation algorithms to manipulate purchasing decisions in ways that cause financial harm
  • Systems that deploy subliminal audio or visual stimuli to influence behaviour without the person's awareness
  • Deepfake-powered persuasion systems designed to deceive people into actions against their interests
  • AI systems that exploit cognitive biases in a systematic, personalised manner to distort behaviour

Key Nuances

The ban requires that the manipulation causes or is reasonably likely to cause significant harm — physical, psychological, or financial. Standard advertising, recommendation algorithms, and persuasive design are not automatically prohibited. The threshold is manipulation that goes beyond legitimate influence and into the territory of exploitation or deception.

The regulation distinguishes between "subliminal techniques" (operating below the threshold of consciousness) and "manipulative or deceptive techniques" (which may be perceptible but are designed to distort behaviour). Both are prohibited when they cause significant harm, but the distinction matters for legal analysis.

2. Exploitation of Vulnerabilities

Article 5(1)(b) prohibits AI systems that exploit vulnerabilities of a person or a specific group of persons due to their age, disability, or specific social or economic situation, with the objective or effect of materially distorting their behaviour in a manner that causes or is reasonably likely to cause significant harm.

What This Covers

This goes beyond the general manipulation ban to specifically protect groups who may be less able to identify or resist AI-driven influence:

  • AI systems targeting elderly people with misleading financial products tailored to exploit their reduced digital literacy
  • Gaming or app systems designed to exploit children's susceptibility to addictive mechanics
  • Predatory lending algorithms that specifically target people in financial distress
  • AI-powered scam systems that profile and target people based on indicators of cognitive decline

Relationship to the Manipulation Ban

While Article 5(1)(a) protects the general population, Article 5(1)(b) recognises that vulnerable groups warrant a lower threshold: techniques too mild to qualify as prohibited manipulation in general can still cross the line when they target people who are less able to identify or resist them.

3. Social Scoring

Article 5(1)(c) prohibits AI systems used for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the resulting social score leads to:

  • Detrimental or unfavourable treatment of persons in social contexts unrelated to the contexts in which the data was originally generated or collected, or
  • Detrimental or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity

What This Covers

This is the "China-style social credit system" ban. Although the Commission's original proposal limited it to public authorities, the final text covers public and private actors alike. It prevents, for example:

  • Aggregating data from multiple domains of a person's life (financial behaviour, social media activity, travel patterns) into a single behavioural score that determines access to public services
  • Using AI to create citizen trustworthiness ratings that affect rights or entitlements
  • Deploying predictive policing systems that assign risk scores to individuals based on social behaviour and use those scores to justify pre-emptive restrictions

What It Does Not Cover

The ban targets general-purpose, cross-context scoring rather than all scoring of people. Credit scoring, insurance risk assessment, and similar single-context practices are not prohibited under this provision — though they may be classified as high-risk under Annex III and subject to strict requirements.

Additionally, the ban requires that the detrimental treatment be either (a) in an unrelated context or (b) disproportionate. A public authority using AI to assess relevant behaviour in the same context (for example, tax compliance data to assess tax risk) would not automatically fall under this prohibition.

The line between legitimate risk assessment and prohibited social scoring can be thin. Organisations working with public authorities on AI-driven assessment systems should seek legal analysis to confirm that their specific use case does not cross into prohibited territory.
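
As a thought aid rather than legal advice, the two conditions can be sketched as a decision rule. The record type and field names below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ScoreBasedTreatment:
    """Illustrative record of a treatment driven by a social score."""
    data_context: str        # context in which the data was collected
    treatment_context: str   # context in which the person is treated
    unjustified_or_disproportionate: bool

def is_prohibited_social_scoring(t: ScoreBasedTreatment) -> bool:
    # Article 5(1)(c): prohibited if the detrimental treatment occurs in an
    # unrelated context, OR is unjustified/disproportionate to the behaviour.
    # Simplification: any differing context is treated as "unrelated" here.
    cross_context = t.data_context != t.treatment_context
    return cross_context or t.unjustified_or_disproportionate

# Tax-compliance data used proportionately to assess tax risk: not caught.
same_context = ScoreBasedTreatment("tax compliance", "tax compliance", False)
assert not is_prohibited_social_scoring(same_context)

# Social media behaviour used to deny housing benefits: caught.
cross = ScoreBasedTreatment("social media activity", "housing benefits", False)
assert is_prohibited_social_scoring(cross)
```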

4. Individual Predictive Policing

Article 5(1)(d) prohibits AI systems that make risk assessments of natural persons to assess or predict the risk of a person committing a criminal offence, based solely on the profiling of a person or on assessing their personality traits and characteristics.

What This Covers

This bans pure profiling-based predictive policing — systems that flag individuals as likely criminals based on who they are rather than what they have done:

  • AI systems that predict an individual's likelihood of committing a crime based on demographic data, personality assessments, or social characteristics
  • Risk scoring tools that assess criminal propensity based on neighbourhood, family history, or socioeconomic indicators alone

The Critical Exception

The prohibition does not apply to AI systems that support human assessment of involvement in criminal activity based on objective and verifiable facts directly linked to criminal activity. In other words, AI that assists in analysing evidence of actual criminal behaviour is permitted; AI that predicts future criminality based on personal characteristics alone is not.
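
The distinction can be expressed as a simple decision sketch (illustrative only, not legal advice; the parameter names are this article's shorthand):

```python
def is_prohibited_crime_prediction(
    based_solely_on_profiling_or_traits: bool,
    supports_human_assessment: bool,
    grounded_in_objective_verifiable_facts: bool,
) -> bool:
    """Sketch of the Article 5(1)(d) line.

    Permitted: AI that supports a human assessment grounded in objective,
    verifiable facts directly linked to a criminal activity.
    Prohibited: predicting criminality based solely on profiling or on
    personality traits and characteristics."""
    if supports_human_assessment and grounded_in_objective_verifiable_facts:
        return False
    return based_solely_on_profiling_or_traits
```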


5. Untargeted Scraping for Facial Recognition Databases

Article 5(1)(e) prohibits AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or from CCTV footage.

What This Covers

This provision directly targets practices like those of Clearview AI, which built a facial recognition database by scraping billions of images from social media and the public internet:

  • Crawling social media platforms, news sites, or public websites to collect facial images for biometric database construction
  • Harvesting CCTV footage to build identification databases without the knowledge or consent of the individuals captured
  • Any systematic, non-targeted collection of facial images to train or populate facial recognition systems

What It Does Not Cover

The ban is on untargeted scraping. Facial recognition databases built from images collected with proper legal basis — for example, mugshot databases maintained by law enforcement under specific legal authority, or databases built from images provided with informed consent — are not prohibited under this provision. However, they may be subject to GDPR requirements and other AI Act obligations.

6. Emotion Recognition in Workplaces and Education

Article 5(1)(f) prohibits AI systems that infer the emotions of a natural person in the areas of workplace and education, except where the AI system is intended to be put in place or used for medical or safety reasons.

What This Covers

This bans the use of emotion recognition AI in two specific contexts:

  • Workplaces: AI systems that monitor employees' facial expressions, voice tone, or physiological signals to assess their emotional state — for performance evaluation, engagement tracking, or management purposes
  • Educational institutions: Systems that track students' emotions during classes or examinations — whether for attention monitoring, engagement scoring, or behavioural assessment

The Medical and Safety Exception

Emotion recognition is permitted in workplaces and educational settings when used for:

  • Medical purposes: Systems that detect signs of distress, fatigue, or medical conditions (such as monitoring a truck driver's alertness or detecting signs of a student's medical emergency)
  • Safety reasons: Systems designed to ensure physical safety (such as detecting drowsiness in heavy machinery operators)

Emotion recognition AI used outside of workplaces and education — for example, in market research with informed consent, or in clinical therapy settings — is not prohibited under Article 5. It is not unregulated, however: emotion recognition systems appear in Annex III as high-risk, and Article 50 imposes a transparency obligation, so individuals must be informed that they are being subjected to emotion recognition.

7. Biometric Categorisation for Sensitive Attributes

Article 5(1)(g) prohibits AI systems that categorise natural persons based on their biometric data to deduce or infer sensitive characteristics, specifically:

  • Race or ethnic origin
  • Political opinions
  • Trade union membership
  • Religious or philosophical beliefs
  • Sex life or sexual orientation

What This Covers

This bans AI systems that analyse facial features, gait, voice patterns, or other biometric data to classify people into categories based on protected characteristics:

  • Facial analysis systems that claim to determine a person's ethnicity, religion, or sexual orientation
  • Voice analysis tools that attempt to infer political leanings
  • Gait analysis systems that categorise people by race

What It Does Not Cover

The prohibition applies to biometric categorisation — systematic classification of individuals. It does not prohibit all processing of biometric data. For example, biometric data processing for identity verification purposes (such as fingerprint or facial recognition for authentication) falls outside this provision, though it remains subject to GDPR and potentially high-risk AI system requirements.

The regulation also carves out an exception for labelling or filtering of lawfully acquired biometric datasets in the area of law enforcement — for example, filtering images by hair colour to narrow a search in a criminal investigation.

8. Real-Time Remote Biometric Identification in Public Spaces

Article 5(1)(h) prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, subject to specific exceptions.

What This Covers

This is the most debated prohibition in the regulation. It bans live facial recognition and other real-time biometric identification in public spaces by or for law enforcement:

  • Live CCTV systems that automatically identify individuals by matching their faces against a watchlist
  • Real-time pedestrian tracking using gait recognition or other biometric identifiers
  • Mobile biometric identification devices used by police to identify people in public in real time

The Narrow Exceptions

Unlike other prohibitions, this one comes with carefully defined exceptions. Real-time remote biometric identification in public spaces by law enforcement is permitted only for:

  1. Targeted search for victims of abduction, trafficking, or sexual exploitation, and search for missing persons
  2. Prevention of a specific, substantial, and imminent threat to the life or physical safety of persons, or a genuine and present or foreseeable threat of a terrorist attack
  3. Localisation or identification of a person suspected of a criminal offence punishable by a custodial sentence or detention order with a maximum term of at least four years (limited to the specific offences listed in Annex II of the regulation)

Safeguards for Exceptions

Even where these exceptions apply, Article 5(2) imposes strict conditions; a checklist sketch follows the list:

  • Each use must be authorised in advance by a judicial authority or an independent administrative authority (in duly justified urgent cases, use may begin without prior authorisation, provided authorisation is requested without undue delay and at the latest within 24 hours; if it is refused, use must stop immediately)
  • The use must be necessary and proportionate — the authority must conduct a fundamental rights impact assessment
  • The use must be limited in time, geographic scope, and the number of persons targeted
  • Each use must be notified to the relevant market surveillance authority and data protection authority
  • Each use must be registered in the EU database for high-risk AI systems
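
As an organising aid, those conditions can be captured in a pre-deployment checklist structure. This is an illustrative sketch; the field names are this article's shorthand, not terms defined in the regulation:

```python
from dataclasses import dataclass, fields

@dataclass
class RbiDeploymentRecord:
    """Illustrative pre-use checklist for an exceptional real-time RBI
    deployment; every field must be True before the system is used."""
    judicial_or_independent_authorisation: bool   # or urgent: requested within 24h
    necessity_and_proportionality_assessed: bool  # incl. fundamental rights impact
    limited_in_time_geography_and_persons: bool
    market_surveillance_and_dpa_notified: bool
    registered_in_eu_high_risk_database: bool

def all_safeguards_met(record: RbiDeploymentRecord) -> bool:
    # Every safeguard is mandatory; any False blocks deployment.
    return all(getattr(record, f.name) for f in fields(record))
```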

The exceptions to the real-time biometric identification ban are extremely narrow and subject to multiple layers of authorisation. Any law enforcement agency considering the use of such systems must ensure strict compliance with all procedural requirements. Unauthorised use triggers Tier 1 penalties.

How Prohibited Practices Interact with the Risk Classification

The EU AI Act's risk-based framework places prohibited practices at the apex:

  • Unacceptable Risk: Prohibited outright — no amount of compliance can make these lawful.
  • High Risk: Subject to strict requirements but permitted with safeguards.
  • Limited Risk: Primarily subject to transparency obligations.
  • Minimal Risk: No specific obligations (voluntary codes of conduct encouraged).

The critical distinction is that high-risk systems can become compliant through technical and organisational measures. Prohibited practices cannot. If your AI system falls under Article 5, the only compliant course of action is to not deploy it — or to fundamentally redesign it so it no longer meets the prohibition criteria.
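
In code terms, the distinction is a one-liner; the tier labels follow the regulation's vocabulary, while the logic is this article's shorthand:

```python
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

def has_compliance_path(tier: str) -> bool:
    """Only prohibited (unacceptable-risk) practices have no route to
    lawful deployment; every other tier can be made compliant."""
    return tier != "unacceptable"

assert not has_compliance_path("unacceptable")
assert has_compliance_path("high")
```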

What Organisations Should Do Now

The prohibitions have been enforceable since February 2, 2025. Organisations should take immediate action:

1. Audit Your AI Portfolio

Review every AI system you develop, deploy, or distribute. For each system, assess whether any aspect of its functionality could fall within the Article 5 prohibitions; a screening sketch follows the list below. Pay particular attention to:

  • Systems involving biometric data processing
  • Systems that personalise content or recommendations in ways that could be considered manipulative
  • Systems used in HR, education, or public-sector decision-making
  • Systems that profile or categorise individuals
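
One way to operationalise this screening is a per-system record whose flags mirror the attention areas above. All names here are illustrative:

```python
from dataclasses import dataclass

ATTENTION_AREAS = (
    "biometric_data_processing",
    "personalised_content_or_recommendations",
    "hr_education_or_public_sector_use",
    "profiling_or_categorisation_of_individuals",
)

@dataclass
class SystemScreening:
    name: str
    flags: dict  # area -> bool, keyed by ATTENTION_AREAS

    def needs_article5_review(self) -> bool:
        """Escalate to detailed legal analysis if any attention area applies."""
        return any(self.flags.get(area, False) for area in ATTENTION_AREAS)

s = SystemScreening("hr-analytics-tool", {"hr_education_or_public_sector_use": True})
assert s.needs_article5_review()
```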

2. Document Your Analysis

For any system that operates in an area adjacent to a prohibition (for example, a recommendation system, an HR analytics tool, or a biometric authentication system), document your analysis of why it does not fall within the prohibited categories. This documentation will be invaluable if your classification is ever questioned by a regulator.
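
A minimal sketch of such an analysis as a structured record (the fields are suggestions, not regulatory requirements):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Article5Analysis:
    """Illustrative record of why a system does NOT fall under Article 5."""
    system_name: str
    adjacent_prohibition: str   # e.g. "Article 5(1)(a) manipulative techniques"
    reasoning: str              # why the prohibition criteria are not met
    reviewed_by: str
    review_date: date
    evidence: list = field(default_factory=list)  # links to tests, DPIAs, etc.

memo = Article5Analysis(
    system_name="product-recommender",
    adjacent_prohibition="Article 5(1)(a) manipulative techniques",
    reasoning="No subliminal or deceptive techniques; no significant-harm vector identified.",
    reviewed_by="legal@example.com",
    review_date=date(2025, 2, 10),
)
```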

3. Implement Technical Safeguards

Where a system could theoretically be repurposed for a prohibited use, implement technical controls that prevent such repurposing. For example, if you provide facial recognition technology for identity verification, ensure that your system architecture prevents its use for untargeted scraping or biometric categorisation of sensitive attributes.
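
One common pattern is to enforce a purpose allowlist at the API boundary, so that calls declaring a disallowed purpose are refused outright. Everything below is an illustrative sketch, not a real product API:

```python
ALLOWED_PURPOSES = {"identity_verification"}  # verified, contracted uses only
BLOCKED_PURPOSES = {"untargeted_scraping", "sensitive_attribute_categorisation"}

def authorise_request(declared_purpose: str) -> None:
    """Reject any call whose declared purpose is not an allowed use.

    Illustrative control: a provider of facial-recognition-based identity
    verification hard-blocks purposes that would cross into Article 5
    territory, rather than relying on contract terms alone."""
    if declared_purpose in BLOCKED_PURPOSES or declared_purpose not in ALLOWED_PURPOSES:
        raise PermissionError(f"Purpose '{declared_purpose}' is not permitted.")

authorise_request("identity_verification")   # OK
# authorise_request("untargeted_scraping")   # would raise PermissionError
```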

4. Train Your Teams

Ensure that product managers, engineers, data scientists, and procurement teams understand the prohibited practices and can identify potential issues early in the development or procurement cycle.

The prohibitions in Article 5 are not merely regulatory hurdles — they reflect a broad consensus about the limits of acceptable AI use. Organisations that align their AI practices with these values are not just avoiding penalties; they are building AI systems that their users, employees, and the public can trust.

Conclusion

The EU AI Act's prohibited practices represent the regulation's clearest statement of principle: some uses of AI are simply unacceptable in a democratic society. From social scoring to manipulative subliminal techniques, from untargeted facial recognition scraping to emotion surveillance in schools and workplaces, these prohibitions define the outer boundaries of lawful AI use in Europe.

With enforcement already active and penalties reaching up to 7% of global turnover, understanding these prohibitions is not optional. Every organisation touching AI in the EU market must know exactly where the red lines are — and ensure they are on the right side of them.

