Recommendation Systems and the EU AI Act
How the EU AI Act applies to recommendation systems — when they are high-risk, the Article 5 manipulation prohibition, DSA recommender transparency, and the practical compliance path for platforms.
Recommendation systems are arguably the most consequential AI deployed at scale today. The algorithms that shape what content appears on social media feeds, what products are surfaced on e-commerce sites, what videos auto-play, and which posts are amplified or de-prioritised influence the daily decisions of hundreds of millions of EU citizens.
The EU AI Act regulates these systems, but more lightly than many expect. The more demanding regime in practice is the Digital Services Act, which imposes specific recommender-system transparency, opt-out, and (for VLOPs) systemic-risk obligations. The GDPR adds a third layer through its profiling and automated-decision-making provisions.
This article walks through all three regimes as they apply to recommendation systems, and explains the specific scenarios where AI Act high-risk classification or Article 5 prohibition can apply.
The AI Act Classification: Most Recommenders Are Minimal- or Limited-Risk
Recommendation systems do not appear by name in Annex III. They become high-risk only when their deployment falls within one of the eight Annex III categories. For the vast majority of recommenders — product recommendations on retail sites, content recommendations on streaming services, music recommendations, news-feed personalisation, search re-ranking — none of those categories apply directly.
This puts recommender systems in the minimal-risk category by default. Minimal-risk systems have no specific AI Act obligations beyond the general legal framework (Article 5 prohibitions and GPAI obligations if applicable).
A recommender becomes limited-risk if it directly interacts with users in a way that triggers Article 50. For instance, an AI assistant that recommends actions to users via conversation is subject to Article 50(1) disclosure obligations. Pure ranking-style recommendation, where users see ranked lists without conversational interaction, generally does not trigger Article 50.
A recommender becomes high-risk when it is deployed in an Annex III area. The most common scenarios:
- Annex III, point 3 (education): an AI recommender that determines what courses, programmes, or resources a student can access in a way that influences access to education
- Annex III, point 4 (employment): an AI recommender that surfaces candidates to recruiters, ranks job applicants, or influences hiring decisions
- Annex III, point 5 (essential services): an AI recommender used to determine which products or services are made available to specific individuals where this affects access to essential services
- Annex III, point 6 (law enforcement): an AI recommender that ranks individuals by risk in a law-enforcement context (most uses here are also restricted by Article 5)
- Annex III, point 8(b) (democratic processes): a recommender specifically designed to influence election outcomes
For each of these scenarios, the full high-risk regime of Articles 8–15 applies: risk management, data governance, technical documentation, logging, transparency, human oversight, and accuracy and robustness requirements.
The high-risk classification depends on how the recommendation is used, not on the technical sophistication of the algorithm. A simple rule-based recommender used in a high-risk context is high-risk; a state-of-the-art neural recommender used in a minimal-risk context is minimal-risk. Pay attention to the deployment scenario.
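To make the deployment-context point concrete, here is a minimal Python sketch of that triage. The context labels, tier strings, and the `ai_act_tier` helper are illustrative assumptions for this article, not terminology from the Act.

```python
# Illustrative sketch of the deployment-context triage described above.
# Context labels are assumptions, not terms defined in the AI Act.
HIGH_RISK_CONTEXTS = {
    "education_access",      # Annex III, point 3
    "employment_screening",  # Annex III, point 4
    "essential_services",    # Annex III, point 5
    "law_enforcement",       # Annex III, point 6
    "electoral_influence",   # Annex III, point 8(b)
}

def ai_act_tier(deployment_context: str, conversational: bool = False) -> str:
    """Indicative AI Act tier: driven by where the recommender is deployed,
    not by how sophisticated the model is."""
    if deployment_context in HIGH_RISK_CONTEXTS:
        return "high-risk: Annex III, full Articles 8-15 regime"
    if conversational:
        return "limited-risk: Article 50(1) disclosure"
    return "minimal-risk: Article 5 backstop only"

# A rule-based recruiter tool is high-risk; a neural music recommender is not.
print(ai_act_tier("employment_screening"))  # high-risk
print(ai_act_tier("music_streaming"))       # minimal-risk
```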
Article 5(1)(a) — When Recommenders Become Prohibited
Article 5(1)(a) prohibits AI systems that:
deploy subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.
The threshold is high: significant harm caused by manipulation. Standard personalised recommendation is not captured. But certain recommender patterns raise concerns:
- Dark patterns enhanced by personalisation. A recommender that personalises the timing, intensity, or framing of dark-pattern interventions (false-urgency banners, hidden costs, manipulated default options) could potentially be captured if it materially distorts behaviour and causes significant harm.
- Predatory targeting of vulnerable users. Article 5(1)(b) separately prohibits AI systems that exploit vulnerabilities of specific groups (age, disability, social or economic situation) to cause significant harm. A recommender system that specifically targets users showing signs of gambling addiction, financial distress, or eating disorders with content that exacerbates the condition could be captured.
- Manipulative political recommendation. Recommenders specifically designed to distort political behaviour in a manner causing significant harm could potentially fall under Article 5(1)(a). The threshold for "significant harm" is contested in this area.
In practice, Article 5(1)(a) enforcement is more likely to capture egregious manipulation than ordinary personalisation. Standard advertising recommendation, content discovery, and product surfacing are not banned.
DSA Article 27 — Recommender Transparency for All Online Platforms
The Digital Services Act applies to a much broader population of recommender systems than the AI Act does. Article 27 of the DSA requires:
Providers of online platforms that use recommender systems shall set out in their terms and conditions, in plain and intelligible language, the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters.
Three obligations:
- Plain-language explanation of the main parameters used in the recommender
- Disclosure of options users have to modify or influence those parameters
- Information must be in the terms and conditions (or in another accessible place, with a link from the terms)
The "main parameters" disclosure is substantive. Platforms should explain factors like recency, popularity, engagement signals, content category preferences, social signals, and the relative weight of each. Vague boilerplate (e.g., "we use AI to personalise your experience") generally does not satisfy Article 27.
DSA Article 38 — VLOP and VLOSE Opt-Out Right
For Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), Article 38 adds a stronger obligation:
Providers of very large online platforms and of very large online search engines that use recommender systems shall provide at least one option for each of their recommender systems which is not based on profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679.
VLOPs must offer at least one recommender option that does not use profiling. In practice, this typically means a chronological or non-personalised feed option, accessible from a clearly marked menu.
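Architecturally, the Article 38 option means the ranking layer needs a profiling-free code path, not just a UI toggle. A minimal Python sketch, assuming a hypothetical `Item` type and personalisation scores produced elsewhere:

```python
# Sketch of the Article 38 requirement: each recommender surface must offer
# at least one option not based on GDPR profiling. Types are assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Item:
    item_id: str
    published_at: datetime
    personal_score: float  # produced by a profiling-based ranker

def rank_feed(items: list[Item], use_profiling: bool) -> list[Item]:
    """Return the feed in the order implied by the user's chosen option."""
    if not use_profiling:
        # Non-profiling option: reverse-chronological, item metadata only.
        return sorted(items, key=lambda i: i.published_at, reverse=True)
    # Default personalised option (profiling-based).
    return sorted(items, key=lambda i: i.personal_score, reverse=True)
```

The design point is that the non-profiling branch consults only item metadata and never per-user signals, so it plausibly satisfies the "not based on profiling" condition by construction rather than by post-hoc filtering.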
The VLOP designations as of 2026 include major social media platforms, marketplaces, search engines, and content-sharing platforms — services with at least 45 million average monthly active recipients in the EU.
DSA Article 26 — Advertising Transparency
Where recommendation overlaps with advertising, DSA Article 26 requires platforms to label advertisements clearly and to inform users about:
- Why the ad is shown to them
- Which natural or legal person paid for the advertisement
- The main parameters used to determine the targeting
For AI-driven ad targeting, platforms must explain the targeting parameters in user-accessible form. Behavioural-advertising practices that fail to provide this transparency are non-compliant.
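As an illustration, the per-ad disclosure could be assembled as a structured payload surfaced in the ad's "why am I seeing this?" control. All field names and values below are hypothetical; the DSA mandates the information, not a schema.

```python
# Hedged sketch of the per-ad disclosure Article 26 calls for: clear
# labelling, who paid, why this user sees it, and the main targeting
# parameters. Field names and values are illustrative assumptions.
def ad_transparency_payload(ad_id: str) -> dict:
    return {
        "ad_id": ad_id,
        "is_advertisement": True,             # clear labelling
        "paid_for_by": "Example Brand GmbH",  # natural or legal person (hypothetical)
        "why_you_see_this": "You recently viewed similar products.",
        "main_targeting_parameters": [
            "interest: running", "region: DE", "age band: 25-34",
        ],
        "ad_preferences_url": "https://example.com/ad-preferences",  # hypothetical
    }
```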
DSA Articles 34–37 — VLOP Systemic Risk Assessment
For VLOPs, recommender systems are central to the systemic-risk assessment regime:
- Article 34 requires VLOPs to assess systemic risks stemming from the design or functioning of their service, including their recommender systems
- Article 35 requires risk mitigation measures, which can include changes to recommender systems
- Article 37 requires an independent audit of compliance, including the recommender system's design choices
Systemic risks specifically called out in Article 34 include:
- Dissemination of illegal content
- Negative effects on fundamental rights
- Negative effects on civic discourse, electoral processes, and public security
- Negative effects in relation to gender-based violence, the protection of minors, public health, and physical and mental well-being
For a VLOP, the recommender system is the single largest determinant of how systemic risks play out on the platform. Risk assessments must address it specifically.
GDPR — Profiling and Automated Decision-Making
The GDPR adds a third regulatory layer. Two provisions matter most:
GDPR Article 22 — Right Not to Be Subject to Solely Automated Decisions
Users have the right not to be subject to a decision based solely on automated processing — including profiling — that produces legal effects concerning them or similarly significantly affects them. Exceptions apply (necessary for a contract, authorised by law, or with explicit consent).
For most recommender systems, Article 22 does not apply because recommendations do not produce legal or similarly significant effects. Recommending a product is not a "decision" in the Article 22 sense. But recommenders integrated into high-stakes decisions (credit, insurance, employment, healthcare) can trigger Article 22, requiring an opt-out path and human-review provisions.
GDPR Article 21 — Right to Object to Profiling
Users have the right to object to processing of personal data based on legitimate interest or public interest, including profiling. Article 21(2) provides an absolute right to object to processing for direct marketing purposes — including profiling for direct marketing.
For recommender systems used in direct marketing (which most retail and content recommenders effectively are), users must be able to opt out without justification.
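A minimal sketch of what honouring the Article 21(2) objection looks like inside a recommendation pipeline, assuming a hypothetical objection store and a placeholder profiling ranker: once the objection is recorded, the profiling path is simply never taken.

```python
# Sketch: honouring an Article 21(2) objection to direct-marketing profiling.
# The objection is absolute, so there is no balancing test before it takes
# effect. Storage and the fallback ranking are illustrative assumptions.
OBJECTED_USERS: set[str] = set()  # stand-in for a persisted objection store

def record_objection(user_id: str) -> None:
    """Record the objection immediately; no justification is required."""
    OBJECTED_USERS.add(user_id)

def recommend(user_id: str, catalogue: list[str]) -> list[str]:
    if user_id in OBJECTED_USERS:
        # Non-profiling fallback, e.g. bestsellers or editorial picks.
        return catalogue[:10]
    return personalised_ranking(user_id, catalogue)

def personalised_ranking(user_id: str, catalogue: list[str]) -> list[str]:
    return catalogue  # placeholder for the profiling-based model
```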
Children's Recommender Systems
Recommendation systems serving children face additional restrictions:
- AI Act Article 5(1)(b) prohibits exploitation of age-based vulnerabilities causing significant harm
- DSA Article 28 requires platforms accessible to minors to put in place appropriate measures for minors' privacy, safety, and security
- GDPR Article 8 requires parental consent for processing children's data in information-society services in many cases
- AVMSD (Audiovisual Media Services Directive) provides additional protections in audiovisual contexts
- National frameworks (notably the UK's Age Appropriate Design Code, better known as the Children's Code, with emerging equivalents in EU member states) impose further restrictions
In practice, recommender systems for children should default to non-engagement-maximising designs, with strict limits on personalisation based on profiling.
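Expressed as configuration, that safe-by-default design might look like the following sketch. The flags and the age gate are illustrative assumptions; none of the cited instruments prescribes specific parameter values.

```python
# Sketch of an age-gated recommender configuration defaulting minors to a
# non-engagement-maximising design. All flags are illustrative assumptions.
def recommender_config(is_minor: bool) -> dict:
    if is_minor:
        return {
            "profiling_based_personalisation": False,  # strict profiling limits
            "optimisation_target": "relevance",        # not engagement or watch time
            "autoplay": False,
            "ads_based_on_profiling": False,           # cf. DSA Article 28(2)
        }
    return {
        "profiling_based_personalisation": True,
        "optimisation_target": "engagement",
        "autoplay": True,
        "ads_based_on_profiling": True,
    }
```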
Specific Deployment Scenarios
E-Commerce Product Recommendations
Standard product recommendations on a retail site:
- AI Act: minimal-risk; Article 50 not directly applicable
- DSA: Article 27 transparency (if the retailer is an online platform), Article 26 advertising disclosure for sponsored recommendations
- GDPR: Article 21 opt-out for direct marketing
- Practical compliance: DSA transparency documentation; GDPR opt-out path
Social Media Feed Ranking
A VLOP's news-feed ranking algorithm:
- AI Act: minimal-risk by default; Article 5 backstop for any manipulative design
- DSA: the full regime of Articles 27, 34–37, and 38, including the non-profiling option
- GDPR: Article 21 right to object; potentially Article 22 if the ranking produces similarly significant effects (e.g., visibility restrictions equivalent to shadow-banning)
- Practical compliance: substantial DSA documentation and audit; recommender system architecture must support non-profiling option
Video Auto-Play Recommendations
YouTube-style auto-play and related-video recommendation:
- AI Act: minimal-risk; Article 5 backstop for manipulative design (particularly relevant for recommendations targeting minors)
- DSA: full VLOP regime; Article 28 protections for minors
- GDPR: profiling and minors' protections
- Practical compliance: VLOP-level compliance; specific protections for minor users
Job Candidate Ranking for Recruiters
Algorithmic ranking of candidates surfaced to recruiters:
- AI Act: high-risk under Annex III, point 4 (employment)
- DSA: Article 27 transparency applies if the service qualifies as an online platform (e.g., a job board or recruitment marketplace)
- GDPR: Article 22 likely applies (employment decisions have significant effects)
- Practical compliance: full Articles 8–15 high-risk regime, hiring-specific AI compliance measures, GDPR opt-out, human-review path
Course Recommendation in Online Learning Platforms
AI recommending courses or study paths to learners:
- AI Act: high-risk under Annex III, point 3 if the recommendation determines access to education; minimal-risk if purely advisory
- DSA: Article 27 transparency
- GDPR: Article 21 and potentially Article 22
- Practical compliance: classification analysis is critical; if access-determining, full high-risk regime
Music Streaming Recommendation
Personalised playlist generation:
- AI Act: minimal-risk
- DSA: Article 27 if the streaming service qualifies as an online platform
- GDPR: Article 21 opt-out
- Practical compliance: lightweight DSA disclosure and GDPR controls
Compliance Checklist for Recommender System Operators
- Classify by deployment context. Is the recommender in an Annex III area? If yes, plan for high-risk compliance. If no, focus on DSA and GDPR. (A triage sketch follows this checklist.)
- Verify Article 5 does not apply. Recommender patterns that manipulate behaviour to cause significant harm are prohibited. Targeting vulnerabilities of children, financially distressed users, or other protected groups is prohibited.
- Document main parameters for DSA Article 27. Explain in plain language what factors drive recommendations and what user controls are available.
- Provide user controls. Allow users to view and adjust recommendation parameters; for VLOPs, provide a non-profiling option.
- Comply with GDPR opt-out rights. Honour Article 21 objections, particularly for direct-marketing profiling.
- Consider Article 22 GDPR. If recommendations are integrated into significant decisions, provide human-review paths.
- For VLOPs/VLOSEs, conduct the Article 34 systemic-risk assessment with specific attention to recommender impacts.
- For child-facing services, design recommender systems for safety, with strict profiling limits.
- Apply Article 50 where applicable. Conversational recommender systems should disclose their AI nature.
- Coordinate with the GPAI provider documentation if the recommender uses general-purpose AI models.
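The triage sketch referenced in the first checklist item: given a few facts about a deployment, it lists the regimes discussed in this article that are likely in play. The flags and output strings are assumptions for illustration, not legal advice.

```python
# Rough self-triage tying the checklist together. Purely indicative; the
# parameters and outputs are assumptions made for this article.
def applicable_regimes(
    annex_iii_area: bool,
    is_online_platform: bool,
    is_vlop: bool,
    conversational: bool,
    direct_marketing: bool,
    serves_minors: bool,
) -> list[str]:
    regimes = ["AI Act Article 5 check (always)"]
    if annex_iii_area:
        regimes.append("AI Act high-risk regime (Articles 8-15)")
    if conversational:
        regimes.append("AI Act Article 50 disclosure")
    if is_online_platform:
        regimes.append("DSA Article 27 recommender transparency")
    if is_vlop:
        regimes += [
            "DSA Article 38 non-profiling option",
            "DSA Articles 34-37 systemic-risk assessment and audit",
        ]
    if direct_marketing:
        regimes.append("GDPR Article 21(2) absolute opt-out")
    if serves_minors:
        regimes.append("DSA Article 28 / GDPR Article 8 minor protections")
    return regimes

# Example: a VLOP social feed monetised via targeted ads, open to minors.
for regime in applicable_regimes(False, True, True, False, True, True):
    print(regime)
```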
Conclusion
Recommendation systems are regulated in the EU through a layered framework: the AI Act sets baseline rules (mostly applying when recommenders enter high-risk areas or cross into manipulation), the DSA provides recommender-specific transparency and opt-out obligations, and the GDPR adds profiling-specific user rights. For most commercial recommenders, the DSA is the operative regime; for high-stakes recommenders integrated into employment, education, or essential-services decisions, the AI Act's high-risk provisions also apply.
For broader context on the regulation, see the complete EU AI Act overview. For more on how the Article 5 prohibitions apply across AI use cases, see prohibited AI practices under the EU AI Act.
Frequently Asked Questions
Are recommendation systems high-risk under the EU AI Act?
Not by default. Most recommenders are minimal-risk; they become high-risk only when deployed in an Annex III area such as employment, education, essential services, law enforcement, or electoral influence.
Does the EU AI Act ban manipulative recommendation systems?
Article 5(1)(a) and 5(1)(b) prohibit recommenders that manipulate behaviour or exploit vulnerabilities in a way that causes, or is reasonably likely to cause, significant harm. Ordinary personalisation falls well below that threshold.
How does the Digital Services Act regulate recommender systems?
Article 27 requires all online platforms to disclose the main parameters of their recommenders in plain language; Articles 34–38 add systemic-risk assessment, independent audits, and a non-profiling feed option for VLOPs and VLOSEs.
Do users have a right to opt out of personalisation?
GDPR Article 21(2) gives an absolute right to object to profiling for direct marketing, and DSA Article 38 requires VLOPs and VLOSEs to offer at least one recommender option not based on profiling.
Are recommendations to children regulated more strictly?
Yes. AI Act Article 5(1)(b), DSA Article 28, GDPR Article 8, and the AVMSD all impose additional protections for minors, including strict limits on profiling-based personalisation.