
Deepfakes and the EU AI Act: Labelling, Detection, and Compliance

How the EU AI Act regulates deepfakes — Article 50(4) marking obligations, Article 5 prohibitions on manipulation, and what providers, deployers, and platforms must do to stay compliant.

May 12, 2026 · 11 min read

Deepfakes — synthetic media depicting real or fictional people doing or saying things that never happened — are among the AI use cases most closely watched by regulators. The EU AI Act addresses them through three overlapping mechanisms: a technical-marking obligation on the model provider, a user-facing disclosure obligation on the deployer, and a backstop in the Article 5 prohibitions for the worst manipulative or deceptive uses.

This article walks through each obligation in detail, explains the carve-outs for art and journalism, and outlines what providers and deployers should do to comply.

The Definition of a "Deepfake"

Article 3(60) of Regulation (EU) 2024/1689 defines a deepfake as:

AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.

Several features of this definition deserve attention:

  • "AI-generated or manipulated" captures both fully synthetic outputs and AI-modified versions of real footage (face swaps, voice cloning over real video, etc.).
  • "Resembles existing persons, objects, places, entities or events" means the deepfake regime applies whether the depicted subject is real (a political leader) or fictional but realistic (a generated face of a non-existent person presented as real).
  • "Would falsely appear to a person to be authentic or truthful" is the operative test. Stylised or clearly synthetic outputs — cartoon avatars, obviously animated characters — are not deepfakes for the purposes of Article 50.

Note that the deepfake definition is narrower than "AI-generated content" generally. AI-generated text, for instance, is not a deepfake under Article 3(60), though Article 50(4) does impose a separate disclosure obligation on certain AI-generated text used in public-interest communications.

The Three Layers of Regulation

Layer 1: Article 50(2) — Provider Marking Obligation

Article 50(2) places a technical obligation on providers of AI systems generating synthetic content:

Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards.

In plain English: if you operate a model or system that produces synthetic content, you must mark the output so it can be detected as AI-generated by machines, not just by humans.

Practical implementations include:

  • C2PA Content Credentials — cryptographic content provenance metadata standardised by the Coalition for Content Provenance and Authenticity
  • Cryptographic watermarks embedded into images, audio, or video
  • Robust detector models trained against your generator's outputs
  • Metadata tags under recognised standards (EXIF, XMP, PNG text chunks, etc.); a minimal sketch of this approach follows the list
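
To make the metadata option concrete, here is a minimal Python sketch using the Pillow imaging library. The file names and the ai_generated/generator keys are illustrative assumptions, not any standard; a production system would carry this information in a recognised provenance format such as C2PA Content Credentials.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical model output (file name is illustrative).
img = Image.open("generated_portrait.png")

# Write a machine-readable marker into PNG tEXt chunks.
# The keys "ai_generated" and "generator" are our own illustrative
# names, not a standard; prefer a recognised provenance scheme
# such as C2PA Content Credentials in production.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")

img.save("generated_portrait_marked.png", pnginfo=meta)

# Detection side: any tool can read the marker back.
marked = Image.open("generated_portrait_marked.png")
print(marked.text.get("ai_generated"))  # -> "true"
```

Plain metadata of this kind is machine-readable but fragile, as the robustness test in the provider checklist below demonstrates.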

The standard required is "as far as technically feasible." A perfectly tamper-proof watermark is not achievable today; the obligation is to use the state of the art reasonably, not to achieve absolute robustness.
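
As an illustration of what "state of the art, not tamper-proof" looks like in practice, here is a sketch of an embedded frequency-domain watermark using the open-source invisible-watermark package (imported as imwatermark); the payload and file names are our own assumptions. Marks of this kind survive re-encoding far better than metadata, but they can still be degraded by aggressive editing.

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

# Embed the payload into the pixel data itself (DWT + DCT domain),
# so it survives transformations that strip metadata, e.g. re-encoding.
payload = b"ai-generated"
bgr = cv2.imread("generated_portrait.png")

encoder = WatermarkEncoder()
encoder.set_watermark("bytes", payload)
marked = encoder.encode(bgr, "dwtDct")
cv2.imwrite("generated_portrait_wm.jpg", marked)  # JPEG re-encode

# Detection: recover the payload from the re-encoded file.
decoder = WatermarkDecoder("bytes", len(payload) * 8)
recovered = decoder.decode(cv2.imread("generated_portrait_wm.jpg"), "dwtDct")
print(recovered.decode("utf-8"))  # -> "ai-generated", barring heavy edits
```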

Article 50(2) does not apply to AI systems that perform an assistive function for standard editing or that do not substantially alter the input data provided by the deployer. The line between "minor edit" and "manipulation" is fact-specific.

Layer 2: Article 50(4) — Deployer Disclosure Obligation

Article 50(4) addresses the user-facing side:

Deployers of an AI system that generates or manipulates image, audio or video content constituting a deepfake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations […] are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

This is the user-visible label requirement. The deployer — typically the person or platform publishing the deepfake — must inform viewers that the content is artificially generated or manipulated.

A separate sub-paragraph addresses AI-generated text:

Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.

For news media, the editorial-review exception is significant. Newsrooms that use AI drafting tools but apply human editorial control retain the safe harbour. Wholly automated public-interest text generation (for example, an automated newsroom or AI-generated press release) is captured.

Layer 3: Article 5 — Prohibitions

If a deepfake is used for manipulative or deceptive purposes, it may be prohibited outright under Article 5(1)(a):

Placing on the market, putting into service or use of AI systems that deploy subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.

A deepfake-powered phishing or social-engineering campaign, a manipulative deepfake political ad designed to suppress turnout, or a deceptive deepfake used to defraud individuals could fall within this prohibition. The penalty tier is the highest in the regulation: up to €35 million or 7% of worldwide annual turnover, whichever is higher.

This means: routine deepfake creation and dissemination is regulated under Article 50, but deepfakes designed to cause significant harm through manipulation cross into the territory of outright prohibition.


The Artistic, Creative, Satirical, Fictional Carve-Out

Article 50(4) creates a special regime for "evidently artistic, creative, satirical, fictional or analogous works." The disclosure obligation still applies — it cannot be waived — but the form of disclosure is adjusted so that it "does not hamper the display or enjoyment of the work."

Practical interpretations:

  • Film, television, and series: end-credits attribution naming AI-generated or AI-manipulated content is normally sufficient
  • Music with synthetic vocals or instrumentation: liner notes, track metadata, or platform-level labelling can suffice
  • Satirical content on social media: a caption or hashtag (e.g., #deepfake, #aigenerated) is generally adequate
  • Art exhibitions: wall text or catalogue notation is sufficient

The key qualifier is "evidently." If a satirical deepfake is so realistic that an ordinary observer might mistake it for real footage, the artistic carve-out may not apply and standard disclosure is required.

Specific High-Risk Scenarios

Deepfakes of Political Figures

Article 50(4) applies fully to political deepfakes. Combined with Annex III, point 8(b) (AI systems intended to influence the outcome of an election or referendum), political deepfakes can simultaneously be:

  • Subject to Article 50(4) disclosure (limited-risk transparency obligation)
  • High-risk under Annex III, point 8(b) (substantive Articles 8–15 compliance)
  • Prohibited under Article 5(1)(a) if designed to materially distort behaviour and cause significant harm

Political deepfakes are among the most heavily regulated AI-generated content in the EU. National election laws and DSA obligations add further requirements.

Deepfakes in Pornography (Non-Consensual Intimate Imagery)

Sexually explicit deepfakes of real people without consent are addressed by multiple regimes. The AI Act's Article 50 disclosure requirement applies, but other instruments — the Directive on Combating Violence Against Women, national criminal codes, and the Digital Services Act for platforms — generally provide more stringent rules. Article 5(1)(a) may apply where such content is used as part of harassment campaigns. Platform-level obligations under the DSA frequently provide the most direct enforcement path.

Voice Cloning

Audio deepfakes — voice cloning of real people — sit squarely within the Article 3(60) definition and Article 50(4) disclosure obligation. Common use cases include voice-over for media, accessibility (TTS in the voice of family members), and impersonation in fraud. The compliance posture is the same: technical marking by the model provider, user-facing disclosure by the deployer, and Article 5 backstop for manipulative use.

AI-Generated News and Synthetic Journalism

AI-generated news content sits at the intersection of Article 50(4) (text used to inform the public) and broader media-regulation regimes. The editorial-review carve-out is critical: newsrooms that use AI drafting tools under human editorial control retain safe harbour. Fully automated public-interest text generation requires disclosure.

How to Comply: Provider Checklist

If you operate an AI system that generates or manipulates content covered by Article 50(2):

  1. Implement machine-readable marking. Choose an approach (C2PA Content Credentials, cryptographic watermark, robust detector, or metadata tagging). Prefer industry-standard solutions for interoperability.
  2. Test robustness. Verify that marking survives common transformations: cropping, compression, format conversion, and screenshotting where applicable. A sketch of one such test follows this list.
  3. Document the choice. Maintain a brief technical note describing the marking approach, its limitations, and why it represents the state of the art for your output type.
  4. Inform downstream providers. Under Article 53(1)(b) (for GPAI models) and as good practice generally, document the marking approach so deployers can comply with their Article 50(4) obligations.
  5. Monitor for adversarial circumvention. If a robust attack on your marking emerges, update your approach. The regulation requires effectiveness "as far as technically feasible," which evolves with the state of the art.
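
A minimal sketch of step 2, reusing the hypothetical files from the marking example above: a single format conversion is enough to strip a bare metadata tag, which is exactly what robustness testing should surface.

```python
from PIL import Image

# Load the metadata-marked output from the earlier sketch.
img = Image.open("generated_portrait_marked.png")

# A routine real-world transformation: re-encode as JPEG.
img.convert("RGB").save("reuploaded.jpg", quality=80)

# The PNG text chunk does not survive the format conversion.
reloaded = Image.open("reuploaded.jpg")
print(reloaded.info.get("ai_generated"))  # -> None; the marker is gone
```

A marking approach that fails this kind of test cannot stand alone; pairing metadata with an embedded watermark or a detector model is the usual mitigation.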

How to Comply: Deployer Checklist

If you are publishing AI-generated content or deepfakes:

  1. Identify which content is captured. Anything fitting the Article 3(60) deepfake definition triggers Article 50(4). Public-interest text generated without editorial review also triggers disclosure.
  2. Choose a disclosure pattern. Visible label on the content (badge, watermark, caption), end-card, on-screen notice, or accompanying text; a badge sketch follows this list. The disclosure must be "clear and distinguishable" and provided "at the latest at the time of the first interaction or exposure."
  3. Tailor for context. Use the artistic carve-out where applicable, but ensure that the disclosure still happens — just in an adjusted form.
  4. Consider GDPR. A deepfake of a real person is processing of personal data and typically requires a lawful basis under Article 6 GDPR (often consent for non-public-figures, sometimes legitimate interest for satire of public figures with careful balancing).
  5. Document editorial review. If you are relying on the editorial-review carve-out for text, maintain workflow records demonstrating that natural or legal persons hold editorial responsibility.
  6. Plan for takedowns. Even when you have complied with Article 50, depicted individuals may have rights under GDPR, copyright, image rights, or national civil law. A documented process for handling complaints is good practice.
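
For the visible-label pattern in step 2, the following Pillow sketch renders a high-contrast disclosure badge onto an image frame. The file names are hypothetical; for video, the same label would typically be rendered as a persistent or opening overlay.

```python
from PIL import Image, ImageDraw

frame = Image.open("deepfake_frame.png").convert("RGB")
draw = ImageDraw.Draw(frame)

label = "AI-generated content"
x, y = 12, frame.height - 28  # bottom-left placement

# A solid box behind white text keeps the label legible on any
# background, supporting the "clear and distinguishable" requirement.
left, top, right, bottom = draw.textbbox((x, y), label)
draw.rectangle((left - 4, top - 2, right + 4, bottom + 2), fill="black")
draw.text((x, y), label, fill="white")

frame.save("deepfake_frame_labelled.png")
```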

Interaction with the Digital Services Act

Platforms hosting user-generated deepfakes face additional obligations under the Digital Services Act:

  • Very Large Online Platforms (VLOPs) must assess and mitigate systemic risks including disinformation
  • Notice-and-action obligations require prompt response to reports of illegal content
  • Trusted-flagger frameworks prioritise certain notifiers' takedown requests

The AI Act and DSA are complementary, not duplicative. The AI Act sets baseline transparency for AI-generated content; the DSA addresses platform-level systemic-risk mitigation.

Practical Examples

Example 1. A consumer-facing AI app generates synthetic portraits from a user prompt. The model provider is upstream and must implement marking under Article 50(2). The Article 50(4) disclosure falls on the deployer, which, depending on the workflow, may be the app operator or the platform hosting the output; a private user acting in a purely personal, non-professional capacity falls outside the Act's definition of deployer (Article 3(4)).

Example 2. A newsroom uses an AI tool to generate article drafts. A human editor reviews and signs off every published piece. Article 50(4) disclosure is not required for the published text due to the editorial-review carve-out — but if the same newsroom publishes an AI-generated video segment, that segment is a deepfake (Article 3(60)) and Article 50(4) disclosure applies.

Example 3. A film production company creates a deepfake of a deceased actor for a fictional film. Article 50(4) applies, but the artistic carve-out limits the form of disclosure to credits and accompanying material. GDPR and image-rights considerations may impose additional requirements separately.

Example 4. A political campaign creates a deepfake video of a competing candidate appearing to make damaging statements. Article 50(4) disclosure applies. The video may additionally be high-risk under Annex III, point 8(b) (election influence) — triggering full Articles 8–15 compliance — and, if designed to materially distort voter behaviour to cause significant harm, prohibited under Article 5(1)(a).

Conclusion

Deepfakes are not banned in the EU, but they are not unregulated either. The combination of provider-level marking obligations (Article 50(2)), deployer-level disclosure obligations (Article 50(4)), and the manipulation prohibition (Article 5(1)(a)) creates a layered regime that catches most realistic misuses without preventing legitimate creative, educational, or commercial use.

For a broader view of how transparency obligations work across all limited-risk AI systems, see transparency obligations under the EU AI Act. For more on how chatbots and other conversational AI interact with the same regime, see chatbots and the EU AI Act.

Frequently Asked Questions

Are deepfakes banned under the EU AI Act?

No, not as such. Deepfakes are regulated under Article 50(4) of the EU AI Act, which requires deployers to disclose when image, audio, or video content has been artificially generated or manipulated and constitutes a deepfake. The disclosure must be clear and distinguishable at the latest at the time of the first interaction or exposure. A deepfake that uses manipulative or deceptive techniques to cause significant harm may, however, separately fall under the Article 5 prohibition on manipulative AI.

Who is responsible for labelling deepfakes — the model provider or the platform?

Two layers of obligation apply. Article 50(2) requires providers of AI systems that generate synthetic content to ensure that the outputs are marked in a machine-readable format and detectable as artificially generated. Article 50(4) requires deployers — typically the platforms or end-users who publish the content — to disclose to viewers that the content has been artificially generated or manipulated. Both apply; the technical mark and the user-facing disclosure are distinct obligations.

Does the deepfake disclosure obligation apply to satire and art?

Not entirely. For deepfakes that form part of 'evidently artistic, creative, satirical, fictional or analogous' works, Article 50(4) limits rather than removes the obligation: disclosure is still required, but 'in a manner that does not hamper the display or enjoyment of the work.' In practice, this means a credit line, end-card, or attribution suffices for artistic works rather than an obstructive overlay.

What technical marking is required for AI-generated content?

Article 50(2) requires providers to ensure outputs are marked in a machine-readable format and detectable as artificially generated. Article 50(7) tasks the AI Office with facilitating codes of practice on detection and labelling, which the Commission may approve by implementing act. Typical approaches include provenance metadata (notably the C2PA Content Credentials standard), cryptographic watermarks, metadata tags, and machine-learning detection signatures. The marking must be sufficiently robust, interoperable, and effective.

Can I use AI-generated content in news articles?

Yes, but with disclosure. Article 50(4) specifically applies to AI-generated text content 'published with the purpose of informing the public on matters of public interest.' For such text, deployers must disclose that the text has been artificially generated or manipulated, unless the content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility.
