EU AI Act · February 19, 2026 · 8 min read

EU AI Act Article 50: What Every Publisher Needs to Know Before August 2026

The deadline is August 2, 2026. Here's what Article 50 actually requires, what "machine-readable marking" means in practice, and why most AI-generated images are not compliant today — even if they already have C2PA metadata.

TL;DR

  • From August 2, 2026, AI-generated images must be machine-readably marked (EU AI Act Art. 50).
  • "Machine-readable" means more than a watermark or a label: it means verifiable provenance tied to the publisher.
  • The AI tool (DALL·E, Midjourney) marks the image as AI-generated. That is not enough. The publisher must add their signature.
  • C2PA from the AI tool gets stripped when you share the image via WhatsApp, Instagram, or most social platforms.
  • A publisher-signed audit trail + remote manifest survives stripping. That's what MarkMyAI does.

What is Article 50, exactly?

When we started working on MarkMyAI, our initial read of the law was straightforward: AI tools embed C2PA, publishers add a label, done. It took us an embarrassingly long time to realize that those are two entirely separate obligations on two different legal entities — and that the AI tool's compliance doesn't cover yours at all.

Article 50 of Regulation (EU) 2024/1689 — the EU AI Act — imposes transparency obligations on providers and deployers of certain AI systems. It entered into force on August 1, 2024 and its transparency obligations apply from August 2, 2026.

The key paragraph for publishers is Article 50(2):

"Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."

And for deployers (i.e. you, the publisher), Article 50(4) adds:

"Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated."

Two different obligations: the AI provider must mark, the deployer must disclose. If you use DALL·E to create a campaign image, you are a deployer. The obligation is yours.

What does "machine-readable marking" actually mean?

The law itself doesn't specify the technical format — it delegates that to a Code of Practice. The European Commission published the first draft on December 17, 2025. A second draft is expected in March 2026, with finalization toward June 2026. We're watching both drafts closely, because some of what we've already built may need adjustment once the final version lands.

The draft Code of Practice explicitly adopts a multi-layered approach because no single technique is sufficient. It identifies four techniques:

1. Metadata: digitally signed provenance information linked to the moment of generation (the C2PA standard). Must be integrity-verified.

2. Imperceptible watermarking: woven into the pixel data of the image. Must survive compression and basic editing.

3. Fingerprinting or logging: a fallback mechanism, using perceptual hash matching or a server-side audit log that can identify the image even after metadata stripping.

4. Provenance certificate: for content where embedding is difficult, a remote certificate that proves origin.

For v1, MarkMyAI implements layers 1, 3, and 4 — C2PA embedded manifest, fingerprint database, and remote manifest. Layer 2 (invisible watermarking) is on the roadmap for v2, but we've deliberately not rushed it: invisible watermarking is an area where the marketing is significantly ahead of the robustness guarantees. We'd rather ship what we know works.
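The perceptual fingerprint in layer 3 can be illustrated with a difference hash (dHash), one common 64-bit perceptual hashing scheme. This is a minimal sketch, not MarkMyAI's actual algorithm; a real implementation would first decode the image and downscale it to a 9×8 grayscale grid, which `pixels` is assumed to already be here.

```python
def dhash(pixels):
    """Compute a 64-bit difference hash.

    pixels: 8 rows of 9 grayscale values (0-255). Each adjacent pixel
    pair contributes one bit: 1 if the left pixel is brighter than the
    right one. 8 rows x 8 comparisons = 64 bits.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a, b):
    """Number of differing bits; a small distance means 'probably the same image'."""
    return bin(a ^ b).count("1")
```

Because the hash is derived from relative brightness rather than exact bytes, re-compression or moderate resizing typically flips only a few bits, so a small Hamming distance against the database still identifies the image after metadata stripping.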

The crucial misunderstanding: AI tool marking ≠ publisher compliance

Here's a concrete example. We run petportraitgift.com — customers upload a photo of their pet, we generate a stylized watercolor portrait using AI and deliver it automatically. Fully AI-generated images going directly to end users. When we first checked our compliance, we saw that the AI model's C2PA metadata was intact. Green checkmark on contentcredentials.org. We thought we were fine.

We weren't. The AI tool had signed the image as its own output. We — the deployer, the publisher, the company with actual legal responsibility under Article 50(4) — had added nothing. Not a signature, not an audit record, nothing. This is where most publishers make the same mistake.

When an AI tool (OpenAI, Adobe Firefly, Midjourney) embeds C2PA metadata, the manifest says: "This image was generated by [AI Tool]." That's the AI provider fulfilling their obligation under Article 50(2).

Your obligation as a deployer under Article 50(4) is different: you must disclose that you are publishing AI-generated content. The C2PA manifest from the AI tool carries no information about you — your name, your organization, your intent, your publication date.

What the AI tool's C2PA says:

claim_generator: "dall-e-3", signer: "OpenAI"

What your compliance needs to say:

claim_generator: "MarkMyAI/1.0", signer: "Acme Corp", published: "2026-03-06", purpose: "marketing"

The EU AI Act wants a chain of accountability — from the AI tool to the publisher. The AI tool's signature covers the generation. Your signature covers the publication. Both are required.
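To make the distinction concrete, here is a hypothetical publisher-side claim written out as plain Python/JSON. The key names mirror the snippets above; a real C2PA manifest is a cryptographically signed JUMBF structure, and the nesting shown here is illustrative only.

```python
import json

# Hypothetical publisher claim. Field names follow the article's snippets;
# this is NOT the actual C2PA wire format, just the information it carries.
publisher_claim = {
    "claim_generator": "MarkMyAI/1.0",
    "signer": "Acme Corp",           # the deployer/publisher, not the AI provider
    "published": "2026-03-06",
    "purpose": "marketing",
    "ai_tool_claim": {               # the provider's own claim, preserved in the chain
        "claim_generator": "dall-e-3",
        "signer": "OpenAI",
    },
}

print(json.dumps(publisher_claim, indent=2))
```

The point of the structure: the provider's claim covers generation, the outer publisher claim covers publication, and keeping both establishes the chain of accountability the Act asks for.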

The WhatsApp problem: why embedded metadata isn't enough

Even if you add publisher-signed C2PA metadata, there's a fundamental problem: metadata gets stripped.

We tested this directly. We took a DALL·E image with full C2PA metadata intact — ContentCredentials.org showed all claims correctly. We sent it via WhatsApp. The recipient uploaded the same image to our checker. Result:

ORIGINAL (before WhatsApp):
✅ C2PA: FOUND · ✅ XMP: FOUND · ✅ Signer: identifiable

AFTER WhatsApp:
❌ C2PA: NOT FOUND · ❌ XMP: STRIPPED · ❌ Signer: unknown

WhatsApp compresses and re-encodes images, stripping all metadata in the process. Instagram does the same. X (Twitter) does the same. In practice, the moment an AI-generated image enters a social sharing pipeline, the embedded C2PA is gone.
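You can reproduce a rough version of this test yourself. The sketch below does a crude byte scan of a JPEG for the XMP namespace URI (carried in APP1 segments) and the "c2pa" JUMBF box label (carried in APP11). It is a heuristic, not a real segment parser, and the function name is ours, not a standard API.

```python
def scan_provenance_markers(jpeg_bytes: bytes) -> dict:
    """Crude heuristic scan for embedded provenance markers in a JPEG.

    C2PA manifests are embedded as JUMBF boxes in APP11 segments; XMP
    packets live in APP1 segments identified by a namespace URI. A raw
    byte search finds both signatures without parsing segment structure.
    """
    return {
        "xmp": b"http://ns.adobe.com/xap/1.0/" in jpeg_bytes,
        "jumbf_c2pa": b"c2pa" in jpeg_bytes,
    }
```

Run it on the same image before and after a WhatsApp round-trip: the original reports both markers, the re-encoded copy reports neither.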

The Code of Practice acknowledges this: it expects a fingerprinting/logging fallback precisely because metadata stripping is the default behavior of most distribution channels.

The two-layer solution

MarkMyAI implements compliance as two independent layers that work together:

🔐

Layer 1: Embedded C2PA Manifest

A publisher-signed C2PA claim is embedded directly into the image file. It carries your organization name, AI model used, publication date, and intent. Readable by Adobe Inspect, ContentCredentials.org, and any C2PA-compliant tool.

Limitation: Gets stripped by social media, messaging apps, and most image CDNs.

🗄️

Layer 2: Audit Trail + Fingerprint Database

Every marked image generates a perceptual fingerprint (64-bit hash based on visual content) and a tamper-proof audit log entry with a hash chain. A permanent verify URL lets anyone prove the image's provenance — even after all metadata has been stripped.

Survives: WhatsApp, Instagram, X, JPEG re-compression, moderate resizing.

Together, these two layers provide the combination of embedded provenance and persistent remote evidence that the Code of Practice describes.
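The hash-chained audit log in layer 2 can be sketched in a few lines: each entry commits to the hash of the previous entry, so editing any historical record invalidates every entry after it. Field names and structure here are illustrative, not MarkMyAI's actual schema.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry


def append_entry(chain, record):
    """Append a record, hashing it together with the previous entry's hash."""
    prev = chain[-1]["entry_hash"] if chain else GENESIS
    payload = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    entry_hash = hashlib.sha256(payload.encode()).hexdigest()
    chain.append({"prev": prev, "record": record, "entry_hash": entry_hash})
    return chain


def verify_chain(chain):
    """Recompute every hash; any edited record or broken link fails."""
    prev = GENESIS
    for entry in chain:
        payload = json.dumps({"prev": prev, "record": entry["record"]}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(payload.encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

Combined with a perceptual fingerprint stored in each record, such a log lets a verify URL prove provenance even for a copy of the image that carries no metadata at all.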

Who needs to comply, and by when?

| Who | Obligation | Deadline |
| --- | --- | --- |
| AI tool providers (OpenAI, Midjourney, Adobe) | Machine-readable marking of outputs | August 2, 2026 |
| Publishers / deployers using AI for images | Disclose AI-generated content, add publisher signature | August 2, 2026 |
| News organizations | Disclose AI text/images published on matters of public interest | August 2, 2026 |
| Marketing agencies | Any AI-generated visuals shared publicly | August 2, 2026 |

Exceptions exist for clearly artistic, satirical, or fictional works where disclosure is still required but can be made "in an appropriate manner that does not hamper the display or enjoyment of the work." Human editorial control may reduce the obligation for text, but not for images.

What happens if you don't comply?

The EU AI Act provides for significant fines. For violations of transparency obligations (Article 50), penalties can reach up to €15 million or 3% of global annual turnover, whichever is higher.

Beyond financial penalties, organizations that cannot demonstrate provenance of AI-generated images face reputational risk — particularly as regulators build enforcement capacity and as content authenticity tooling becomes mainstream.

A practical checklist for August 2026

  • Identify all workflows where AI-generated images are created and published
  • Confirm your AI tool provider offers C2PA marking (OpenAI, Adobe Firefly, and Stability AI do)
  • Add a publisher-signed C2PA manifest via MarkMyAI (or equivalent) before publishing
  • Store the verify_url alongside every published AI image in your CMS
  • Test: upload your published image to /check — does it show your signature?
  • Test: send the image via WhatsApp, re-upload to /check — does the audit trail still show your provenance?
  • Document your process: you need to be able to demonstrate editorial control

The bottom line

Article 50 is not about slapping a "Made with AI" label on an image. It's about building a verifiable, tamper-proof chain of provenance — from the AI tool that generated the image to the publisher who distributed it.

The technology to do this (C2PA, perceptual fingerprinting, audit trails) already exists and is production-ready. The deadline is in about 150 days. The question isn't whether you need to comply — it's whether your current publishing workflow can produce an audit trail that withstands regulatory scrutiny.

Check your AI images right now

Upload any image to our free compliance checker. See if it has C2PA metadata, who signed it, and whether it's publisher-signed — or just AI-tool signed.
