AI Images and the EU AI Act: What Publishers Actually Need to Know
August 2, 2026. That's when Article 50 of the EU AI Act becomes enforceable. If you publish AI-generated images — as a marketing team, a media outlet, a freelancer, or a business — this law applies to you. Here's what it actually requires, why the most common assumption is wrong, and what you can do about it. No jargon required.
The short version
- EU AI Act Article 50 requires that AI-generated images remain machine-readably marked and verifiable — not just at creation, but through your entire publishing workflow.
- Your AI tool (DALL·E, Midjourney, Firefly) signs the creation. You, the publisher, are legally responsible for the disclosure surviving distribution.
- In practice, embedded metadata is destroyed by CMS, CDN, and social media pipelines — often within seconds of publishing.
- Four independent protection layers — C2PA, invisible watermark, fingerprint database, blockchain anchor — fill this gap. No single layer survives everything. Together, they cover almost every real-world scenario.
What the EU AI Act actually says
On August 1, 2024, Regulation (EU) 2024/1689 — the EU AI Act — entered into force. The provisions most relevant to publishers are in Article 50, and they become enforceable on August 2, 2026.
The core obligation, in plain terms: if you publish AI-generated or AI-manipulated images, those images must be marked in a machine-readable format so that anyone — a browser, a compliance tool, a regulator — can detect that they are AI-generated. The marking must be verifiable. And it must survive long enough to be useful.
What the law is specific about
- ✅ AI-generated images must be machine-readably marked
- ✅ The marking must be verifiable by standard tools
- ✅ The publisher (deployer) is responsible — not just the AI tool provider
- ✅ Non-compliance can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher (Art. 99)
What the law leaves open
- — No specific technology is mandated (no required watermark format, no required metadata fields)
- — No prescribed data structure — C2PA is the de-facto standard but not legally mandated
- — Technical implementation details are delegated to implementing acts and standards bodies (CEN/CENELEC), which are still being finalized
The law defines the outcome. How you get there is your choice. The catch: most publishers assume the AI tool has already handled it. It hasn't — at least not completely.
→ Full legal breakdown: EU AI Act Article 50: What Every Publisher Needs to Know
The distinction most publishers miss: creator vs. publisher
When DALL·E generates an image, OpenAI embeds a C2PA digital signature into the file. This signature says: "This image was created by DALL·E, by OpenAI, at this timestamp." That is OpenAI fulfilling their obligation as the AI system provider.
But Article 50(4) places the compliance burden on the deployer — the entity that publishes and distributes the content. That's you. Your organization. Your marketing team or editorial department. The AI tool's signature covers the creation. Your signature covers the publication. Under the law, both are needed.
AI Tool (Creator)
Signs the creation of the image. Identifies the tool and its provider. Obligation fulfilled at generation time. This is OpenAI's, Adobe's, or Google's responsibility.
Publisher (You)
Responsible for ensuring the marking survives distribution. Identifies your organization as the publisher. Must remain verifiable after CMS, CDN, and social media processing. This is your responsibility.
In short: the AI tool did its part. Now you need to do yours.
→ Why this matters legally: Why Your AI Image Is Not Compliant — Even with C2PA
The uncomfortable truth: embedded metadata doesn't survive
Even if you add a perfect publisher signature to your image, there's a structural problem: the publishing infrastructure you rely on every day routinely destroys embedded metadata.
We tested this directly. A MarkMyAI-signed PNG image with a valid embedded C2PA publisher manifest was subjected to ten common publishing transformations:
C2PA survival rate across 10 common transformations
❌ PNG re-compression
❌ Resize to 50%
❌ Crop (center 80%)
❌ Rotate 90°
❌ Convert to JPEG (Q90)
❌ Convert to JPEG (Q50)
❌ Convert to WebP
❌ Metadata strip
❌ Social media pipeline
❌ Re-save in Photoshop
Result: the C2PA manifest survived 0 of 10 transformations. The metadata was destroyed in every case.
This isn't a flaw in C2PA as a standard — it's a structural reality of how image processing works. CMS platforms like WordPress re-encode images to generate thumbnails. CDNs optimize images for bandwidth. Social platforms recompress everything on upload. Each of these steps discards the JUMBF containers that hold C2PA manifests.
C2PA is evolving to address this — its specification includes recovery mechanisms like remote manifests. But as of 2026, these recovery paths are not yet widely implemented in the pipelines publishers actually use.
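The mechanics of this failure can be sketched in a few lines of Python with Pillow. Pillow cannot write real C2PA/JUMBF boxes, so a PNG text chunk stands in for the manifest here — but the failure mode is the same one described above: once a pipeline decodes the pixels and re-encodes them into a new container, container-level metadata simply isn't carried along.

```python
from io import BytesIO
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a small image and embed a text chunk standing in for a
# C2PA manifest (Pillow cannot write real JUMBF/C2PA boxes).
img = Image.new("RGB", (64, 64), "white")
meta = PngInfo()
meta.add_text("c2pa-manifest", "publisher=ExampleCorp; ai-generated=true")

buf = BytesIO()
img.save(buf, format="PNG", pnginfo=meta)
buf.seek(0)

signed = Image.open(buf)
print(signed.text.get("c2pa-manifest"))  # metadata is present in the PNG

# Simulate a typical CMS/CDN step: decode the pixels, re-encode as JPEG.
out = BytesIO()
signed.convert("RGB").save(out, format="JPEG", quality=90)
out.seek(0)

republished = Image.open(out)
# JPEG has no concept of PNG text chunks; the "manifest" is gone.
print(getattr(republished, "text", {}))
```

Nothing malicious happened here — the encoder faithfully wrote every pixel it was given. The metadata was lost because it lived in the container, not in the image content.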
→ See what happens in practice: We Sent an AI Image via WhatsApp. Here's What Happened. · Why CMS and CDN Pipelines Break AI Image Provenance
The solution: four independent protection layers
No single technology survives every real-world scenario. The answer is four independent layers — each solving a different piece of the problem, each covering for the others when one fails. Here's what each one does, in plain terms.
C2PA Publisher Signature — the label sewn into the garment
A cryptographic certificate embedded directly in the image file, following the open C2PA standard. It records who published the image, when, and that it was AI-generated. Any C2PA-compatible tool — Adobe, Google, Microsoft — can read it.
Think of it like the brand label sewn into a garment. Whoever finds it knows immediately: made by X, published by Y, on this date. As long as the label survives the journey, the proof is right there in the file.
✅ Strong when intact
Cryptographically verified publisher identity, timestamp, and AI generation status.
❌ Fragile in practice
Destroyed by any re-encoding step: CMS thumbnails, CDN optimization, social upload.
Invisible Watermark — UV ink on a banknote
The image pixels are modified imperceptibly — invisible to the human eye, but detectable by software. Using TrustMark, a neural watermarking model developed by Adobe Research (MIT-licensed), a unique reference token is embedded directly into the pixel data. This token links back to the original proof record.
Think of it like UV ink on a banknote. Without the right equipment, you see nothing. With a UV scanner, the hidden mark is immediately readable — even if the note has been folded, crumpled, or slightly worn.
✅ Survives real-world distribution
JPEG compression, resizing, format conversion (PNG → WebP), moderate cropping. In our real-world tests: 91% recovery rate across WordPress and Cloudinary pipelines.
❌ Has a physical limit
Images below ~150×150 px have too few pixels to reliably carry the signal. Screenshots and heavy manipulation combinations can also destroy it.
→ Technical detail: Our watermark benchmark results
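TrustMark itself is a trained neural encoder, but the basic idea — hiding a payload in pixel values too subtly to notice — can be illustrated with a deliberately simplified least-significant-bit scheme. One caveat up front: unlike TrustMark, plain LSB embedding does not survive compression; this sketch shows only the embed/extract principle, not robustness.

```python
def embed_bits(pixels, payload_bits):
    """Hide payload bits in the least-significant bit of each pixel value.

    Toy illustration only: real systems like TrustMark use trained neural
    encoders so the signal survives compression; plain LSB does not.
    """
    assert len(payload_bits) <= len(pixels)
    out = list(pixels)
    for i, bit in enumerate(payload_bits):
        out[i] = (out[i] & ~1) | bit  # changes each value by at most 1
    return out

def extract_bits(pixels, n):
    """Read the hidden payload back out of the first n pixel values."""
    return [p & 1 for p in pixels[:n]]

# A flat grey "image" and an 8-bit reference token.
image = [128] * 64
token = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed_bits(image, token)
# The change is imperceptible: every pixel value differs by at most 1.
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
print(extract_bits(marked, 8))  # → [1, 0, 1, 1, 0, 0, 1, 0]
```

A real watermark like TrustMark spreads the token redundantly across the whole image in a learned transform domain, which is what buys the survival rates quoted above.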
Audit Trail + Perceptual Fingerprint — a visual identity card on file
When an image is marked, two things are stored on an external server: a cryptographic SHA-256 hash of the image data, and a perceptual hash — a compressed "visual summary" of what the image looks like, regardless of format or exact pixel values.
Think of it like a police photo-ID on file. Even if the suspect has aged, dyed their hair, or lost weight — the face is still recognizable. When someone uploads a compressed, resized version of your image to our checker, we compare its visual summary against every fingerprint in the database. If the match is close enough, we recover the original proof — even if both the C2PA and the watermark are gone.
✅ Independent of the image file
Works even when all embedded data has been stripped. Recovers provenance from thumbnails, screenshots, and re-encoded copies.
❌ Requires our server
This layer depends on the MarkMyAI database being online. If the service goes offline, this recovery path is unavailable. The Proof PDF serves as an offline backup.
Blockchain Anchor — the permanent notary
At the moment of marking, a transaction is written to the Polygon blockchain — a public, decentralized database that nobody controls and nobody can alter. The transaction records: the image's unique mark ID, its SHA-256 hash, its perceptual fingerprint, and a pseudonymized reference to the publisher and AI model used. The timestamp is set by the blockchain itself.
Think of it like a notarized document entered into the land registry. The notary (MarkMyAI) may eventually close their office — but the entry in the land registry (Polygon) remains. Anyone can look it up on polygonscan.com with the transaction hash from the Proof PDF. No MarkMyAI account required. No server call required. Forever.
✅ Permanent and independent
Cannot be altered or deleted. Exists independently of MarkMyAI. Survives even if the service shuts down entirely.
⚡ Paid plans only
Blockchain anchoring is available on Starter and above. Free plan includes the other three layers.
→ Why Polygon: Why Every MarkMyAI Record Is Permanently Anchored on Polygon
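Conceptually, the anchored record is just a small, canonical bundle of hashes. The field names in this sketch are illustrative assumptions (the actual on-chain format isn't specified in this article), but the verification logic — recompute, then compare against the chain — is the point:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class AnchorRecord:
    """Hypothetical shape of the data anchored per image; the real
    on-chain format used by MarkMyAI may differ."""
    mark_id: str        # unique ID of this marking event
    sha256: str         # exact hash of the marked image bytes
    fingerprint: str    # perceptual fingerprint, hex-encoded
    publisher_ref: str  # pseudonymized publisher reference
    model: str          # AI model used

def anchor_digest(record: AnchorRecord) -> str:
    """Canonical digest of the record. In production, this digest (or the
    record itself) would be written into a Polygon transaction; the
    chain's block timestamp then dates the proof."""
    canonical = json.dumps(asdict(record), sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

image_bytes = b"...the marked image bytes..."
record = AnchorRecord(
    mark_id="mk_0001",
    sha256=hashlib.sha256(image_bytes).hexdigest(),
    fingerprint="a3f0c18b",
    publisher_ref="pub_7f2e",  # pseudonym, not the raw identity
    model="example-model",
)

digest = anchor_digest(record)
# Verification years later: recompute the image hash and the record digest,
# then compare against the transaction referenced in the Proof PDF.
assert record.sha256 == hashlib.sha256(image_bytes).hexdigest()
print(digest[:16])
```

Because the digest is deterministic, anyone holding the image and the Proof PDF can redo this computation with no account and no server — only the public chain entry is needed.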
What survives what — in practice
This is the key table. For each common distribution scenario, here's which layers remain active and whether proof is still recoverable:
| Scenario | C2PA | Watermark | Fingerprint | Blockchain | Proof? |
|---|---|---|---|---|---|
| Original image, untouched | ✅ | ✅ | ✅ | ✅ | Full |
| Shared via WhatsApp or iMessage | ❌ | ✅ | ✅ | ✅ | Yes |
| Published via WordPress (thumbnail) | ❌ | ✅ | ✅ | ✅ | Yes |
| Uploaded to Instagram | ❌ | ✅ | ✅ | ✅ | Yes |
| Saved as screenshot | ❌ | ❌ | ~ | ✅ | Partial |
| Heavy crop + aggressive JPEG | ❌ | ❌ | ~ | ✅ | Partial |
| MarkMyAI servers offline | ✅ | ✅ | ❌ | ✅ | Yes |
Fingerprint (~) = fuzzy visual matching may still find a close enough match, depending on how much the image has been transformed. Results vary by image content and transformation parameters.
The three questions you must be able to answer
Forget the technology for a moment. A compliance-ready posture means you can answer these three questions about any AI-generated image you've published:
Did you mark it?
Was a machine-readable publisher signature added before or at the time of publication? Not a caption — a verifiable, technical marker that identifies your organization as the deployer.
Does it survive your pipeline?
After your CMS, your CDN, and your social media channels have processed it — is there still at least one layer of verifiable proof attached to or recoverable from the image? This is where most publishers currently fail.
Can you prove it?
If a regulator, a journalist, or a court asks you in two years: "Is this image AI-generated, and did you publish it?" — can you produce documented, independently verifiable evidence? A Proof PDF with a blockchain transaction hash is exactly that.
What to do now
The August 2, 2026 deadline gives you roughly five months to build a compliant workflow. Here's the simplest path:
For developers and automated pipelines: Use the MarkMyAI REST API — one call marks an image with all four layers.
For WordPress sites: Install the MarkMyAI WordPress Plugin — it marks every uploaded image automatically, with no manual steps required.
For teams without developers: Use the web dashboard — drag and drop, download the marked image and Proof PDF.
For verifying existing images: Upload any image at markmyai.com/check — free, no account required. The checker runs all four detection layers and returns a clear result.
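For the API route, a marking request might be assembled roughly as follows. The endpoint URL, path, and headers below are assumptions for illustration only — consult the official MarkMyAI API documentation for the real contract:

```python
from urllib import request

API_BASE = "https://api.markmyai.com/v1"  # hypothetical base URL --
                                          # check the real API docs

def build_mark_request(image_bytes: bytes, api_key: str) -> request.Request:
    """Assemble (but do not send) a marking request. The URL and headers
    are illustrative assumptions, not the documented MarkMyAI API."""
    return request.Request(
        f"{API_BASE}/marks",
        data=image_bytes,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )

req = build_mark_request(b"\x89PNG...image bytes...", "demo-key")
print(req.full_url)

# Sending it would then be a single call, e.g.:
# with request.urlopen(req) as resp:
#     proof = resp.read()  # marked image reference + Proof PDF details
```

The one-call shape is the takeaway: whatever the exact endpoint looks like, marking should happen at upload time, before your CMS or CDN ever touches the file.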
Pilot program
We offer free pilot access for organizations that want to test MarkMyAI in their production workflow before committing. Three months on the Business plan, no cost. Reach out at dominic@tschansolutions.com.
Go deeper
EU AI Act Article 50: The Full Legal Breakdown
Deep dive into what Article 50 says, what it delegates, and what enforcement looks like from August 2026.
Why Your AI Image Is Not Compliant — Even with C2PA
The AI tool's C2PA is not your publisher signature. Here's the exact difference and why it matters legally.
C2PA vs. Watermark vs. Blockchain: Which Layer Does What?
A technical side-by-side: what each of the four protection layers proves and where each one breaks.
We Sent an AI Image via WhatsApp. Here's What Happened.
Real experiment: full C2PA embedded, zero metadata survived. Platform-by-platform results.
AI Image Provenance: The Complete Guide for Publishers
The full technical deep-dive: every layer, every failure mode, and what a compliant workflow looks like.