Compliance · March 16, 2026 · 6 min read

WordPress and the EU AI Act: What Publishers Must Do Before August 2026

WordPress powers 43% of the web. A large share of those sites now publish AI-generated images — blog illustrations, product visuals, social thumbnails. Most of them are not EU AI Act compliant. Not because their owners are careless, but because WordPress works in a way that quietly breaks every embedded proof signal the moment an image goes live.

The obligation in plain language

Article 50 of the EU AI Act requires that AI-generated images remain machine-readably marked and verifiable after publication. Not just labeled with a caption or a tooltip — technically marked, in a way that software can detect. The obligation falls on whoever deploys the image to end users. As a publisher, that is you.

The deadline is August 2, 2026. After that, national market surveillance authorities across all 27 EU member states can investigate and impose fines up to €15 million or 3% of global annual turnover.

The law does not specify a particular technology. It requires that the marking be detectable by automated tools. In practice, this means an embedded cryptographic signature (like C2PA), an invisible watermark, or — ideally — both, backed by a verifiable proof record.

Who is affected?

Any person or organization that publishes AI-generated images to audiences in the EU — regardless of where the publisher is based. This includes bloggers, news publishers, e-commerce sites, agencies, and SaaS companies. The rule applies when the AI system is used to generate content for the purpose of informing, entertaining, or influencing natural persons. That covers almost every blog post illustration.

Why WordPress creates a specific compliance problem

WordPress is excellent at managing and delivering images. It is, however, not designed to preserve embedded metadata through its delivery pipeline. When you upload an image, WordPress immediately generates several resized variants — thumbnail, medium, large, full — and encodes each one as a new JPEG. This process is intentional and useful for performance. It is also destructive for provenance.

Here is the sequence that breaks compliance:

1. You upload an AI-generated image. It may already have a C2PA signature from ChatGPT or Midjourney.

2. WordPress re-encodes the image into multiple sizes. Each re-encode strips all embedded metadata, including C2PA.

3. You insert the image in a post. WordPress (or your page builder) picks the best-fit size based on the visitor's screen.

4. Your visitor receives a thumbnail — a fresh JPEG with no signature, no watermark, no provenance signal of any kind.

The original file in your Media Library stays intact. But your visitors never see that file. They see the thumbnail WordPress generated from it — a file that contains no proof of its AI origin.
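The failure mode can be sketched as a toy model in plain Python (no real JPEG codec involved): pixel data passes through every re-encode, but container-level metadata does not, because each resize writes a fresh file from pixels alone.

```python
# Toy model of the WordPress delivery pipeline (illustration only):
# an "image" here is just pixel data plus container metadata.

def generate_ai_image():
    """An upload as it might arrive from an AI tool: pixels + C2PA manifest."""
    return {
        "pixels": "original-pixel-data",
        "metadata": {"c2pa": "signed-manifest-from-ai-tool"},
    }

def wordpress_reencode(image, width):
    """Resizing re-encodes to a fresh JPEG; embedded metadata is not copied."""
    return {
        "pixels": f"{image['pixels']}@{width}px",
        "metadata": {},  # the re-encode starts from pixel data only
    }

upload = generate_ai_image()
thumbnails = {w: wordpress_reencode(upload, w) for w in (150, 300, 768, 1024)}

assert "c2pa" in upload["metadata"]  # the original in the Media Library
assert all(t["metadata"] == {} for t in thumbnails.values())  # what visitors get
```

The model is deliberately crude, but the asymmetry it shows is exactly the one that matters: the proof lives in the container, and the container is rebuilt on every resize.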

The misconception that leaves most publishers exposed

Many publishers believe they are covered because their AI tool already adds a C2PA signature. ChatGPT adds one. Adobe Firefly adds one. This is not enough — for two reasons.

First, that signature belongs to the AI provider, not to you as the publisher. The tool signature proves where the image was generated. It does not prove you published it — and Article 50 places the transparency obligation explicitly on whoever deploys the output to end users.

A note on legal interpretation: Article 50(4) of Regulation (EU) 2024/1689 requires that AI-generated synthetic content be marked in a machine-readable format detectable by automated tools. Whether that marking must originate from the deployer specifically, or whether preserving the AI tool's original signature would suffice, is a matter of ongoing legal interpretation — the text does not use the term "publisher signature." Our reading, and the more conservative approach, is that the deployer bears responsibility for ensuring a valid, detectable marking reaches the end user. If the tool's C2PA is stripped before delivery, the obligation is not met regardless of what was in the original file.

Second, even if the tool signature were sufficient, WordPress would strip it in the re-encoding step. The signature embedded by ChatGPT does not survive a 1024px JPEG resize.

How we tested this — and what we found

We ran C2PA signatures through twelve common image transformations: JPEG re-compression at various quality levels, resize to 300px, 768px, and 1024px, CDN optimization via Cloudinary, social media re-upload, and WordPress thumbnail generation. C2PA survived zero of twelve. Not one transformation left the signature intact.

Invisible watermarks (using TrustMark) performed very differently: in the same real-world pipeline tests, the watermark survived 91% of transformations across 366 decode attempts — including WordPress thumbnails down to 300px width.

Read the full benchmark results →

This does not mean C2PA is useless. It means it cannot be the only layer you rely on — particularly on a CMS that re-encodes images on upload. C2PA on the original file is still valuable: it proves the origin at the point of creation. The challenge is ensuring something verifiable also reaches your visitors.

What a robust solution needs to do

Given the constraints of the WordPress delivery pipeline, a compliance solution needs to satisfy three requirements independently of whether C2PA survives to the visitor:

Publisher-signed proof at creation: The C2PA certificate must name you — not just the AI tool. It must be embedded in the original file at the moment of marking.

A signal that survives re-encoding: An invisible watermark or pixel-level token that remains decodable after JPEG compression and thumbnail resizing. This is what reaches your visitors.

Verifiable from the delivered image: A proof record that can be matched from a downloaded thumbnail or screenshot — independent of whether the file still carries embedded metadata.

A blockchain anchor is a fourth layer worth having on top of these three: an immutable timestamp that exists independently of any server, file, or service. It does not help detection, but it is the strongest possible evidence of when a proof record was created.
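The anchoring step itself is simple to illustrate. A minimal sketch, assuming the proof record is a JSON document with hypothetical field names: the record is canonicalized, hashed, and only that digest (plus a timestamp) would be written to the chain, never the record or the image.

```python
import hashlib
import json

def anchor_digest(proof_record: dict) -> str:
    """Canonicalize a proof record and hash it. Only this digest would be
    anchored on-chain; anyone holding the record can recompute and compare."""
    canonical = json.dumps(proof_record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {  # hypothetical proof-record fields
    "mark_id": "mm-0001",
    "publisher": "example.org",
    "created": "2026-03-16T09:00:00Z",
}

# The same record always yields the same digest, so a chain entry made at
# anchoring time later proves the record existed unchanged at that moment.
assert anchor_digest(record) == anchor_digest(dict(record))
```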

Checklist: what compliance looks like in practice

Publisher-signed C2PA: Your name and organization in the certificate — not just the AI tool's
Invisible watermark: A pixel-level signal that survives JPEG compression and resizing
Proof record: A server-side audit log tied to a specific mark ID — detectable even from screenshots
Blockchain anchor: An immutable timestamp that exists independently of your server
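Taken together, the checklist means verification from a delivered thumbnail reduces to a lookup: decode the mark ID from the pixels (the watermark layer) and match it against the server-side proof store. A minimal sketch; the decoder and the store are stand-ins, not real implementations.

```python
# Sketch of thumbnail/screenshot verification. The decoder is a stand-in
# (a real one reads the ID out of the pixel data); the dict stands in for
# the server-side audit log.

PROOF_STORE = {
    "mm-0001": {"publisher": "example.org", "created": "2026-03-16T09:00:00Z"},
}

def decode_watermark(pixels: bytes):
    """Stand-in decoder: pretend the mark ID survives in the pixel data."""
    marker = b"wm:"
    if marker in pixels:
        start = pixels.index(marker) + len(marker)
        return pixels[start:start + 7].decode("ascii")
    return None

def verify(pixels: bytes):
    """Return the proof record for a delivered image, or None."""
    mark_id = decode_watermark(pixels)
    return PROOF_STORE.get(mark_id) if mark_id else None

assert verify(b"...jpeg-bytes...wm:mm-0001...")["publisher"] == "example.org"
assert verify(b"...unmarked-jpeg...") is None
```

The point of the design is in the last two lines: the check works on whatever bytes the visitor actually received, with no dependency on surviving container metadata.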

How this works in WordPress

The MarkMyAI WordPress Plugin implements the approach above. When you mark an image in the Media Library, the plugin sends it to our API, which embeds an invisible watermark into the pixel data, signs the result with a publisher C2PA certificate in your name, and stores a proof record. The marked file replaces the original in WordPress. Every thumbnail WordPress generates from that point forward derives from the marked source — so the watermark is present in every size.
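As an illustration of the shape of that call, here is how a marking request might be constructed in Python with the standard library. The endpoint, header name, and payload fields are hypothetical (the real contract is defined by the API, not shown here), and the request is only built, never sent.

```python
import base64
import json
import urllib.request

API_URL = "https://api.example.com/v1/mark"  # hypothetical endpoint

def build_mark_request(image_bytes: bytes, api_key: str) -> urllib.request.Request:
    """Construct (but do not send) a marking request: the original goes up,
    and a watermarked, publisher-signed copy would come back to replace it."""
    payload = json.dumps({
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
        "options": {"watermark": True, "c2pa": True},  # hypothetical fields
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json", "X-Api-Key": api_key},
        method="POST",
    )

req = build_mark_request(b"\xff\xd8...jpeg...", "demo-key")
assert req.get_method() == "POST"
```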

For publishers who also need C2PA to survive to the visitor — for example to pass detection by automated checkers — the plugin includes a setting called Preserve Proof. When enabled, WordPress serves the full-size marked original for every marked image instead of a resized thumbnail. This works with standard WordPress themes and all major page builders.

For a detailed walkthrough of the complete pipeline — from upload to verified proof — see the Plugin User Guide.

A practical note on enforcement timing

August 2026 is not a hard cutoff after which every non-compliant publisher immediately faces a fine. Enforcement begins then, and authorities will prioritize. In practice, early enforcement is likely to focus on large publishers, news organizations, and cases where non-compliance causes measurable harm — deepfakes, election interference, consumer deception.

This does not mean smaller publishers should wait. The compliance gap is a reputational risk even before fines become relevant. If a reader can verify that your images are properly marked and attributed, that is a trust signal. If they cannot — or if verification fails because a thumbnail stripped every signal — that is a liability.

The practical window to get this right is the next few months. The technical work is straightforward. The harder part is understanding that the obligation applies not just at the moment of creation, but at the moment of delivery — and that WordPress, by default, delivers a different file than the one you originally marked.

Getting started

The MarkMyAI WordPress Plugin is free to install. A free API key gives you 20 marks per month — enough to test the full pipeline on a live site before committing to a plan.

Publishing at scale?

If you manage a larger editorial team, a news site, or an agency with multiple WordPress installations, the compliance picture gets more complex — volume, consistent workflows, audit documentation. We are happy to talk through what that looks like in practice.

Get in touch →