EU AI Act · February 10, 2026 · 6 min read

Why Your AI Image Is Not Compliant — Even If It Has C2PA

"Our images already have C2PA metadata from DALL·E — we're covered, right?" We hear this constantly. It's wrong. Here's exactly why, and what the EU AI Act actually requires from you as a publisher.

The Core Misconception

C2PA metadata from an AI tool (OpenAI, Adobe Firefly, Stability AI) represents the tool's claim about what it created. It does not represent your claim as the publisher about what you are distributing. Under EU AI Act Article 50, the deployer — that's you — must add their own signature. The tool's signature alone is legally insufficient.

What C2PA actually is — and what it isn't

Honest admission: when we started building MarkMyAI, we had this wrong too. We saw C2PA metadata in a DALL·E image, verified it on contentcredentials.org, and assumed that was the whole story. It took reading the actual legal text of Article 50 — specifically paragraphs (2) and (4) side by side — to realize they describe obligations on two completely different parties.

C2PA (Coalition for Content Provenance and Authenticity) is an open standard for embedding cryptographically signed provenance metadata into media files. A C2PA manifest is stored inside the file itself (in a JUMBF box for JPEG, or XMP for PNG) and contains a chain of claims about who created, modified, or published the content.
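The JUMBF detail above can be checked directly: in a JPEG, C2PA manifests travel inside APP11 (0xFFEB) marker segments. A minimal sketch of a segment scanner, assuming a well-formed file; the function name `find_app11_segments` is illustrative, and a real implementation would also parse the JUMBF box structure and reassemble manifests that span multiple segments:

```python
import struct

def find_app11_segments(jpeg_bytes: bytes) -> list[bytes]:
    """Walk JPEG marker segments and return the payloads of APP11 (0xFFEB)
    segments, which is where JUMBF boxes (and thus C2PA manifests) live."""
    segments = []
    i = 2  # skip SOI (FF D8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not at a marker: file is malformed or we lost sync
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows, stop
            break
        # every marker before SOS (except rare standalone ones) carries a
        # 2-byte big-endian length that includes the length field itself
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xEB:  # APP11
            segments.append(jpeg_bytes[i + 4:i + 2 + length])
        i += 2 + length
    return segments
```

If this returns an empty list for an image that supposedly came from DALL·E, the metadata has already been stripped somewhere along the way.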

When DALL·E generates an image and embeds C2PA metadata, that manifest contains one claim, signed by OpenAI's certificate. It says, in effect:

C2PA manifest — DALL·E generated image

claim_generator: "DALL-E/3.0"
signer: "OpenAI, Inc."
assertion: c2pa.ai_generative_training: false
action: c2pa.created (AI-generated)
publisher: — (not present)
distribution_date: — (not present)
publishing_organization: — (not present)
purpose: — (not present)

The manifest tells us OpenAI created the image. It tells us nothing about who is publishing it, when, where, for what purpose — and crucially, it carries no signature from you. The chain of accountability stops at OpenAI.

Two separate obligations — two separate actors

Article 50 of the EU AI Act creates obligations at two distinct levels of the content chain:

| Actor | Definition | Obligation (Art. 50) |
|---|---|---|
| Provider | The company building and operating the AI system (OpenAI, Adobe, Stability AI) | Art. 50(2): mark outputs in machine-readable format, detectable as AI-generated |
| Deployer | The company or person using the AI system to create content for public use (that's you) | Art. 50(4): disclose that content was artificially generated/manipulated — with your signature |

OpenAI fulfills their provider obligation by embedding C2PA. That satisfies Article 50(2) for OpenAI. It does nothing for your Article 50(4) deployer obligation. These are separate legal requirements on separate legal entities.

The trust chain analogy

Think of it like publishing a book. The printing press manufacturer might certify the paper quality and ink composition. But the publisher still needs their own ISBN, their own copyright notice, their own legal deposit. The printer's certification doesn't substitute for the publisher's disclosure.

C2PA works the same way. It's designed as a chain of provenance — each actor in the content lifecycle can add their own signed claim, building on the previous ones. When you use DALL·E to create a marketing image and publish it on your website, the complete chain should look like this:

1. OpenAI (Provider): Created by DALL·E 3, AI-generated, not for training. ✅ Present in DALL·E images
2. Your Company (Deployer): Published by Acme Corp, purpose: marketing, date: 2026-03-06. ❌ Missing unless you add it
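The two-link chain above can be modeled in a few lines. This is a sketch only: `Claim` and `has_publisher_claim` are illustrative names for this post, not part of the C2PA specification or any real SDK.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """One signed entry in a provenance chain (simplified)."""
    signer: str   # e.g. "OpenAI, Inc." or "Acme Corp (via MarkMyAI)"
    action: str   # e.g. "c2pa.created" or "c2pa.published"

def has_publisher_claim(chain: list[Claim], publisher: str) -> bool:
    """True if some claim in the chain is a publish action signed by the
    named publisher, i.e. the deployer's own Art. 50(4) disclosure."""
    return any(
        c.action == "c2pa.published" and publisher in c.signer
        for c in chain
    )
```

A DALL·E image straight from the API contains only the first link, so this check fails until the publisher layer is added.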

What does "disclosure" actually require?

Article 50(4) requires that deployers of AI image systems "disclose that the content has been artificially generated or manipulated." The December 2025 Draft Code of Practice specifies what "disclosure" entails technically:

🔐 Machine-readable: a signed provenance claim

A C2PA manifest entry signed by your certificate — not the AI tool's certificate — identifying you as the publisher. This is what allows automated compliance verification by regulators.

👁 Human-readable: a visible label

A disclosure in the publication context — a caption, an icon, alt text — identifying the content as AI-generated. The draft Code of Practice specifies a common EU taxonomy (fully AI-generated vs. AI-assisted) and proposes a common icon.

🗄️ Auditable: a retrievable evidence trail

You must be able to demonstrate, retroactively, that a piece of content was disclosed. This requires a logged audit trail — not just an embedded manifest that gets stripped when the image is shared.

The practical test: what does contentcredentials.org show?

The fastest way to understand the gap is to look at what a compliance tool actually shows. Take a DALL·E image and upload it to contentcredentials.org. You'll get a green checkmark and feel good about it. Then ask yourself: whose name is on that checkmark? Not yours.

✅ C2PA Manifest Found

Signer: OpenAI
Generator: DALL·E 3
Action: c2pa.created (AI)
Publisher: — (not present)
Your org: — (not present)
Publish date: — (not present)

⚠️ EU AI Act Status

AI-generated: Proven ✓
Art. 50(2) (Provider): Fulfilled ✓
Art. 50(4) (Deployer): Not fulfilled ✗
Your disclosure: Missing ✗
Audit trail: Missing ✗

The green checkmark on contentcredentials.org means the AI tool did its job. It says nothing about whether you did yours. A regulator inspecting your content would find the AI tool's signature but no publisher signature, no disclosure, and no audit record linking the published image to an accountable organization.

One more thing: the AI tool's C2PA probably won't survive sharing

Even if the argument "we have C2PA from DALL·E" were legally sufficient (it isn't), it fails practically the moment the image is shared. WhatsApp, Instagram, X, and Facebook all strip image metadata — including C2PA manifests — during upload and re-encoding.

We tested this. A DALL·E image with full C2PA intact → sent via WhatsApp → received by a colleague → uploaded to our checker: zero metadata. No C2PA. No XMP. Nothing. The "we have C2PA" defense evaporates in seconds on any major social platform.

This is why the Draft Code of Practice requires a fingerprint/logging fallback as a third layer: it acknowledges that metadata cannot be relied on as the sole mechanism for compliance evidence.
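As a sketch of that third layer, here is a minimal database-backed fallback. Assumptions are labeled: it uses an exact SHA-256 hash of the file bytes as the fingerprint (a production system would add a perceptual hash so re-encoded copies still match) and a local SQLite table; neither the schema nor the function names are MarkMyAI's actual implementation.

```python
import hashlib
import sqlite3

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint: SHA-256 of the raw file bytes. Survives
    metadata stripping but not re-encoding; real systems pair this with
    a perceptual hash to cover re-encoded copies."""
    return hashlib.sha256(image_bytes).hexdigest()

def open_db(path: str = ":memory:") -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS disclosures ("
        "fp TEXT PRIMARY KEY, publisher TEXT, purpose TEXT, published_at TEXT)"
    )
    return db

def log_disclosure(db, image_bytes, publisher, purpose, when):
    """Record the disclosure at publish time: this is the audit trail."""
    db.execute(
        "INSERT INTO disclosures (fp, publisher, purpose, published_at) "
        "VALUES (?, ?, ?, ?)",
        (fingerprint(image_bytes), publisher, purpose, when),
    )

def lookup(db, image_bytes):
    """Recover the disclosure record from the bytes alone — works even
    after every embedded manifest has been stripped."""
    return db.execute(
        "SELECT publisher, purpose, published_at FROM disclosures WHERE fp = ?",
        (fingerprint(image_bytes),),
    ).fetchone()
```

The point of the design: the compliance evidence lives in a store you control, keyed by the content itself, so a WhatsApp round trip cannot delete it.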

What publisher-compliant C2PA looks like

A publisher-compliant C2PA manifest, signed by MarkMyAI on your behalf, adds a second claim to the chain — building on the AI tool's claim:

C2PA manifest — after MarkMyAI publisher signing

// Claim 1 (from AI tool, unchanged)
claim_generator: "DALL-E/3.0"
signer: "OpenAI, Inc."
action: c2pa.created (AI-generated)

// Claim 2 (added by MarkMyAI — publisher layer)
claim_generator: "MarkMyAI/1.0"
signer: "Acme Corp (via MarkMyAI)"
action: c2pa.published
softwareAgent: "MarkMyAI/1.0"
when: "2026-03-06T10:30:00Z"
purpose: "marketing"
verify_url: "https://markmyai.com/api/verify/mk_..."

Now the chain is complete. The AI tool's claim proves generation. Your publisher claim proves deployment. And because MarkMyAI also creates a database fingerprint and audit log entry, the verify URL works even after the embedded metadata is stripped.

Summary: the three questions to ask

1. Does the image have C2PA from the AI tool?
   Probably yes, if you use DALL·E, Adobe Firefly, or Stability AI — but this only covers Art. 50(2) for the provider.

2. Does the image have a publisher-signed C2PA claim from your organization?
   Only if you've run it through a publisher signing service like MarkMyAI. Without this, Art. 50(4) is not fulfilled.

3. Is there a permanent audit trail that survives metadata stripping?
   Only if you have a database-backed fingerprint and verify URL. Without this, your compliance evidence disappears the moment the image is shared via WhatsApp.

Check your images right now

Upload any AI-generated image to our free checker. We'll show you exactly what C2PA is present, who signed it, and whether the publisher layer is there.
