EU Code of Practice on AI Content Transparency: What's Actually in the Draft
On December 17, 2025, the European Commission published the first draft of its Code of Practice on transparency of AI-generated content. A second draft is expected around March 2026. We've read the document carefully. Here's what it actually says — and what it deliberately doesn't.
Context
The Code of Practice is not law — it translates Article 50 of the EU AI Act into technical and organizational requirements. The AI Act sets the legal obligation; the Code specifies how to fulfill it. Compliance with the Code creates a presumption of conformity with the law. Companies that don't follow the Code can still comply with Article 50, but they must demonstrate equivalent alternative measures.
The timeline
December 17, 2025: First draft published by the EU Commission. Feedback period opens.
Feedback period closes.
Around March 2026: Second draft expected, incorporating feedback.
June 2026 (expected): Final Code of Practice published.
August 2, 2026: Article 50 transparency obligations become enforceable.
The compressed timeline — from final Code (June) to enforcement (August) — gives organizations roughly 8 weeks to implement changes after the final requirements are known. Eight weeks. That's not a comfortable runway for anything that involves procurement, legal sign-off, and a technical integration. Organizations that wait for the final Code before acting will not have enough time. The first draft is directionally stable enough to act on now.
The core architecture: layered marking
The most important structural decision in the draft is the adoption of a layered approach: no single technique is sufficient, because each has its own limitations and failure modes. The layers are:
Provenance metadata (C2PA)
Mandatory for AI providers
Digitally signed origin information embedded in or linked to the content. Must be integrity-verifiable. The draft specifically references C2PA as the relevant standard. For images: embedded in the file container. For cases where embedding is impossible: a remote/hosted manifest (C2PA's 'soft binding' mechanism).
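To make "digitally signed origin information, integrity-verifiable, bound to the content" concrete, here is a minimal stdlib sketch of the idea. It is not C2PA: the real standard uses COSE signatures and X.509 certificate chains, and the key, field names, and claim format below are all our own illustration.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-demo-key"  # hypothetical; real C2PA uses X.509 certificates

def make_manifest(content: bytes, generator: str) -> dict:
    """Bind origin info to the content via a hash, then sign the whole claim."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "claim": "ai-generated",
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature and the binding to this exact content."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claim["content_sha256"] == hashlib.sha256(content).hexdigest()
    )
```

The two failure modes this catches are exactly the ones the draft cares about: a tampered claim (signature no longer verifies) and a manifest re-attached to different content (the hash binding fails).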
Limitation: Gets stripped by social media and messaging platforms during re-encoding.
Imperceptible watermarking
Mandatory for AI providers (where technically feasible)
A signal woven into the pixel data of the image itself, not into metadata. The draft requires it to be robust to 'common transformations' including compression, resizing, and format conversion. It must survive the distribution channels where metadata is reliably stripped.
Limitation: No standardized format yet. The draft doesn't specify a watermarking algorithm, leaving providers to choose. Interoperability (can detector X read watermarks from tool Y?) is an open problem.
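Since the draft mandates no algorithm, here is a deliberately toy additive spread-spectrum sketch to show what "a signal in the pixel data, detectable by correlation" means. It operates on a flat list of grayscale pixel values; production schemes work in transform domains and survive far harsher distortion. All names and the threshold are illustrative.

```python
import random

ALPHA = 4  # embedding strength: higher survives more distortion, but is more visible

def _pattern(seed: int, n: int) -> list:
    """Pseudorandom +/-1 pattern; the seed acts as the watermark key."""
    rng = random.Random(seed)
    return [1 if rng.random() < 0.5 else -1 for _ in range(n)]

def embed(pixels: list, seed: int) -> list:
    """Add the key pattern to the pixel values, clamped to the 8-bit range."""
    pat = _pattern(seed, len(pixels))
    return [min(255, max(0, p + ALPHA * s)) for p, s in zip(pixels, pat)]

def correlate(pixels: list, seed: int) -> float:
    """Correlation with the key pattern: roughly ALPHA if marked, near 0 if not."""
    pat = _pattern(seed, len(pixels))
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) * s for p, s in zip(pixels, pat)) / len(pixels)

def is_marked(pixels: list, seed: int, threshold: float = 2.0) -> bool:
    return correlate(pixels, seed) > threshold
```

The point of the sketch: detection needs no metadata at all, only the pixels and the key, which is why this layer survives channels that strip metadata. Its weakness is equally visible: robustness depends entirely on how much distortion the correlation statistic can absorb.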
Fingerprinting / server-side logging
Required as a fallback mechanism
For images: a perceptual hash stored in the provider's database. For text: logging of generated content. The purpose is to allow provenance verification even after metadata has been stripped and watermarks potentially degraded. Detection via this layer requires querying the provider's database.
Limitation: Detection depends on access to the provider's database, and there is no standardized query API yet, though the draft calls for detection-as-a-service interfaces.
Provenance certificate (hosted manifest)
For content where in-file embedding is not possible
A hosted document that can be linked to from the publication context. This is essentially C2PA's remote/soft binding mechanism — the manifest lives on a server rather than in the file. Accessible even after metadata stripping if the link to the certificate is preserved.
Limitation: Depends on the hosting infrastructure remaining available. Link rot is a long-term concern.
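The draft does not specify how a hosted manifest should be addressed. One simplistic option is content addressing, sketched below with a hypothetical host. Note the honest caveat: an exact hash survives metadata stripping but breaks on any re-encoding, which is precisely why C2PA's soft bindings use perceptual identifiers (watermarks or fingerprints) rather than exact hashes in practice.

```python
import hashlib

MANIFEST_HOST = "https://manifests.example.com"  # hypothetical hosting service

def manifest_url(content: bytes) -> str:
    """Derive the lookup URL from the bytes themselves, so no in-file
    metadata is needed to find the manifest. Breaks if the bytes change."""
    digest = hashlib.sha256(content).hexdigest()
    return f"{MANIFEST_HOST}/v1/manifests/{digest}"
```

Whatever the addressing scheme, the link-rot concern stands: the scheme is only as durable as the host serving the manifests.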
Detection as a service: an underreported requirement
One of the more operationally significant requirements in the draft is what it calls "detectability as a service." Providers aren't just required to mark their outputs — they must also provide a way for third parties to verify whether content was generated by their system.
The draft specifies that this should be provided:
- Free of charge to users and third parties
- Via a web interface or API
- With confidence scores indicating the probability that content was AI-generated or manipulated
- Without revealing proprietary model architecture information
In practice, this means OpenAI, Adobe, Stability AI, and others will need to expose detection endpoints. The draft doesn't yet standardize the API format — this is one of the open questions to be addressed in the second draft.
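Pending standardization, here is one plausible shape for a detection response that satisfies the draft's four bullet points: a confidence score, the layer that produced the match, and nothing about model internals. Every field name here is our own guess, not a draft requirement.

```python
def build_detect_response(matched: bool, confidence: float, method: str) -> dict:
    """One possible /detect response shape: confidence plus matching method,
    with model architecture details deliberately absent."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return {
        "ai_generated": matched,
        "confidence": round(confidence, 3),
        "method": method,  # e.g. "watermark", "fingerprint", "metadata"
        # deliberately absent: model name, version, architecture information
    }
```

The design tension to watch in the second draft: the more detail a response exposes (which layer matched, at what score), the more useful it is to evaders probing for weak spots.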
What this means for MarkMyAI
Our /v1/detect endpoint is exactly this: a detection API that checks whether an image was marked by a publisher using our system, with matched fingerprint and confidence information. We built it before the draft was published, simply because it seemed like the obvious thing to have. Turns out it's a core requirement. Sometimes you get lucky.
The taxonomy: "fully AI-generated" vs. "AI-assisted"
For the deployer side, the draft proposes a common taxonomy for AI content. This is politically and practically important because it defines when the disclosure obligation applies:
| Category | Definition | Disclosure required? |
|---|---|---|
| Fully AI-generated | Content generated entirely by an AI system without substantial human creative input | Yes, always |
| AI-assisted | Human-created content where AI played a supporting role (background removal, upscaling, color grading) | Context-dependent |
The "AI-assisted" category is where most practical disputes will arise. A marketer who generates a photorealistic product image with DALL·E and then touches it up slightly in Photoshop — is that "fully AI-generated" or "AI-assisted"? The draft suggests that the primary generation method determines the category. If the starting point was AI generation, the disclosure applies.
The draft also proposes a common EU icon for AI content disclosure. This icon is described as temporary ("pending EU-wide standardization") — the second draft is expected to bring more specificity on the icon and its mandatory usage contexts.
The exceptions — and their governance requirements
The draft Code of Practice mirrors the exceptions in Article 50(4) but adds important governance requirements for organizations that want to claim them:
Artistic and satirical works
Art. 50(4): disclosure 'in an appropriate manner that does not hamper the display or enjoyment of the work'
You must document that the work is artistic/satirical in nature. The exception doesn't mean no disclosure — it means the disclosure method can be adapted (e.g. in the credits, description, or contextual caption rather than overlaid on the work itself).
Editorial control / human review
Art. 50(4): AI-generated text on matters of public interest doesn't require disclosure if 'it has undergone a process of human review' and 'a natural or legal person holds editorial responsibility'
You need a demonstrable, logged workflow: who reviewed, when, what their role is. 'Human review' cannot be a checkbox — it must be a documented process. Note: this exception applies only to text, not to images.
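A minimal sketch of what "demonstrable" could mean technically: an append-only review log where each entry commits to the previous one, so after-the-fact edits are detectable. This is our own illustration of one possible design, not anything the draft prescribes.

```python
import hashlib
import json

def append_review(log: list, reviewer: str, role: str,
                  timestamp: str, decision: str) -> None:
    """Append a review entry that hash-chains to its predecessor."""
    prev = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "reviewer": reviewer,
        "role": role,
        "timestamp": timestamp,
        "decision": decision,
        "prev_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify_log(log: list) -> bool:
    """Recompute the chain; any rewritten entry breaks it."""
    prev = "genesis"
    for e in log:
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != e["entry_hash"]:
            return False
        prev = e["entry_hash"]
    return True
```

The point is not the cryptography; it is that "who reviewed, when, in what role" becomes a record you can hand a regulator, rather than an assertion.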
The key message from the draft: exceptions require governance. You cannot rely on an exception without a process that makes the exception demonstrable to a regulator. Organizations that treat exceptions as a way to avoid disclosure entirely, without documentation, are building on sand.
What the draft deliberately leaves open
The first draft is a direction-setting document, not a finished technical standard. Several critical questions are explicitly deferred to the second draft or to subsequent implementing measures:
No specific watermarking algorithm is mandated — providers choose their own, interoperability TBD
No standardized detection API format — what the /detect endpoint looks like is not specified
No final icon design — the EU AI disclosure icon is 'pending standardization'
No quantified robustness thresholds — how robust a watermark must be is described qualitatively, not with measurable targets
No defined confidence score format for detection responses
No specific rules for audio and video, which face different technical constraints than images
This openness is not accidental. The Commission is using the Code of Practice process to gather technical expertise from industry before standardizing — which is the right approach, even if it's frustrating from an implementation standpoint. The risk for organizations: acting on first-draft assumptions that change in the second or final draft. The counter-risk: waiting for certainty and running out of time before August 2. Neither is comfortable. We've chosen to build on what's stable and leave the rest modular.
What organizations can safely act on now
Despite the open questions, the first draft is specific enough to drive meaningful preparation:
Adopt C2PA (Layer 1) for all AI-generated images. C2PA is explicitly referenced in the draft and won't be replaced.
Implement a publisher-signed manifest (deployer layer) via a service like MarkMyAI. This is required regardless of how the taxonomy finalizes.
Implement a database fingerprint + audit trail as fallback. The draft's requirement for fingerprinting/logging as Layer 3 is stable.
Build an internal taxonomy distinguishing fully AI-generated from AI-assisted content and document it.
Prepare a visible disclosure process — label, caption, or icon — even before the final EU icon is standardized.
And two mistakes to avoid:
Waiting for the final Code before doing anything. There are ~8 weeks between expected publication (June) and enforcement (August 2).
Assuming that a single technique (e.g. watermarking alone) will be sufficient when the final Code arrives.
Start with what's stable: publisher-signed C2PA + audit trail
These two requirements will survive any revisions to the Code of Practice. Start here — it's what MarkMyAI provides.