Deadline · March 4, 2026 · 7 min read

The EU AI Act Deadline Is in 149 Days. Are You Ready?

August 2, 2026 is when Article 50 of the EU AI Act becomes enforceable. Not "enter into force" — enforceable. National market surveillance authorities are already designated, staffed, and empowered to investigate. Fines reach €15 million or 3% of global annual turnover. Here's what "ready" actually means.

149 days remaining

August 2, 2026: EU AI Act Article 50 transparency obligations become enforceable — roughly eight weeks after the expected final Code of Practice (June 2026).

What happens on August 2, 2026?

We built MarkMyAI because we ran into this problem ourselves. We run petportraitgift.com — customers upload a photo of their pet, and we generate a stylized watercolor portrait fully automatically using AI. Every single order produces an AI-generated image that goes straight to a customer. At some point we asked ourselves: are these images Article 50 compliant? We were clearly a deployer. Every answer we found was either vague ("add a label somewhere") or focused entirely on the AI provider side. Nobody was addressing the deployer obligation specifically. We had no publisher-signed provenance on any of the images we were sending out. Not one. That problem is why MarkMyAI exists.

The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024. It has a phased implementation schedule designed to give organizations time to adapt. The key milestones:

  • Feb 2025: Prohibited AI practices (Art. 5), such as chatbots that manipulate behavior and real-time biometric surveillance, became enforceable.
  • Aug 2025: GPAI (General Purpose AI) model obligations began applying; national authorities also had to be designated by this date.
  • Aug 2026: High-risk AI obligations AND Art. 50 transparency obligations become enforceable. This is the big deadline.

Article 50 isn't new as of August 2026 — it's been in the regulation since August 2024. What changes in August 2026 is that it becomes enforceable: national authorities can open investigations, demand documentation, and impose administrative fines for non-compliance found after that date.

Who enforces this — and how?

Enforcement is distributed. There is no single EU AI regulator. Instead, each member state designates national market surveillance authorities (MSAs) responsible for investigating and penalizing violations in their jurisdiction.

The EU AI Office (established within the European Commission) oversees implementation across member states and handles cross-border coordination, especially for general-purpose AI models.

Several countries have already designated their authorities:

Country     Designated Authority
Germany     Bundesnetzagentur (Federal Network Agency), market surveillance coordinator
EU-level    EU AI Office, cross-border coordination and GPAI models
Others      All member states were required to designate by August 2, 2025

MSAs have substantial investigative powers: they can request documentation, access source code, demand corrective measures, and impose fines. The regulation requires penalties to be "effective, proportionate and dissuasive." SME viability is a consideration in proportionality assessments, but there is no SME exemption from Article 50 itself.

The fine structure

Article 99 of the EU AI Act establishes three tiers of administrative fines. For Article 50 violations (transparency obligations), the applicable tier is:

€15,000,000

or 3% of total global annual turnover — whichever is higher

Art. 99(4): applicable to violations of the obligations under Articles 16, 22–24, 26, and 50 — which includes the transparency and disclosure requirements.

For context: because the higher of the two figures applies, a company with €100M annual turnover faces a potential maximum fine of €15M (3% of turnover would be only €3M), while a company with €1B turnover faces up to €30M. These are maximums; actual penalties depend on severity, duration, affected population, degree of cooperation, and SME status.
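The ceiling rule can be sketched in a few lines. This is our reading of Article 99, not legal advice: the general regime takes the higher of the fixed amount and the turnover percentage; the SME branch (taking the lower of the two) is an assumption drawn from Art. 99(6).

```python
# Sketch of the Art. 99 maximum-fine rule for transparency violations.
# Assumption: general regime = higher of the two figures; for SMEs,
# Art. 99(6) applies the lower of the two.

FIXED_CAP_EUR = 15_000_000
TURNOVER_RATE = 0.03  # 3% of total worldwide annual turnover

def max_fine(turnover_eur: float, sme: bool = False) -> float:
    """Maximum administrative fine for an Art. 50 violation."""
    pct = TURNOVER_RATE * turnover_eur
    return min(FIXED_CAP_EUR, pct) if sme else max(FIXED_CAP_EUR, pct)

# max_fine(100_000_000)   -> 15,000,000 (3% is only €3M; fixed amount governs)
# max_fine(1_000_000_000) -> 30,000,000 (3% exceeds the fixed amount)
```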

Beyond fines, MSAs can require corrective measures — including ceasing distribution of non-compliant AI-generated content until the issue is remediated. For media organizations and marketing agencies, the reputational and operational impact of a public enforcement action may exceed the fine itself.

Who is in scope?

Article 50(4) applies to "deployers of an AI system that generates or manipulates image, audio or video content." If you use any AI tool to create or significantly modify images that you then publish publicly, you are a deployer in scope.

Common scenarios that are clearly in scope:

  • Marketing agency creating campaign visuals with DALL·E or Midjourney
  • News organization using AI to generate or retouch photo illustrations
  • E-commerce company generating product lifestyle images with AI
  • HR team creating recruitment materials with AI-generated visuals
  • Social media manager using AI upscaling or background replacement on product photos
  • PR agency distributing AI-generated infographics or avatars

The geographic scope: the regulation applies to AI systems placed on the market or put into service in the EU. If you publish content that reaches EU users — regardless of where your company is based — you are subject to these rules.

The 149-day compliance checklist

Here's what "ready" looks like on August 2, 2026. Work backwards from the deadline — you'll need 4–8 weeks for implementation and testing.

Now → 2 weeks

  • Audit: identify all workflows where your organization creates or distributes AI-generated images
  • Categorize: which images are fully AI-generated vs. AI-assisted vs. human-created?
  • Check: upload representative samples to markmyai.com/check — do they have publisher-signed C2PA?
  • Prioritize: focus on publicly distributed images first (social, web, press materials)

2–6 weeks

  • Integrate: connect your image creation workflow to a publisher signing service (MarkMyAI API or equivalent)
  • Store: save the verify_url returned by the signing service alongside every published AI image in your CMS
  • Disclose: add a visible AI disclosure label/caption to AI-generated images in publication templates
  • Document: write down your process — who signs, when, what tool, what purpose — for audit trail purposes

6–10 weeks (before Aug 2)

  • Test end-to-end: mark an image → publish → send via WhatsApp → re-check at /check → verify audit trail survives
  • Train: brief your content team on the new workflow and the exception conditions
  • Document exceptions: if you claim editorial control exceptions for AI text, document the review process
  • Review: read the final Code of Practice when published (~June) and confirm your implementation is aligned

The one thing most organizations are missing

We've spoken with compliance officers, agency leads, and developers who all had some form of AI disclosure in their workflow. The single most common gap wasn't the technical implementation. It was the audit trail.

The disclosure itself is usually there. What's missing is the ability to demonstrate, in a regulatory investigation, that a specific circulating image was marked, when it was marked, by whom, and with what intent — and that the marking was in place at the time of first publication.

This is not a bureaucratic concern. It's the difference between "we have a policy" and "we can prove we followed the policy for this specific piece of content." Regulators don't audit policies — they audit specific incidents.

A database-backed audit log with a permanent verify URL per image is the only mechanism that provides this. Embedded metadata alone cannot — because, as we've shown, it doesn't survive the distribution lifecycle.
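As a sketch of what one such audit record might contain — the field names and the verify-URL scheme here are illustrative placeholders, not MarkMyAI's actual API:

```python
# Minimal per-image audit record with a permanent verify URL.
# Assumption: fields and the example.com URL scheme are hypothetical,
# chosen to illustrate the evidence trail described in the text.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    image_sha256: str  # content hash of the published file
    marked_at: str     # ISO 8601 timestamp of signing
    signed_by: str     # person or service account that signed
    purpose: str       # why the image was generated
    verify_url: str    # permanent public verification link

def record_marking(image_bytes: bytes, signed_by: str, purpose: str,
                   log: list) -> AuditRecord:
    digest = hashlib.sha256(image_bytes).hexdigest()
    rec = AuditRecord(
        image_sha256=digest,
        marked_at=datetime.now(timezone.utc).isoformat(),
        signed_by=signed_by,
        purpose=purpose,
        verify_url=f"https://example.com/verify/{digest[:16]}",  # placeholder
    )
    log.append(rec)  # append-only: past records are never mutated or deleted
    return rec
```

Storing the record at signing time — not reconstructing it later — is what lets you answer "was this marked at first publication?" with evidence rather than policy.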

149 days.

That's how long you have to be ready.

Start with the free compliance check — see where your current images stand. Then set up the publisher signing pipeline that survives the distribution lifecycle.
