AI Act Compliance Deadlines: What Applies Now, What’s Delayed, and What to Plan For
Two AI Act compliance deadlines have already passed. A third lands on 2 August 2026. And a legislative proposal currently in trilogue negotiations is about to push the biggest one back by 16 months.
If you’re a compliance officer trying to build a project timeline, you’re working with a moving target. The AI Act itself has fixed dates. The Digital Omnibus on AI, proposed by the Commission in November 2025, would change several of them. Both the European Parliament and the Council have adopted their negotiating positions, and a political agreement is expected at the 28 April 2026 trilogue. If that holds, the amended AI Act could be published in the Official Journal by July 2026.
That’s a lot of “ifs” for a regulation that carries fines up to €35 million or 7% of global turnover.
This article lays out every AI Act compliance deadline as it currently stands, flags what the Omnibus would change, and tells you what to plan for regardless of which timeline wins.
What’s already enforceable
Two sets of obligations are live. No delay is being discussed for either.
Prohibited practices (since 2 February 2025). Article 5 bans are in force across all 27 member states: social scoring by public authorities, real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), manipulation through subliminal techniques, exploitation of the vulnerabilities of specific groups, and emotion recognition in workplaces and educational institutions. If your organisation engages in any of these practices, you are already in violation. Penalties: up to €35 million or 7% of global annual turnover, whichever is higher. The fact that most member states haven't fully set up their enforcement bodies does not reduce your legal exposure. It lowers the near-term likelihood of an investigation, not the liability itself.
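To make the "whichever is higher" rule concrete, here is a minimal sketch of how the Tier 1 ceiling scales with company size. The function name and the example turnover figures are illustrative, not from the Act; this is a planning aid, not legal advice.

```python
def tier1_penalty_ceiling(global_turnover_eur: float) -> float:
    """Maximum fine for an Article 5 violation: the greater of
    EUR 35 million or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A company with EUR 200m turnover: 7% is EUR 14m, so the flat cap applies.
print(tier1_penalty_ceiling(200_000_000))    # 35000000.0
# A company with EUR 1bn turnover: 7% (EUR 70m) exceeds the flat cap.
print(tier1_penalty_ceiling(1_000_000_000))  # 70000000.0
```

The practical takeaway: for any group with more than €500 million in global turnover, the 7% figure, not the €35 million, is the number to put in your risk register.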
General-purpose AI model obligations (since 2 August 2025). Providers of GPAI models must comply with transparency requirements, maintain technical documentation, publish training data summaries, and respect EU copyright law. Models classified as presenting systemic risk (generally those trained with more than 10²⁵ FLOPs) face additional obligations: model evaluations, adversarial testing, incident reporting, and cybersecurity protections. The GPAI Code of Practice is available. GPAI models placed on the market before August 2025 have until 2 August 2027 to comply. New models must comply immediately.
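A rough way to sanity-check whether a model falls under the systemic-risk presumption is to estimate training compute with the commonly used ~6 × parameters × tokens rule of thumb. Note the assumption: that heuristic comes from the ML literature, not from the AI Act, and the actual classification turns on the Act's criteria and Commission decisions. The figures below are hypothetical.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # threshold for the systemic-risk presumption

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the ~6*N*D rule of thumb
    (an ML-community approximation, not an AI Act formula)."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the presumption threshold.
print(presumed_systemic_risk(7e10, 1.5e13))  # False
```

If your estimate lands anywhere near the threshold, treat the model as potentially in scope and document the compute calculation itself; the presumption can attach based on figures you are obliged to be able to substantiate.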
AI literacy (since 2 February 2025, but changing). Article 4 currently requires providers and deployers to ensure sufficient AI literacy among their staff. The Omnibus would soften this. The Commission and Council want to shift it from a binding obligation on individual companies to a framework led by Member States and the Commission. Parliament wants to keep it mandatory but lower the bar from “ensuring” to “supporting” sufficient literacy. Whichever version survives trilogue, the original wording is technically enforceable today. If your organisation has done nothing on AI training for staff handling AI systems, that’s a gap worth closing regardless of the final text.
The August 2026 question: legally live, practically delayed
On paper, 2 August 2026 activates the bulk of the AI Act: high-risk AI system obligations under Annex III (covering biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice), transparency obligations under Article 50, deployer obligations under Article 26, sandbox requirements, and the full penalty framework for high-risk non-compliance.
In practice, three things went wrong.
CEN and CENELEC, the European standardisation bodies tasked with developing technical standards for high-risk AI, missed their 2025 deadline. They’re now targeting end of 2026. Without harmonised standards, companies lack the compliance benchmarks they need to demonstrate conformity. Standards are voluntary under the AI Act, but they provide the presumption of conformity that makes the difference between a straightforward audit and a regulatory argument.
The Commission missed its own 2 February 2026 deadline for publishing guidelines on high-risk classification under Article 6. Those guidelines were supposed to include a comprehensive list of practical examples distinguishing high-risk from non-high-risk systems. As of April 2026, they remain unpublished. Companies are classifying their AI systems against Annex III categories without the official worked examples the regulation promised them.
And only 8 of the EU’s 27 member states had designated their single points of contact for AI Act enforcement by March 2026. The deadline for those designations was August 2025. Seven months overdue. Finland completed its full implementation in December 2025. Germany designated the Bundesnetzagentur. France assigned the CNIL. The remaining 19 countries are at various stages of “working on it.”
The result: the regulation says August 2026, but the infrastructure to comply with it and enforce it does not exist yet. Expecting companies to meet a deadline that the Commission, standardisation bodies, and most member states have collectively missed themselves requires a certain institutional confidence that Brussels has never lacked.
What the Digital Omnibus changes
The Digital Omnibus on AI (COM(2025) 836) proposes targeted amendments to address these implementation gaps. Both co-legislators have adopted their positions. Trilogue negotiations started on 26 March 2026.
The delay everyone is watching: both the Parliament and Council have rejected the Commission’s original conditional mechanism (which tied application dates to standards availability, potentially creating an open-ended postponement) and replaced it with fixed deadlines. This is significant. It means the delay has a defined end date, not a vague “when ready” trigger.
Annex III high-risk systems (standalone systems in biometrics, employment, credit scoring, education, law enforcement, etc.): delayed from 2 August 2026 to 2 December 2027. Both co-legislators agree on this date.
Annex I high-risk systems (AI embedded in regulated products like medical devices, machinery, toys): delayed from 2 August 2027 to 2 August 2028. Again, both co-legislators align.
Synthetic content marking (watermarking for AI-generated audio, image, video, and text under Article 50(2)): 2 November 2026. This is a new compliance milestone added by the Omnibus. Providers of AI systems generating synthetic content must ensure that outputs are machine-detectable as AI-generated.
New prohibited practice: both Parliament and Council propose banning AI systems capable of generating realistic non-consensual intimate imagery of identifiable individuals without adequate safeguards. This would be added to Article 5, carrying Tier 1 penalties.
The Omnibus does not change prohibited practices dates, GPAI obligations, penalty ceilings, or the fundamental architecture of high-risk requirements. It delays when those high-risk requirements become enforceable. It does not weaken what they require.
Not sure if the AI Act applies to your organisation? Take our free 3-minute assessment.
Every AI Act compliance deadline in one place
Already in force:
2 February 2025: prohibited practices, AI literacy, AI system definition.
2 August 2025: GPAI model obligations, governance structures, penalty framework, member state authority designations (a deadline largely missed).

If the Omnibus is NOT adopted (original AI Act dates):
2 August 2026: high-risk Annex III obligations, transparency (Article 50), deployer obligations, sandboxes, competent authority enforcement powers.
2 August 2027: high-risk Annex I obligations (product safety AI); GPAI legacy models must comply.

If the Omnibus IS adopted (expected scenario):
2 August 2026: sandboxes, competent authority enforcement powers, general penalty rules.
2 November 2026: synthetic content marking obligations.
2 December 2027: high-risk Annex III obligations (standalone high-risk systems).
2 August 2028: high-risk Annex I obligations (product safety AI).
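For teams keeping project plans in sync across both scenarios, the dates above can be captured as a small lookup. The scenario and obligation names are our own labels, and the dates are as they stand at time of writing; this is a planning sketch, not a legal source.

```python
from datetime import date

# Key deadlines per scenario, as described in this article (April 2026).
DEADLINES = {
    "original": {  # AI Act as enacted, no Omnibus
        "annex_iii_high_risk": date(2026, 8, 2),
        "annex_i_high_risk": date(2027, 8, 2),
    },
    "omnibus": {  # expected trilogue outcome
        "sandboxes_and_enforcement": date(2026, 8, 2),
        "synthetic_content_marking": date(2026, 11, 2),
        "annex_iii_high_risk": date(2027, 12, 2),
        "annex_i_high_risk": date(2028, 8, 2),
    },
}

def days_remaining(scenario: str, obligation: str, today: date) -> int:
    """Days left until a given deadline under a given scenario."""
    return (DEADLINES[scenario][obligation] - today).days

# Plan against the earlier date, budget against the later one:
print(days_remaining("original", "annex_iii_high_risk", date(2026, 4, 28)))  # 96
```

Running both scenarios side by side makes the gap visible: the same Annex III obligation is either roughly three months out or roughly twenty, which is exactly why the article recommends planning against the earlier date.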
The Omnibus is expected to be adopted before August 2026. But “expected” is not “enacted.” Until the amended text is published in the Official Journal, the original dates remain legally binding. A&O Shearman’s trilogue analysis estimates endorsement by Parliament and Council in May and June 2026, with Official Journal publication possible in July.
The enforcement gap nobody wants to discuss
Even when dates arrive, enforcement depends on national infrastructure that doesn’t uniformly exist. Finland became the first member state with full AI Act enforcement powers on 1 January 2026. Germany and France have designated multiple authorities with sector-specific mandates. Italy has made progress.
Member states that have not yet designated an enforcement authority sit in a regulatory grey zone. A company deploying the same high-risk system in Helsinki and in a capital with no AI authority faces identical legal liability but vastly different investigation risk. Over time, this gap will close. But for the next 12 to 18 months, the practical enforcement landscape across Europe will be patchy.
This is not a reason to delay compliance. It’s a reason to prioritise it in the jurisdictions where enforcement infrastructure is furthest along, particularly if you operate in Germany, France, Finland, or the Netherlands. The first enforcement actions will set precedent for everyone.
What your organisation should do now
Plan against August 2026, budget against December 2027. The Omnibus delay for Annex III is very likely to survive trilogue. But compliance planning against a deadline that doesn’t legally exist yet is a risk you don’t need. Start your AI system inventory and Annex III classification now. Begin the conformity assessment process. If the delay materialises, you’ve bought 16 months of breathing room to refine. If it doesn’t, you’re not scrambling in July.
Audit prohibited practices today. These fines are live. Emotion recognition in employee monitoring, manipulative dark patterns, AI-based social scoring, subliminal manipulation. Conduct an Article 5 audit across every AI system your organisation uses or deploys. Include vendor-supplied tools. This is not a future obligation.
Track the 28 April trilogue. A political agreement at the next meeting would lock in the December 2027 date for Annex III and the August 2028 date for Annex I. We’ll cover the outcome in the weekly briefing. See all EU regulatory deadlines in our compliance tracker.
Budget for conformity assessment. Whether the deadline is August 2026 or December 2027, the assessment process takes 6 to 12 months for a well-prepared organisation. For providers, that means a quality management system, technical documentation, risk management framework, and registration in the EU database. For deployers, it means fundamental rights impact assessments and human oversight procedures. Start scoping the work now so you can resource it when the final timeline crystallises.
Don’t assume the delay means less work. The Omnibus changes when obligations become enforceable. It does not change what those obligations require. Every requirement in Chapter III of the AI Act survives intact. The conformity assessment process is identical whether you complete it in 2026 or 2027. Risk management systems, data governance, technical documentation, human oversight, accuracy and robustness requirements. All unchanged. The only difference is how much time you have to get it right.
RegDossier
Making EU compliance almost enjoyable. Almost.
Get the Tuesday briefing. Free. One email per week. Unsubscribe anytime.