AI Act High-Risk Deadline: August 2026, December 2027, or Somewhere in Between

On Tuesday, the European Parliament’s IMCO and LIBE committees voted 101-9-8 to push the AI Act’s high-risk system obligations from August 2026 to December 2027. That vote is the strongest signal yet that the original deadline won’t hold. It also doesn’t change the law. Not yet.

Until the Digital Omnibus is formally adopted, 2 August 2026 remains the enforceable date. Companies that treat the delay as confirmed are gambling on a legislative process that still needs a plenary vote (scheduled 26 March), trilogue negotiations with the Council, and final adoption. Optimistic estimates put that at mid-2026. If trilogues drag past August, the original deadline applies by default.

This is the worst possible situation for compliance planning. Two dates, both plausible, 16 months apart. The practical question is what to do about it.

Why the delay is happening

The technical infrastructure that companies need to comply with doesn’t exist yet.

CEN-CENELEC’s Joint Technical Committee 21, the body responsible for drafting harmonised standards for the AI Act, missed its original April 2025 deadline. In October 2025, both standardisation bodies adopted emergency measures to accelerate delivery, including skipping the formal vote stage for some drafts and fast-tracking six delayed standards. The target is now Q4 2026.

That means the standards companies need for conformity assessment won’t be published and referenced in the Official Journal until late 2026 at the earliest. Requiring companies to comply with technical requirements by August 2026 when the standards defining those requirements arrive months later is, to put it plainly, absurd.

The Commission knows this. It missed its own 2 February 2026 deadline for publishing guidance on Article 6, the provision that determines whether an AI system counts as high-risk. During a January European Parliament hearing, the Commission’s Deputy Director-General for Communications Networks said the guidance is needed to provide legal certainty and that the delay is deliberate: standards and guidelines need more time.

The Digital Omnibus, proposed in November 2025, formalises this reality. The Commission’s version ties high-risk obligations to a future decision confirming that harmonised standards are available, with a backstop of December 2027 for Annex III systems. Parliament’s version, adopted on Tuesday, simplifies this to a fixed date: 2 December 2027. The Council is working on its own text. The final date will emerge from negotiations between all three institutions.

Which companies are affected

The AI Act’s high-risk rules under Annex III cover eight domains. If your company develops or deploys AI systems in any of these areas, the deadline matters.

Biometrics. Remote biometric identification, emotion recognition in workplaces or education, and biometric categorisation systems.

Critical infrastructure. AI used as safety components in management and operation of road traffic, water, gas, heating, and electricity supply.

Education. Systems that determine access to education, evaluate learning outcomes, monitor student behaviour during exams, or assess the appropriate education level for an individual.

Employment. CV-screening tools, interview evaluation systems, task allocation based on individual behaviour, and systems that monitor or evaluate worker performance.

Essential services. Credit scoring, insurance risk assessment for life and health, emergency call triage, and systems that evaluate eligibility for public benefits.

Law enforcement. Risk assessment tools, evidence evaluation systems, recidivism prediction, and profiling tools.

Migration and border control. Risk assessment for travellers, asylum application processing, and document authenticity verification.

Justice and democratic processes. AI systems assisting judicial authorities in researching and interpreting facts and law.

Two things mid-market companies usually miss. First, the most common exposure is in employment (hiring tools, performance monitoring) and essential services (credit decisions, insurance pricing). You don’t need to be building AI to be caught. If you deploy a third-party AI hiring platform, you’re a “deployer” under the Act, and deployers carry their own obligations.

Second, there’s an exemption. An Annex III system is not high-risk if it performs a narrow procedural task, improves a completed human activity, detects decision-making patterns without replacing human judgment, or performs preparatory work for an assessment. But this exemption never applies if the system profiles individuals. And if you claim the exemption, you must document why and register the system in the EU database anyway.
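The exemption test reads like a decision procedure, and it can help to see it written as one. Here is a minimal sketch in Python; the `AISystem` structure and its field names are invented for this illustration (the Act defines the underlying conditions in Article 6(3)), so treat it as a reading aid, not legal advice:

```python
# Illustrative sketch of the Article 6(3) exemption logic described above.
# The AISystem dataclass and its field names are hypothetical, not from the Act.
from dataclasses import dataclass


@dataclass
class AISystem:
    name: str
    in_annex_iii_domain: bool                        # matches one of the eight Annex III domains
    profiles_individuals: bool                       # profiling of natural persons
    narrow_procedural_task: bool                     # cf. Art. 6(3)(a)
    improves_completed_human_activity: bool          # cf. Art. 6(3)(b)
    detects_patterns_without_replacing_judgment: bool  # cf. Art. 6(3)(c)
    preparatory_task_only: bool                      # cf. Art. 6(3)(d)


def is_high_risk(system: AISystem) -> bool:
    """Classify an Annex III system, applying the Article 6(3) exemption."""
    if not system.in_annex_iii_domain:
        return False
    # Profiling of individuals always defeats the exemption.
    if system.profiles_individuals:
        return True
    exemption_applies = any([
        system.narrow_procedural_task,
        system.improves_completed_human_activity,
        system.detects_patterns_without_replacing_judgment,
        system.preparatory_task_only,
    ])
    # Note: even where the exemption applies, the provider must still document
    # the reasoning and register the system in the EU database.
    return not exemption_applies
```

The structure makes the two traps visible: profiling short-circuits everything, and a "not high-risk" outcome still carries documentation and registration duties.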

What to do now

Prepare for August 2026. Budget for December 2027. Hope for the best.

Complete your AI inventory this quarter. Map every AI system your organisation uses or develops. For each one, determine whether it falls within an Annex III domain. This step doesn’t depend on any standard or guideline. It depends on reading Annex III and matching it against your operations. Most companies that haven’t started will find this takes 4-8 weeks for a mid-size organisation.
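The inventory does not need tooling to start; a flat list of systems, each tagged with your role under the Act and a matched Annex III domain (or none), is enough for a first pass. A hypothetical sketch, with all system names and entries invented for illustration:

```python
# Illustrative first-pass AI inventory. System names, roles, and entries
# are made up for this sketch; the domain keys paraphrase Annex III.
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice",
}

# (system name, your role under the Act, matched Annex III domain or None)
inventory = [
    ("cv-screening-saas", "deployer", "employment"),
    ("credit-scoring-model", "provider", "essential_services"),
    ("internal-helpdesk-chatbot", "deployer", None),
]

# Systems in scope for high-risk classification work.
in_scope = [entry for entry in inventory if entry[2] in ANNEX_III_DOMAINS]

for name, role, domain in in_scope:
    print(f"{name}: {role}, Annex III domain = {domain}")
```

Capturing the role alongside the domain matters, because provider and deployer obligations differ; the output of this pass is exactly the list that feeds the classification step below.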

Classify before you comply. For each system in an Annex III domain, assess whether it qualifies for the exemption under Article 6(3). Document your reasoning. If you conclude a system is not high-risk, that documentation must exist before you place the system on the market or put it into service. Don’t wait for Commission guidance on practical examples. Use Spain’s AESIA guidance as a reference. Spain’s AI supervisory agency published 16 practical guidance documents in late 2025, covering all core high-risk obligations with templates and checklists.

Start your risk management system now. Article 9 requires a risk management system that runs throughout the AI system’s lifecycle; the surrounding Articles 8-15 add data governance measures, technical documentation, automatic logging, human oversight mechanisms, and accuracy and robustness safeguards. None of this requires harmonised standards to begin. It requires internal processes, documentation, and governance structures that take months to build.

Demand conformity documentation from vendors. If you’re deploying third-party AI tools in high-risk domains, your vendor (the “provider”) carries the primary conformity assessment burden. But you cannot transfer your own obligations through a contract. Start asking vendors now: Is this system classified as high-risk under the AI Act? What conformity assessment is planned? When will technical documentation be available? Vendor responses will tell you who is prepared and who is hoping the deadline disappears.

Budget for conformity assessment. For mid-size companies, initial compliance investment for high-risk systems runs €500K-2M, with ongoing annual costs of €200K-500K. Most Annex III systems can use internal conformity assessment (self-assessment under Annex VI), but some, particularly in biometrics and law enforcement, require a notified body. Either way, the process takes 6-12 months. Starting in Q3 2027 won’t work even under the extended timeline.

What happens next

The plenary vote on 26 March will almost certainly confirm the committee position. Trilogue negotiations with the Council will follow, likely starting in April or May. The Council’s own compromise text from January already proposed fixed deadlines, suggesting broad institutional agreement on the delay.

The likeliest outcome: Annex III high-risk obligations apply from December 2027, with the final text adopted mid-2026. But “likeliest” isn’t “certain.” And the work required to comply doesn’t shrink just because the deadline moves.

Companies that use the next six months for classification, inventory, and governance setup will be ready whenever the final date lands. Companies that wait for political certainty will be scrambling regardless of which deadline holds.

The penalty for non-compliance with high-risk requirements: up to €15 million or 3% of global annual turnover, whichever is higher.

Sources: EU AI Act, Regulation (EU) 2024/1689 | European Commission AI Act implementation page | IMCO/LIBE vote, 18 March 2026 | Digital Omnibus proposal, COM(2025) 836

RegDossier

Making EU compliance almost enjoyable. Almost.
