High-Risk AI Systems Under the EU AI Act: Full List with Examples
Most companies using AI won’t fall under the most demanding compliance tier of the EU AI Act. But the ones that do face conformity assessments, mandatory risk management systems, human oversight requirements, and penalties reaching €15 million or 3% of global turnover. The dividing line is Annex III: eight categories of high-risk AI systems that cover far more ground than most compliance teams expect.
If your organisation deploys AI in hiring, credit scoring, insurance pricing, law enforcement support, or critical infrastructure management, you are almost certainly in scope. The problem is that many of these systems were purchased off the shelf, labelled as “automation tools,” and never flagged as AI at all.
This article walks through every Annex III category with concrete examples, so you can check your own inventory against the actual regulation instead of guessing.
What counts as a high-risk AI system under the AI Act?
A high-risk AI system is any system listed in Annex III of the AI Act, or any system used as a safety component of a product (or that is itself a product) covered by the EU product safety legislation listed in Annex I. Annex III organises high-risk systems into eight areas based on their potential impact on health, safety, and fundamental rights. If your system fits any of these categories, the full set of Chapter III obligations applies: risk management, data governance, transparency, human oversight, accuracy requirements, and conformity assessment.
The classification is based on purpose, not technology. A simple decision-tree model used for credit scoring carries the same obligations as a deep learning system doing the same job.
The eight Annex III categories, with examples you’ll recognise
1. Biometrics (where permitted under EU or national law). Remote biometric identification systems, biometric categorisation based on sensitive attributes, and emotion recognition. Note the carve-out: simple biometric verification (confirming you are who you claim to be, such as unlocking a phone or badge-swiping into a building) is explicitly excluded. The high-risk designation targets identification and categorisation of people, not one-to-one verification.
2. Critical infrastructure. AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity. The key phrase is “safety components.” Your internal analytics dashboard monitoring energy consumption is probably fine. An AI system that actively controls power distribution or manages traffic signals, where failure could endanger lives, is in scope.
3. Education and vocational training. Systems that determine access or admission to educational institutions, evaluate learning outcomes, assess the appropriate level of education for an individual, or monitor prohibited behaviour during tests. AI proctoring software falls squarely here. So does any system that decides who gets admitted to a programme or steers the learning process based on AI-evaluated outcomes.
4. Employment, worker management, and access to self-employment. This is the category catching the most organisations off guard. CV-screening tools, interview scoring systems, automated performance evaluation, promotion algorithms, and task allocation systems all qualify. If your HR department uses AI-powered tools from a vendor, your organisation is a deployer with its own set of obligations. The vendor built it; you still have to comply.
5. Access to essential private and public services. Credit scoring (with a carve-out for AI used to detect financial fraud). Life and health insurance risk assessment and pricing. AI used to evaluate eligibility for public assistance benefits, including healthcare. Emergency call evaluation and triage systems, including priority dispatching for police, firefighters, and medical services. Banks and insurers already operating under sectoral regulation will find this overlaps significantly with existing requirements, but the AI Act adds transparency and human oversight layers that go beyond current financial services rules.
6. Law enforcement (where permitted under EU or national law). Victim risk assessment, polygraph-type tools, evidence reliability evaluation, re-offending risk assessment, and profiling in criminal investigations. Mostly relevant to the public sector, but private vendors supplying these systems are classified as providers and carry the heaviest obligations.
7. Migration, asylum, and border control (where permitted under EU or national law). Polygraph-type tools, risk assessment of persons entering or having entered EU territory, processing of asylum and visa applications, and detection or identification of natural persons in the migration context. Again, primarily public sector, but the supply chain obligations reach private technology providers.
8. Administration of justice and democratic processes. AI systems used to assist judicial authorities in researching and interpreting facts and law, and systems intended to influence election or referendum outcomes. Campaign logistics tools (scheduling, administrative optimisation) are explicitly excluded.
The Commission was required to publish guidelines on high-risk classification by 2 February 2026, including practical examples of what is and isn’t high-risk. That deadline was missed. As of April 2026, the guidelines have not been finalised. Companies are classifying without official guidance, which is a polite way of saying the Commission expects you to comply with rules it hasn’t finished explaining.
The exceptions that might save you (Article 6(3))
Not every system touching an Annex III area automatically qualifies as high-risk. Article 6(3) provides a narrow escape: if the AI system performs a “narrow procedural task,” improves the result of a previously completed human activity, detects decision-making patterns without replacing human assessment, or performs a preparatory task for an assessment, it may be exempt.
The catch: the provider must document its assessment of why the exemption applies before placing the system on the market, register the system in the EU database, and hand that documentation to national competent authorities on request. This is not a quiet opt-out. It requires active justification.
In practice, this exemption is harder to claim than it looks. A CV-screening tool that “just” ranks candidates is still replacing human assessment of those candidates. And any system that profiles natural persons is always considered high-risk, regardless of the Article 6(3) conditions. The safe harbour is narrower than vendors would like you to believe.
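To see how these conditions interact, here is a minimal sketch of the Article 6(3) decision logic in Python. The simplified boolean inputs and function names are our own illustration, not anything the regulation defines; real classification requires legal analysis of each condition, and the profiling override sits on top of everything else.

```python
from dataclasses import dataclass

@dataclass
class SystemAssessment:
    """Simplified, illustrative view of one AI system against Article 6(3)."""
    in_annex_iii_area: bool          # touches one of the eight Annex III areas
    profiles_natural_persons: bool   # profiling override, Art. 6(3) second subparagraph
    narrow_procedural_task: bool     # condition (a)
    improves_prior_human_work: bool  # condition (b)
    detects_patterns_only: bool      # condition (c): no replacement of human assessment
    preparatory_task_only: bool      # condition (d)

def is_high_risk(s: SystemAssessment) -> bool:
    if not s.in_annex_iii_area:
        return False  # outside Annex III (Annex I product safety is a separate route)
    if s.profiles_natural_persons:
        return True   # profiling is always high-risk; no exemption available
    exempt = (
        s.narrow_procedural_task
        or s.improves_prior_human_work
        or s.detects_patterns_only
        or s.preparatory_task_only
    )
    return not exempt

# A CV-screening tool that ranks candidates: Annex III area 4, and ranking
# candidates is profiling of natural persons, so it is high-risk regardless
# of any exemption argument.
cv_screener = SystemAssessment(
    in_annex_iii_area=True,
    profiles_natural_persons=True,
    narrow_procedural_task=False,
    improves_prior_human_work=False,
    detects_patterns_only=False,
    preparatory_task_only=False,
)
assert is_high_risk(cv_screener)
```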
What high-risk classification means for your compliance budget
Providers of high-risk AI systems must implement a quality management system, conduct a conformity assessment (self-assessment for most Annex III systems, third-party assessment for remote biometric identification), register in the EU database, maintain technical documentation that would make a pharmaceutical company feel at home, and ensure post-market monitoring. The conformity assessment process typically takes 6 to 12 months even for a well-prepared organisation.
Deployers have a lighter but still substantial burden: conduct a fundamental rights impact assessment before first use (mandatory under Article 27 for public bodies and for deployers of systems such as credit scoring and insurance pricing), ensure human oversight as specified by the provider, monitor the system in operation, and report serious incidents.
The penalty structure reinforces how seriously the Commission takes this tier. Non-compliance with high-risk obligations carries fines up to €15 million or 3% of global annual turnover, whichever is higher. For a company with €500 million in revenue, the two prongs meet exactly: 3% of turnover is €15 million. Above that revenue, the percentage prong takes over. The fines article covers the full penalty breakdown by violation type.
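The arithmetic deserves a worked example. A minimal sketch of the Article 99(4) ceiling, assuming turnover is expressed in euros (the separate Article 99(6) rule for SMEs, which takes the lower of the two amounts, is noted but not modelled):

```python
def high_risk_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine for high-risk obligations under Art. 99(4):
    EUR 15 million or 3% of worldwide annual turnover, whichever is higher.
    NB: Art. 99(6) caps fines for SMEs at whichever amount is *lower*."""
    return max(15_000_000, 0.03 * worldwide_annual_turnover_eur)

print(high_risk_fine_ceiling(500_000_000))    # 15,000,000: the prongs meet at EUR 500M
print(high_risk_fine_ceiling(2_000_000_000))  # 60,000,000: the 3% prong dominates above that
```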
Not sure where you stand? Take the free AI Act Readiness Assessment.
What to do now
Inventory your AI systems this quarter. Every tool that uses machine learning, automated decision-making, or predictive analytics needs to be catalogued. Include vendor-supplied tools your HR, finance, and operations teams purchased without IT involvement. Those are the ones most likely to be high-risk and least likely to be on anyone’s radar.
Map each system against Annex III. Go category by category. Be honest about what your tools actually do, not what the vendor’s marketing says they do. A “talent analytics platform” that scores candidates is a high-risk AI system. Call it what it is. A sketch of what this mapping can look like in practice follows these steps.
Contact your vendors. If you deploy a third-party AI system that falls under Annex III, you need the provider’s technical documentation and conformity declaration. Start those conversations now. Providers who can’t deliver this paperwork by August 2026 are a compliance liability.
Budget for conformity assessment. If you’re a provider, the assessment process requires dedicated resources. If you’re a deployer, the fundamental rights impact assessment and monitoring obligations still require time, tools, and people. Neither is free.
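As promised above, here is a minimal sketch of the kind of inventory record worth keeping per system. The field names, enum labels, and the vendor in the example are our own hypothetical shorthand, not official Annex III identifiers:

```python
from dataclasses import dataclass, field
from enum import Enum

class AnnexIII(Enum):
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION = 3
    EMPLOYMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_BORDER = 7
    JUSTICE_DEMOCRACY = 8

@dataclass
class AISystemRecord:
    name: str
    vendor: str                     # "internal" for in-house systems
    role: str                       # "provider" or "deployer"
    what_it_actually_does: str      # the real function, not the marketing label
    annex_iii_categories: list[AnnexIII] = field(default_factory=list)
    conformity_docs_received: bool = False  # deployers: chase the vendor for these

    @property
    def high_risk_candidate(self) -> bool:
        return bool(self.annex_iii_categories)

inventory = [
    AISystemRecord(
        name="Talent analytics platform",
        vendor="ExampleVendor",  # hypothetical
        role="deployer",
        what_it_actually_does="scores and ranks job candidates",
        annex_iii_categories=[AnnexIII.EMPLOYMENT],
    ),
]
for s in inventory:
    if s.high_risk_candidate and not s.conformity_docs_received:
        print(f"Follow up with {s.vendor}: {s.name} needs provider documentation")
```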
The high-risk deadline is 2 August 2026. From where things stand in April 2026, that is well over a year of necessary preparation compressed into the four months that remain. Starting in Q3 2026 is not starting. It’s panicking.