EU AI Act Fines Explained: Up to €35M or 7% of Revenue
The largest EU AI Act fines reach €35 million or 7% of global annual turnover, whichever is higher. That puts the AI Act’s maximum penalty well above the GDPR ceiling of €20 million or 4% of turnover, and within sight of the 10% cap the Commission applies in antitrust cases.
And unlike GDPR, where enforcement took years to gain momentum, the AI Act builds on a decade of regulatory infrastructure that EU authorities have already stress-tested. The enforcement bodies exist. The appetite for large penalties exists. The political will to make an example of somebody exists.
If your organisation uses, develops, or distributes AI systems in the EU, the penalty structure is not theoretical. It determines how much risk you carry for every month you delay compliance.
Three tiers of EU AI Act fines
The AI Act organises penalties into three bands based on the severity of the violation. This is not a sliding scale. It is a hard classification: what you did wrong determines which band applies.
Tier 1: Prohibited AI practices. Up to €35 million or 7% of worldwide annual turnover. This covers the categories banned outright under Article 5: social scoring by public and private actors, real-time remote biometric identification in public spaces (with narrow exceptions), manipulation of vulnerable groups, and emotion recognition in workplaces and schools. If you deploy a system that falls into these categories, this is the band that applies. The turnover calculation uses the preceding financial year, consolidated at group level.
Tier 2: High-risk non-compliance. Up to €15 million or 3% of global turnover. This applies to providers of high-risk AI systems (Annex III) who fail to meet the requirements in Articles 8 through 15. That includes obligations around risk management systems, data governance, technical documentation, human oversight, accuracy, and cybersecurity. Deployers of high-risk systems who violate their obligations under Article 26 also fall here.
Tier 3: Incorrect or misleading information. Up to €7.5 million or 1% of global turnover. This is for supplying incorrect, incomplete, or misleading information to national authorities or notified bodies during conformity assessments or market surveillance. Lying to your regulator, in other words. The EU has decided that dishonesty deserves its own penalty category. A refreshingly specific legislative priority.
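The three bands reduce to a small lookup table. A sketch in Python, with the ceilings taken from the Regulation; the tier keys are shorthand, not official terminology:

```python
# Article 99 penalty ceilings: flat cap in EUR, percent of global annual turnover
FINE_TIERS = {
    "tier_1_prohibited_practices": {"flat_eur": 35_000_000, "pct_turnover": 7},
    "tier_2_high_risk":            {"flat_eur": 15_000_000, "pct_turnover": 3},
    "tier_3_misleading_info":      {"flat_eur": 7_500_000,  "pct_turnover": 1},
}
```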
How fines are calculated in practice
National market surveillance authorities decide the actual amount, not the Commission. The Regulation sets maximum ceilings. Each member state’s enforcement body has discretion within those ceilings.
Article 99 lists the factors authorities must consider: the nature, gravity, and duration of the infringement. Whether the organisation cooperated. Whether they took corrective action. Whether they have prior violations. The size and market share of the company. Any gains obtained from the infringement. These criteria mirror GDPR’s approach under Article 83, which means EU regulators already have a decade of practice applying them.
The “whichever is higher” formula matters enormously. For a company with €100 million in turnover, 7% is only €7 million, so the flat €35 million is the applicable ceiling: the fixed amount acts as a floor for anyone below roughly €500 million in turnover, the point where the two figures converge. For a company with €10 billion in turnover, 7% means €700 million. No cap. (Startups and other SMEs get a reversed formula, covered below.)
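The arithmetic is worth seeing in one place. A minimal sketch, assuming integer euro amounts and whole-number percentages; the function name is illustrative, the ceilings come from Article 99:

```python
def standard_ceiling(turnover_eur: int, flat_cap_eur: int, pct: int) -> int:
    """Article 99 ceiling for non-SMEs: the flat amount or the
    turnover percentage, whichever is HIGHER."""
    return max(flat_cap_eur, turnover_eur * pct // 100)

# Tier 1 (prohibited practices): EUR 35M or 7% of group-level turnover
print(standard_ceiling(100_000_000, 35_000_000, 7))     # 35000000  (flat amount is the floor)
print(standard_ceiling(500_000_000, 35_000_000, 7))     # 35000000  (the two figures converge)
print(standard_ceiling(10_000_000_000, 35_000_000, 7))  # 700000000 (no cap)
```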
One detail that compliance teams keep overlooking: the turnover calculation is group-level, not entity-level. A subsidiary deploying prohibited AI exposes the entire parent company’s global revenue to the percentage calculation.
SMEs and startups get a different formula
The AI Act flips the calculation for small and medium-sized enterprises, including startups. For everyone else, the fine is “whichever is higher” between the fixed euro amount and the turnover percentage. Article 99(6) reverses this for SMEs: the fine is “whichever is lower.”
In practice, this means the percentage almost always wins. An SME with €2 million in revenue facing a Tier 1 violation pays up to 7% of turnover (€140,000), not the €35 million flat amount. The fixed euro ceiling that terrifies large corporations becomes irrelevant. For Tier 2, the same SME faces up to €60,000 (3% of €2 million) instead of €15 million.
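The SME reversal is a one-word change to the formula: min instead of max. A sketch under the same assumptions as above (illustrative function name, integer euros, ceilings from Article 99):

```python
def sme_ceiling(turnover_eur: int, flat_cap_eur: int, pct: int) -> int:
    """Article 99(6) ceiling for SMEs and startups: the flat amount
    or the turnover percentage, whichever is LOWER."""
    return min(flat_cap_eur, turnover_eur * pct // 100)

# EUR 2M-turnover startup, Tier 1: the 7% figure beats the EUR 35M flat amount
print(sme_ceiling(2_000_000, 35_000_000, 7))  # 140000
# Same startup, Tier 2: 3% of EUR 2M instead of EUR 15M
print(sme_ceiling(2_000_000, 15_000_000, 3))  # 60000
```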
This is not immunity. An SME deploying a prohibited AI system will still face enforcement. €140,000 can be existential for a seed-stage startup. And the obligation to cease the practice and the reputational damage remain identical regardless of company size.
Not sure where you stand? Take the free AI Act Readiness Assessment.
When enforcement actually starts
The prohibited AI practices ban took effect on 2 February 2025, and the penalty provisions of Article 99 have applied since 2 August 2025, so fines for Article 5 violations can now be imposed. In practice, most member states are still designating their national competent authorities, so enforcement capacity is limited.
The high-risk obligations under Annex III take effect on 2 August 2026 (pending the outcome of ongoing delay discussions around the Digital Omnibus package, which could push this to December 2027). Penalties for Tier 2 violations begin on the same date.
The general-purpose AI model obligations apply from 2 August 2025. GPAI providers face fines of up to €15 million or 3% of global turnover, matching the Tier 2 ceilings, but imposed directly by the Commission under Article 101 rather than by national authorities.
Each member state must establish at least one AI regulatory sandbox and designate at least one market surveillance authority. Germany plans to hand the role to the Federal Network Agency (BNetzA); in France, the CNIL is the expected lead. Most other member states are still sorting out jurisdictions, budgets, and staffing. Enforcement will not be uniform across the EU for several years. But the penalty ceilings are uniform from day one.
What your organisation should do now
Stop treating the AI Act penalty structure as a distant risk. The prohibited practices ban is already enforceable. High-risk obligations are months away.
Map your AI systems against Annex III. Every AI system your organisation uses, deploys, or provides needs to be classified. If any system falls into the high-risk categories, you are already in scope for Tier 2 fines once the deadline passes.
Audit for prohibited practices immediately. Article 5 violations carry the highest fines and are enforceable now. Emotion recognition in the workplace, AI-driven social scoring, manipulative dark patterns targeting vulnerable users. If any of your systems touch these categories, remediation is urgent.
Document everything. Tier 3 fines exist because regulators expect accurate information during oversight. Incomplete technical documentation, misleading conformity declarations, or gaps in your risk management records create standalone liability. Even if the underlying AI system is compliant, poor documentation can trigger penalties of up to €7.5 million.
Set group-level governance. The parent company’s global turnover is the calculation base. If AI compliance decisions are being made at the subsidiary level without group visibility, the financial exposure is invisible to the people who carry it.