If you are building an AI product for the European market, this is the number you need to understand:
Up to €35 million, or 7% of global annual turnover, whichever is higher.
That is the statutory maximum penalty under the EU AI Act for the most serious violations. These figures are upper limits, not typical fines, but they set the scale regulators can work within.
But what does that actually mean for startups?
And more importantly: what can realistically get you fined?
The headline fines (simplified)
The EU AI Act (Article 99) defines several penalty tiers, scaled to the severity of the violation.
Prohibited AI systems
If you deploy AI that is explicitly banned — for example:
- social scoring systems
- manipulative AI targeting vulnerable groups
- certain prohibited biometric uses
Maximum penalty:
Up to €35M or 7% of global annual turnover (whichever is higher)
High-Risk AI non-compliance
If your system falls under Annex III (High-Risk AI) and you fail to meet requirements — for example:
- missing risk management
- no human oversight
- poor data governance
- lack of documentation
Maximum penalty:
Up to €15M or 3% of global annual turnover (whichever is higher)
Transparency and information violations
Failing to inform users that they are interacting with AI (for example chatbots or AI-generated content) is an Article 50 transparency breach, penalised under the €15M / 3% tier above. A separate, lower tier covers supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities:
Maximum penalty:
Up to €7.5M or 1% of global annual turnover (whichever is higher)
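To make the tier arithmetic concrete, here is a minimal Python sketch. The tier constants follow Article 99(3) to (5) as summarised above, the SME rule follows Article 99(6), and the turnover figures in the usage lines are hypothetical:

```python
# Illustrative only, not legal advice: statutory fine ceilings under
# Article 99 of the EU AI Act. Each tier pairs a fixed cap (EUR) with a
# percentage of global annual turnover.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # Art. 99(3)
    "high_risk_obligations": (15_000_000, 0.03),  # Art. 99(4), incl. Art. 50 transparency
    "incorrect_information": (7_500_000, 0.01),   # Art. 99(5)
}

def max_fine(tier: str, global_turnover_eur: float, is_sme: bool) -> float:
    """Return the statutory ceiling for one violation in the given tier."""
    fixed_cap, pct = TIERS[tier]
    turnover_cap = pct * global_turnover_eur
    # Art. 99(6): SMEs and startups get whichever amount is LOWER;
    # everyone else gets whichever is HIGHER.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Hypothetical startup with €4M turnover deploying a prohibited system:
print(max_fine("prohibited_practice", 4_000_000, is_sme=True))        # 280000.0
# Hypothetical enterprise with €2B turnover, same violation:
print(max_fine("prohibited_practice", 2_000_000_000, is_sme=False))   # 140000000.0
```

Even under the lower SME cap, a single violation can cost a meaningful fraction of a startup's annual revenue.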
What startups usually misunderstand
Most founders assume:
"We're too small to be fined."
That is not how EU regulation works.
Penalties are designed to scale with the offender. For most companies the cap is whichever of the fixed amount and the turnover percentage is higher; for SMEs and startups, Article 99(6) applies whichever is lower, which can still run into the millions. And regulators do not need to target everyone; a few visible enforcement examples are enough.
The real risk is not always the fine
For many startups, the biggest risk is not the penalty itself. It is:
- delayed launches
- lost enterprise deals
- failed due diligence
- reputational damage
If a potential client asks "Are you compliant with the EU AI Act?" and the answer is unclear, you have often already lost the deal.
What triggers problems in practice
In practice, most issues come from:
- unclear risk classification
- hidden AI usage
- missing documentation
- no human oversight mechanisms
The cause is usually not malicious intent but a lack of awareness.
How to reduce your risk
Before worrying about fines, you need to understand your position.
Start with:
- identifying your use case
- checking whether it falls under Annex III
- mapping required safeguards
This is where most teams struggle.
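As a first pass, the triage can be as simple as matching your use case against the Annex III areas. The sketch below is a deliberately naive illustration, not a legal classification; the area strings paraphrase Annex III, and the matching logic is hypothetical:

```python
# Rough self-triage sketch, not a legal classification.
# Area names paraphrase the high-risk areas listed in Annex III.
ANNEX_III_AREAS = [
    "biometric identification or categorisation",
    "critical infrastructure management",
    "education and vocational training",
    "employment and worker management",
    "access to essential private or public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
]

def triage(declared_areas: set[str]) -> str:
    """Map a self-declared set of use-case areas to a next step."""
    hits = [area for area in ANNEX_III_AREAS if area in declared_areas]
    if hits:
        return f"Potentially high-risk (Annex III match: {hits}). Map the required safeguards."
    return "No Annex III match on this naive check. Still verify transparency obligations."

print(triage({"employment and worker management"}))
```

A real assessment also has to consider Annex I (AI in products already covered by EU safety legislation) and the precise wording of each Annex III entry, which is exactly where the manual work piles up.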
A faster way to check your exposure
Instead of manually reading legal documents, you can:
- scan your live application
- or upload your system architecture (PDF)
and get a structured view of risk classification, potential gaps, and what to fix next.
See your risk classification in minutes
Scan a live app or upload architecture documentation and get a structured EU AI Act–oriented analysis.
Final thought
The EU AI Act is not designed to punish innovation. It is designed to enforce accountability.
For startups, the message is clear:
Ignorance won't protect you.
Understanding your risk level early is the difference between building with confidence and fixing things under pressure.
For founders
Are you building something that could fall under High-Risk AI? If you are not sure, that is already a signal to check.
Sources and further reading
- Regulation (EU) 2024/1689 (EU AI Act) — EUR-Lex
- The EU AI Act Explained: A Survival Guide for Startups and Developers
- Are You Building a "High-Risk" AI App? | EU AI Act for Developers
This article is informational and does not constitute legal advice.
