Is Your AI App High-Risk? Real Examples Under the EU AI Act

Damir Andrijanic

Most developers and founders building AI products today are asking the wrong question. When evaluating compliance, the focus is almost always on the tech stack: "Which model am I using? Is it GPT-4, Claude, or a local Llama model?"

But under the EU AI Act, the underlying technology is largely irrelevant. Your use case determines everything. More specifically, the exact context in which your AI operates dictates whether your product falls into the dreaded "High-Risk" category. Here is a breakdown of how the EU AI Act classifies risk, along with real-world examples to help you figure out where your SaaS stands.

Why Your Risk Classification Matters

If your system is classified as high-risk, you cannot simply launch it and hope for the best. You are legally required to implement a heavy layer of governance before your product ever touches the European market. This includes:

  • Establishing Risk Management Systems: Continuous identification and mitigation of risks.
  • Human Oversight: Designing UI/UX that allows humans to intervene or override the AI.
  • Extensive Technical Documentation: The notorious "Annex IV" documentation detailing your architecture, data pipelines, and training methods.
  • Logging and Monitoring: Granular traceability to prove exactly how an AI reached a specific output.
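To make that last requirement concrete, here is a minimal sketch of what a per-decision audit record could look like. The schema, field names, and the logDecision helper are all invented for illustration; the Act requires traceability but does not prescribe a format.

```typescript
// Illustrative audit record for a single AI decision. Field names and the
// in-memory store are assumptions, not a schema mandated by the Act.
interface DecisionRecord {
  timestamp: string;      // when the output was produced
  modelVersion: string;   // which model/configuration generated it
  inputDigest: string;    // hash of the input so the case can be replayed
  output: string;         // what the system actually returned
  humanReviewer?: string; // set when a person intervened or overrode
}

const auditLog: DecisionRecord[] = [];

function logDecision(record: DecisionRecord): void {
  auditLog.push(record); // in production: append-only, durable storage
}

logDecision({
  timestamp: new Date().toISOString(),
  modelVersion: "support-bot-2.1",
  inputDigest: "sha256:<input-hash>", // placeholder
  output: "Suggested article: 'How to reset your password'",
});
```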

Getting this wrong doesn't just mean you are "a bit non-compliant." It exposes your startup to massive regulatory penalties (up to €15 million or 3% of global annual turnover, whichever is higher), blocked deployments, and enterprise clients walking away during procurement audits.

The Risk-Based Approach (Simplified)

The EU AI Act avoids regulating the technology itself and instead divides AI systems into four distinct use-case categories:

Unacceptable Risk

Systems that manipulate human behavior, exploit vulnerabilities, enable social scoring, or perform biometric categorization based on sensitive traits (these are banned outright).

High-Risk

Systems affecting safety, fundamental rights, employment, or essential services (strict regulation applies).

Limited Risk

Systems like chatbots where the main risk is deception (transparency is required).

Minimal Risk

General applications like spam filters or video games (almost no obligations).

The tricky part for startups is that the line between "Limited" and "High" risk is exceptionally thin.

Real-World Scenarios: Where Does Your App Fit?

Let's break down the legal jargon with practical, everyday SaaS scenarios so you can compare your own architecture.

Example 1: AI Chatbot for Customer Support

The Use Case: An AI agent integrated into your SaaS to answer FAQs, guide users through documentation, and handle basic support automation.

The Classification: Limited Risk.

The Requirements: You simply need to disclose to the user that they are interacting with an AI, not a human.
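A minimal sketch of what that disclosure could look like at the API level, assuming a hypothetical ChatReply shape. The Act requires that users be informed; the exact wording and mechanism are up to you.

```typescript
// Illustrative only: carrying the "you are talking to an AI" disclosure in
// every chat response. The shape and wording are assumptions, not mandated text.
interface ChatReply {
  message: string;
  aiDisclosure: string; // rendered to the user alongside the reply
}

function replyWithDisclosure(message: string): ChatReply {
  return {
    message,
    aiDisclosure: "You are chatting with an AI assistant, not a human agent.",
  };
}

console.log(replyWithDisclosure("You can reset your password under Settings."));
```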

The Trap: A chatbot is low-risk until it starts making decisions. If your customer support bot is suddenly given the authority to approve refunds or deny user claims based on its own reasoning, it can quickly cross into High-Risk territory.
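One common way to stay on the Limited-Risk side is to hard-code which actions the bot may take alone and escalate everything consequential to a person. A hedged sketch, with invented action names and a hypothetical review queue:

```typescript
// Hedged sketch: keep consequential actions out of the bot's hands.
// Action names and the review queue are hypothetical.
type BotAction =
  | { kind: "answer"; text: string }          // informational: bot may act alone
  | { kind: "approveRefund"; amount: number } // consequential: needs a human
  | { kind: "denyClaim"; claimId: string };   // consequential: needs a human

const humanReviewQueue: BotAction[] = [];

function execute(action: BotAction): string {
  if (action.kind === "answer") {
    return action.text; // pure information: reply directly
  }
  humanReviewQueue.push(action); // anything that decides for the user escalates
  return "A member of our team will review this and get back to you.";
}

console.log(execute({ kind: "answer", text: "Refunds usually take 3-5 days." }));
console.log(execute({ kind: "approveRefund", amount: 49 }));
```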

Example 2: AI for CV Screening & HR Tech

The Use Case: A tool that parses resumes, ranks applicants based on job descriptions, and automates initial hiring decisions.

The Classification: High-Risk (Explicitly listed under Annex III of the AI Act).

The Requirements: Because this directly impacts a person's employment and livelihood, you must implement strict bias monitoring, human-in-the-loop oversight, and full Annex IV technical documentation.
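As one illustration of bias monitoring, here is a sketch that compares shortlisting rates across two applicant groups. The 0.8 threshold mirrors the US "four-fifths" rule of thumb, used here purely as an example; the AI Act does not prescribe any particular fairness metric.

```typescript
// Illustrative bias check: compare shortlisting rates across two groups.
// The 0.8 threshold is borrowed from US practice as one possible monitoring
// signal; it is not a metric the EU AI Act prescribes.
function selectionRate(shortlisted: number, applicants: number): number {
  return applicants === 0 ? 0 : shortlisted / applicants;
}

function adverseImpactRatio(
  groupA: [shortlisted: number, applicants: number],
  groupB: [shortlisted: number, applicants: number],
): number {
  const rateA = selectionRate(...groupA);
  const rateB = selectionRate(...groupB);
  return Math.min(rateA, rateB) / Math.max(rateA, rateB);
}

const ratio = adverseImpactRatio([30, 100], [18, 100]);
if (ratio < 0.8) {
  console.warn(`Possible disparate impact: ratio ${ratio.toFixed(2)} is below 0.8`);
}
```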

Example 3: E-commerce Recommendation Engines

The Use Case: An algorithm that tracks user behavior to suggest products, recommend content, or personalize the dashboard experience.

The Classification: Minimal Risk.

The Context: Since recommending a pair of shoes or a SaaS feature has no direct impact on fundamental human rights, the EU leaves these systems largely unregulated.

Example 4: Algorithmic Credit Scoring

The Use Case: A fintech integration that determines loan eligibility or assesses a user's financial risk profile.

The Classification: High-Risk.

The Context: Access to financial services is highly protected. Relying on "black box" AI to deny someone a loan requires maximum transparency and rigorous compliance documentation.
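A minimal sketch of one transparency building block: storing human-readable reason codes alongside every adverse decision so it can be explained and audited later. The schema, threshold, and wording are invented for illustration.

```typescript
// Sketch of attaching human-readable reason codes to an adverse credit
// decision. The schema, threshold, and wording are invented for illustration.
interface CreditDecision {
  approved: boolean;
  score: number;
  reasonCodes: string[]; // recorded grounds, stored with the decision
}

function decide(score: number, threshold = 620): CreditDecision {
  const approved = score >= threshold;
  return {
    approved,
    score,
    reasonCodes: approved
      ? []
      : [`Score ${score} is below the approval threshold of ${threshold}`],
  };
}

console.log(decide(580)); // denied, with a recorded, reviewable reason
```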

Example 5: AI Marketing Content Generation

The Use Case: Utilizing LLMs to write blog posts, generate ad copy, or automate social media scheduling.

The Classification: Minimal to Limited Risk.

The Context: You may need to watermark AI-generated media (like deepfakes or hyper-realistic images), but standard text generation for marketing carries very little regulatory burden.
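If you do generate media, a minimal provenance tag might look like the sketch below. The field names are invented; a real pipeline would lean on an established content-provenance standard rather than this ad-hoc shape.

```typescript
// Minimal sketch of tagging generated media with a machine-readable
// AI-provenance label. Field names are invented for illustration.
interface GeneratedAsset {
  uri: string;
  aiGenerated: true;  // explicit provenance flag
  generator: string;  // which system produced the asset
  generatedAt: string;
}

function labelAsset(uri: string, generator: string): GeneratedAsset {
  return { uri, aiGenerated: true, generator, generatedAt: new Date().toISOString() };
}

console.log(labelAsset("https://example.com/ad-visual.png", "image-model-x"));
```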

| Scenario | Illustrative tier |
| --- | --- |
| Customer support chatbot | Limited (watch decision authority) |
| CV screening / HR automation | High-Risk |
| Product recommendations | Minimal |
| Credit scoring | High-Risk |
| Marketing copy generation | Minimal to Limited |

The Real Problem: Feature Creep

The hardest part of AI Act compliance isn't writing the documentation or adding disclaimers. It is correctly identifying your risk category as your product evolves.

A SaaS product can move from low-risk to high-risk with just one feature update. If you are building AI and targeting the EU market, you need to constantly ask yourself: Does my system influence decisions about people? Does it impact employment, finance, or safety?

If the answer is yes, you are likely already operating in high-risk territory.
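Those two questions are simple enough to encode as a first-pass triage, shown below as a deliberately crude sketch. It is indicative only; actual classification turns on Annex III and legal review, not a boolean check.

```typescript
// Deliberately crude first-pass triage encoding the two questions above.
// Indicative only: real classification turns on Annex III and legal review.
function indicativeTier(opts: {
  influencesDecisionsAboutPeople: boolean;
  touchesEmploymentFinanceOrSafety: boolean;
}): string {
  if (opts.influencesDecisionsAboutPeople && opts.touchesEmploymentFinanceOrSafety) {
    return "likely high-risk: start Annex IV documentation now";
  }
  return "re-run this check at every feature release";
}

console.log(indicativeTier({
  influencesDecisionsAboutPeople: true,
  touchesEmploymentFinanceOrSafety: true,
}));
```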

How to Know Where You Stand

Most AI products won't fail because of bad code; they'll fail because of misunderstood regulation. The sooner you understand your risk level, the faster you can build and ship with confidence.

Take the 2-Minute AI Act Risk Classification Test

Find out exactly where your SaaS stands and what technical documentation you need to prepare before your next enterprise audit.


This article is informational and does not constitute legal advice.