Is My SaaS a High-Risk AI System Under the EU AI Act? A Classification Decision Tree
Step through the Annex III classification decision tree for SaaS products. Includes the 4 cases most SaaS founders get wrong and what to do once classified.
The EU AI Act's penalties reach any company that places an AI system on the EU market, deploys one in the EU, or whose system's output is used in the EU, wherever the company is established. "High risk" is not a judgment about your company's values or intentions: it is a binary classification that triggers the obligations in Articles 9–15 and, from 2 August 2026, exposes you to fines of up to €15 million or 3% of global annual turnover (the ceiling rises to €35 million or 7% for prohibited practices). Getting the classification wrong in either direction is expensive. Under-classifying means non-compliance with the full Chapter III obligations. Over-classifying wastes engineering time on requirements that do not apply to you. Here is the decision tree.
Run a free scan on your site at Regulatory Signals to surface which of these apply to you.
What "High-Risk" Actually Means
Article 6 of Regulation (EU) 2024/1689 defines two routes to high-risk classification.
Route 1: Annex I — AI systems that are safety components in products already regulated by EU product safety legislation: medical devices, in vitro diagnostic devices, machinery, vehicles, aircraft and other civil aviation systems, marine equipment, and rail systems. If your SaaS is not a safety component embedded inside one of these regulated physical products, Route 1 does not apply to you; most pure-software SaaS is outside it.
Route 2: Annex III — Stand-alone AI applications in 8 specific domains. This is where the majority of SaaS misclassifications happen, in both directions. Teams either assume they are in scope when they are not, or assume they are out of scope when their use case clearly falls within one of the 8 domains.
The 8 Annex III domains, sketched as a lookup table after this list:
- Biometrics: remote biometric identification, biometric categorisation, and emotion recognition (Annex III, point 1)
- Management and operation of critical infrastructure (Annex III, point 2)
- Education and vocational training (Annex III, point 3)
- Employment, workers management and access to self-employment (Annex III, point 4)
- Access to and enjoyment of essential private services, public services, and benefits (Annex III, point 5)
- Law enforcement (Annex III, point 6)
- Migration, asylum, and border control management (Annex III, point 7)
- Administration of justice and democratic processes (Annex III, point 8)
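For teams that track classification per feature in code, the domains reduce to a small lookup table. A minimal TypeScript sketch; the constant name is ours, and the labels are shortened paraphrases rather than the Act's exact wording:

```typescript
// Illustrative lookup of Annex III points (Regulation (EU) 2024/1689).
// Labels are shortened paraphrases of the Annex III headings.
const ANNEX_III_DOMAINS: Record<number, string> = {
  1: "Biometrics: remote identification, categorisation, emotion recognition",
  2: "Management and operation of critical infrastructure",
  3: "Education and vocational training",
  4: "Employment, workers management, access to self-employment",
  5: "Access to essential private and public services and benefits",
  6: "Law enforcement",
  7: "Migration, asylum and border control management",
  8: "Administration of justice and democratic processes",
};
```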
The Decision Tree (Step Through This)
Work through these steps for each AI feature in your product, not for your product as a whole. A single SaaS product can have features with different classifications.
Step 1: Is your AI feature embedded as a safety component inside a product regulated by EU product safety legislation (medical device, vehicle, aircraft, machinery, rail system)?
- YES: Annex I applies. Consult the product-specific sectoral legislation alongside the AI Act. This analysis is outside the scope of this decision tree.
- NO: Proceed to Step 2.
Step 2: Does your AI feature make or materially influence decisions that fall within one of the 8 Annex III domains listed above?
The key phrase is "materially influence." If your AI system's output is a direct input to a consequential decision — even if a human nominally makes the final call — it likely qualifies. A scoring system that ranks candidates and is then reviewed by a recruiter is still influencing the employment decision.
- NO: Your feature is not high-risk under Annex III. You likely have limited-risk transparency obligations under Article 50 (chatbot disclosure, deepfake labelling), but Chapter III obligations do not apply. Document this classification explicitly in your risk register.
- YES: Proceed to Step 3.
Step 3: Is this AI system deployed only in a research and development context, with no end users and no consequential decisions made using its output?
Article 2(8) of the Regulation excludes research, testing, and development activity on AI systems before they are placed on the market or put into service. (Article 6(3), sometimes confused with it, is a separate derogation for Annex III systems that only perform narrow procedural or preparatory tasks; Step 2's "materially influence" test already filters those out.)
- YES: The R&D exclusion may apply. Document the development-only status explicitly and review it at each release milestone.
- NO: Your feature is high-risk. Work through the 12-item compliance checklist before the August 2026 deployment deadline.
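The three steps collapse into a short piece of executable logic. Here is a minimal TypeScript sketch; the `AiFeature` shape and its field names are our own illustration, not anything defined by the Act:

```typescript
// Hypothetical per-feature input; all field names are illustrative.
interface AiFeature {
  name: string;
  isSafetyComponentInRegulatedProduct: boolean; // Step 1 (Annex I)
  annexIiiPoint: number | null; // Step 2: 1-8, or null if out of scope
  materiallyInfluencesDecisions: boolean; // Step 2: feeds a consequential decision
  isPlacedOnMarketOrInService: boolean; // Step 3: R&D exclusion ends at launch
}

type Classification =
  | "annex-i-sectoral" // consult the product-specific sectoral legislation
  | "high-risk" // full Chapter III obligations
  | "r-and-d-exclusion" // document and re-review at each release
  | "not-high-risk"; // Article 50 transparency may still apply

function classify(feature: AiFeature): Classification {
  // Step 1: safety component in an Annex I regulated product?
  if (feature.isSafetyComponentInRegulatedProduct) return "annex-i-sectoral";

  // Step 2: in an Annex III domain AND materially influencing decisions?
  if (feature.annexIiiPoint === null || !feature.materiallyInfluencesDecisions) {
    return "not-high-risk";
  }

  // Step 3: R&D-only systems not yet placed on the market are excluded.
  if (!feature.isPlacedOnMarketOrInService) return "r-and-d-exclusion";

  return "high-risk";
}
```

Run it per feature, not per product: `classify(candidateRankingApi)` and `classify(supportChatbot)` can legitimately return different results for the same SaaS.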
The 4 Cases SaaS Teams Misclassify Most Often
Case 1: HR scoring and candidate-ranking tools
Annex III, point 4(a) covers AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, and evaluating candidates in the course of interviews or tests. A "skills assessment" tool, a "candidate scoring" API, or a hiring-funnel ranking system all fall here.
The critical legal language is "intended to be used for." Intent is determined by your marketing copy, your documentation, your sales messaging, and your product's designed purpose — not the technical architecture. A tool marketed as "helping HR teams identify top candidates" is intended for recruitment regardless of how the underlying model was built. B2B distribution does not remove the classification: if your customer uses the output to influence which candidates advance, you are a high-risk provider.
Case 2: Insurance or financial risk scoring sold as a SaaS component
Annex III, point 5(b) covers AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, except for the detection of financial fraud. Point 5(c) covers AI used for risk assessment and pricing in life and health insurance.
If your SaaS outputs risk scores, pricing recommendations, or eligibility determinations that insurers or lenders feed into their decisions, you are in scope. Being one step removed from the final decision does not exempt you — your output is the input to that decision. The fact that your customer makes the final call does not make you a deployer rather than a provider. You built the system that produces the risk score.
Case 3: Education assessment or certification AI in B2B products
Annex III, point 3 covers AI systems intended to be used to determine access or admission to educational and vocational training institutions (point 3(a)) or to evaluate learning outcomes (point 3(b)). This includes skills certification platforms, automated essay scoring, proctoring software that influences exam outcomes, and any system used by universities, employers, or training providers to determine whether someone passes a qualification gate.
The B2B wrapper does not change the classification. If the universities or employers using your product are using your AI output to make decisions about individuals' access to educational programmes or qualifications, your system is high-risk. Market it and build it accordingly.
Case 4: Any AI embedded in benefits eligibility or public service access decisions
Annex III, point 5(a) is broad: it covers AI systems intended to be used by public authorities, or on their behalf, to evaluate the eligibility of natural persons for essential public assistance benefits and services, and to grant, reduce, revoke, or reclaim them. "On their behalf" is what catches private vendors: government SaaS, civic tech platforms, and any tool used by housing authorities, welfare administrators, or social services fall here. This is one of the classifications govtech SaaS founders overlook most often.
Not sure which category applies? Run the free scan to get a preliminary EU AI Act risk classification for your stack in 30 seconds.
What To Do Once You Have Classified
If high-risk: The 2 August 2026 deadline is firm. Work through the 12 engineering items in the SaaS compliance checklist systematically. France (CNIL/ANSSI), Germany (BNetzA), and the Netherlands (ACM/AP) have named their national competent authorities and are actively preparing. Do not assume a clean slate in August: investigations into the pre-deadline conduct of already-deployed systems are possible.
If not high-risk: You almost certainly have limited-risk obligations under Article 50 of Regulation (EU) 2024/1689. Article 50(1) requires chatbots to disclose that users are interacting with an AI. Article 50(4) requires disclosure when AI-generated content is used for deepfakes. These are narrow, specific obligations — not the full Chapter III stack — but they are in force from August 2026 and enforcement is not limited to high-risk systems.
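For Article 50(1), the engineering change is usually small: make the disclosure an unconditional part of starting a session rather than a UI afterthought. A hedged sketch; the notice wording is our own, since the Act prescribes the disclosure, not its text:

```typescript
// Illustrative Article 50(1) gate: a chat session cannot be created
// without the AI disclosure as its first message.
interface ChatMessage {
  role: "disclosure" | "assistant" | "user";
  content: string;
}

function startAiChatSession(): ChatMessage[] {
  return [
    {
      role: "disclosure",
      content: "You are chatting with an AI assistant, not a human agent.",
    },
  ];
}
```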
Document your classification explicitly. A written classification rationale, stored in your risk register, is evidence of good-faith compliance if a regulator later questions your classification decision.
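The record does not need to be elaborate; a dated, structured entry per feature is enough. An illustrative shape, with field names that are assumptions rather than anything the Act mandates:

```typescript
// Illustrative risk-register entry; all field names are our own.
interface ClassificationRecord {
  feature: string;
  classification: "high-risk" | "not-high-risk" | "r-and-d-exclusion";
  annexIiiPoint: number | null; // e.g. 4 for employment; null if out of scope
  rationale: string; // why the decision tree gave this answer
  decidedBy: string;
  decidedOn: string; // ISO date; re-review at each release milestone
}

const candidateRanking: ClassificationRecord = {
  feature: "candidate-ranking-api",
  classification: "high-risk",
  annexIiiPoint: 4,
  rationale:
    "Output ranks applicants and is reviewed by recruiters before " +
    "shortlisting; it materially influences recruitment decisions (point 4(a)).",
  decidedBy: "Head of Engineering",
  decidedOn: "2026-01-15",
};
```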
Related
- EU AI Act Enforcement Starts August 2026 — enforcement timeline and penalty structure
- EU AI Act Compliance Checklist for SaaS — the 12-item action list once you have classified as high-risk
- EU AI Act vs GDPR for SaaS — how AI Act obligations stack alongside GDPR
The classification question is a 5-minute exercise with the tree above. The documentation work that follows takes longer — the audit pack generates the technical documentation, risk summary, and conformity checklist once you have classified your system, pre-filled with findings from your live site.
Regulatory Signals
Scan your site or AI system now
Detect trackers, check legal page adequacy, classify EU AI Act risk, and generate policy documents — in minutes.
Run a free scan