AI system risk classification under the EU AI Act

The EU AI Act classifies every AI system into one of four risk tiers: unacceptable, high, limited, and minimal. The tier determines the compliance obligations, from outright prohibition to full conformity assessments to transparency labels. Knowing your tier is the first step to compliance: the Article 5 prohibitions have applied since 2 February 2025, and most high-risk obligations apply from 2 August 2026.

Tier 1 — Unacceptable risk: prohibited

Article 5 of the EU AI Act bans a set of AI applications outright because they pose unacceptable risks to fundamental rights or human dignity:

  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions; retrospective remote biometric identification is not prohibited but is high-risk under Annex III and requires judicial or administrative authorisation)
  • Biometric categorisation systems that infer sensitive attributes (race, political opinions, religion, sexual orientation)
  • Emotion recognition in workplaces or educational institutions
  • AI systems that manipulate people through subliminal or purposefully manipulative techniques, or that exploit vulnerabilities due to age, disability, or social or economic situation
  • Social scoring that leads to detrimental or unfavourable treatment (the final Act extends this prohibition beyond public authorities to private actors)
  • Predictive policing based solely on profiling (without objective, verifiable facts)
  • Untargeted scraping of facial images to build recognition databases

Key obligation: Do not deploy. Penalties for violation: up to €35 million or 7% of global annual turnover, whichever is higher.

Tier 2 — High risk: full compliance obligations

High-risk AI systems are listed in Annex III of the Act. The eight categories are:

  1. Biometrics (remote identification, categorisation, and emotion recognition, to the extent not prohibited under Article 5)
  2. Critical infrastructure management (energy, water, transport, digital infrastructure)
  3. Education and vocational training (grading, assessment, access decisions)
  4. Employment, workers management, and access to self-employment (CV screening, promotion, task allocation, monitoring)
  5. Access to essential private and public services (credit scoring, insurance underwriting, benefits eligibility)
  6. Law enforcement (risk assessments, evidence reliability, criminal profiling)
  7. Migration, asylum, and border control management
  8. Administration of justice and democratic processes

Key obligations (from 2 August 2026): Risk management system (Art. 9), data governance (Art. 10), technical documentation (Art. 11), transparency to deployers (Art. 13), human oversight (Art. 14), accuracy and robustness (Art. 15), conformity assessment (Art. 43), and EU database registration (Art. 49).

Tier 3 — Limited risk: transparency obligations

AI systems that interact with humans or generate content must disclose their AI nature under Article 50:

  • Chatbots and virtual assistants must inform users they are interacting with AI (unless obvious).
  • Emotion recognition systems must notify users when their emotions are being analysed.
  • AI-generated images, audio, video, and text must be labelled as machine-generated using machine-readable watermarks or markers.
  • Deep fakes must be disclosed as artificially generated or manipulated; for evidently artistic, creative, or satirical works, disclosure may be limited so that it does not hamper display or enjoyment of the work.

Key obligation: Transparency disclosure. No conformity assessment required.
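In practice, the chatbot disclosure can be satisfied with an explicit notice at the start of a session. A minimal sketch in Python, assuming a hypothetical `greet` helper and notice wording (the Act prescribes the obligation, not any particular text):

```python
# Illustrative Article 50 disclosure for a chatbot session.
AI_DISCLOSURE = "You are chatting with an AI assistant."

def greet(first_message: str, already_disclosed: bool = False) -> str:
    """Prepend the AI disclosure unless the user has already been informed."""
    if already_disclosed:
        return first_message
    return f"{AI_DISCLOSURE}\n\n{first_message}"
```

Systems whose AI nature is obvious from context are exempt, so a real implementation would gate the notice on that assessment rather than a simple flag.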

Tier 4 — Minimal risk: no mandatory obligations

The majority of AI systems fall here. There are no mandatory compliance requirements under the Act. Examples include spam filters, recommendation systems, AI-assisted writing tools (where human review occurs), and A/B testing tools.

The European Commission encourages providers of minimal-risk AI to voluntarily apply codes of conduct that reflect high-risk obligations — but this is not required by law.
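Across all four tiers, classification follows a fixed order of severity: check the Article 5 prohibitions first, then Annex III, then the Article 50 transparency triggers, and only then default to minimal risk. A minimal sketch of that decision order (the category names and the `classify` helper are illustrative simplifications, not terms from the Act):

```python
# Illustrative decision order for EU AI Act tiering.
# The flag sets below are simplified stand-ins for a real legal analysis.

PROHIBITED_PRACTICES = {          # Article 5 (non-exhaustive)
    "social_scoring_public_authority",
    "workplace_emotion_recognition",
    "untargeted_facial_scraping",
}
ANNEX_III_SECTORS = {             # high-risk categories (non-exhaustive)
    "employment", "education", "credit", "law_enforcement",
}
ART_50_TRIGGERS = {               # transparency obligations
    "chatbot", "deepfake_generation", "synthetic_media",
}

def classify(practices: set[str], sectors: set[str], features: set[str]) -> str:
    """Return the most severe applicable tier, checked in order."""
    if practices & PROHIBITED_PRACTICES:
        return "unacceptable"
    if sectors & ANNEX_III_SECTORS:
        return "high"
    if features & ART_50_TRIGGERS:
        return "limited"
    return "minimal"
```

For example, `classify(set(), {"employment"}, {"chatbot"})` returns `"high"`: a CV-screening chatbot is high-risk even though chatbots are normally limited-risk, because the most severe applicable tier wins.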

Common SaaS use cases — tier mapping

The table below maps common SaaS AI features to their EU AI Act tier. It is based on the Act text and Commission guidance published through Q1 2026.

| Use case | Tier | Basis |
| --- | --- | --- |
| Resume screener / applicant ranking | High risk | Employment (Annex III §4) |
| Credit scoring engine | High risk | Essential private services (Annex III §5) |
| Medical diagnosis assistant | High risk | Medical devices (Annex I) |
| Student exam grader | High risk | Education (Annex III §3) |
| Biometric identification system | High risk / Prohibited | Biometrics (Annex III §1) |
| Benefits eligibility determination | High risk | Essential public services (Annex III §5) |
| Predictive policing tool | High risk | Law enforcement (Annex III §6) |
| Customer-service chatbot | Limited risk | Transparency obligation (Art. 50) |
| AI writing assistant (human reviews output) | Minimal risk | No mandatory obligations |
| Email subject-line A/B testing | Minimal risk | No mandatory obligations |
| Product recommendation engine | Minimal risk | No mandatory obligations |
| Spam filter | Minimal risk | No mandatory obligations |
| Deepfake video generation tool | Limited risk | AI-generated content labelling (Art. 50) |
| Emotion recognition in workplace | Prohibited | Article 5(1)(f) |
| Social scoring by public authority | Prohibited | Article 5(1)(c) |
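Teams that want this mapping in code can encode it as a small lookup table. A sketch under the same assumptions as the table above (the keys and the `lookup_tier` helper are illustrative; the biometric identification row is omitted because its tier depends on the specific use):

```python
# Tier mapping for common SaaS AI use cases, mirroring the table above.
TIER_MAP: dict[str, tuple[str, str]] = {
    "resume_screener":       ("high",       "Employment (Annex III §4)"),
    "credit_scoring":        ("high",       "Essential private services (Annex III §5)"),
    "medical_diagnosis":     ("high",       "Medical devices (Annex I)"),
    "exam_grader":           ("high",       "Education (Annex III §3)"),
    "benefits_eligibility":  ("high",       "Essential public services (Annex III §5)"),
    "predictive_policing":   ("high",       "Law enforcement (Annex III §6)"),
    "customer_chatbot":      ("limited",    "Transparency obligation (Art. 50)"),
    "deepfake_generator":    ("limited",    "Content labelling (Art. 50)"),
    "writing_assistant":     ("minimal",    "No mandatory obligations"),
    "ab_testing":            ("minimal",    "No mandatory obligations"),
    "recommendation_engine": ("minimal",    "No mandatory obligations"),
    "spam_filter":           ("minimal",    "No mandatory obligations"),
    "workplace_emotion_rec": ("prohibited", "Article 5(1)(f)"),
    "public_social_scoring": ("prohibited", "Article 5(1)(c)"),
}

def lookup_tier(use_case: str) -> str:
    """Return the tier for a known use case; raises KeyError otherwise."""
    tier, _basis = TIER_MAP[use_case]
    return tier
```

A lookup like this is a starting point for internal triage, not a substitute for the contextual analysis the Act requires.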

Frequently asked questions

Is my chatbot high-risk?

Most customer-service chatbots are limited-risk, not high-risk. They require transparency disclosure (users must know they're talking to AI) but not a full conformity assessment. A chatbot is high-risk only if it makes or substantially influences decisions in an Annex III sector — for example, a chatbot that screens job applicants or determines credit eligibility.

What changes if classified as high-risk?

You must complete a conformity assessment (self-assessment for most categories), prepare technical documentation, implement a risk management system with ongoing monitoring, ensure human oversight is possible, and register the system in the EU database. These obligations apply from 2 August 2026.

How do I prove minimal risk?

There is no formal registration requirement for minimal-risk systems. However, documenting your classification reasoning is good practice — regulators may ask how you determined your system was outside Annex III. An automated scan produces a classification report with supporting evidence you can retain.

Classify your AI system automatically

Regulatory Signals scans your AI system's repository and produces an Annex III risk classification with supporting evidence — in minutes. Your legal team reviews and signs off.

Source: Regulation (EU) 2024/1689 — EUR-Lex. This page is informational only and does not constitute legal advice.