What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence systems. It establishes a risk-based classification system and imposes obligations on providers and deployers of AI systems that affect people in the European Union — regardless of where those systems are built.

Who it applies to

The Act applies to:

  • Providers — organisations that develop an AI system (or have one developed) and place it on the EU market or put it into service in the EU under their own name or trademark.
  • Deployers — organisations that use AI systems under their authority in the EU, other than in a purely personal, non-professional capacity.
  • Importers and distributors — organisations that place AI systems from third countries on the EU market (importers) or make them available within the EU supply chain (distributors).

The Act has extraterritorial reach: a US or UK company that ships a SaaS product with an AI feature to EU customers must comply if that AI output affects EU individuals. The legal test is whether the AI system's output is used in the EU — not where the provider is established.

Risk classifications (with examples)

The Act classifies AI systems into four risk tiers. The tier determines the compliance obligations.

1. Unacceptable risk — prohibited (Article 5)

These AI systems are banned outright:

  • Real-time remote biometric identification in public spaces for law enforcement (subject to narrow, exhaustively listed exceptions)
  • AI that manipulates people subliminally or exploits vulnerabilities based on age, disability, or a specific social or economic situation
  • Social scoring systems that lead to detrimental or disproportionate treatment (the ban covers public and private actors)
  • AI systems that infer emotions of individuals in workplaces or educational institutions (except for medical or safety reasons)
  • Untargeted scraping of facial images from the internet or CCTV footage to build recognition databases

2. High risk — heavy compliance obligations (Annex III)

High-risk AI systems must meet strict requirements before deployment. (AI used as a safety component of products regulated under Annex I, such as medical devices or machinery, is also classified high-risk.) Annex III examples include:

  • AI used in CV screening or recruitment decisions
  • AI used in credit scoring or insurance underwriting
  • AI used in medical diagnostics or treatment decisions
  • AI used in educational assessment or grading
  • AI used in critical infrastructure management
  • Biometric identification systems not classified as prohibited

Obligations include: conformity assessments, technical documentation, human oversight measures, accuracy and robustness testing, registration in the EU AI Act database, and post-market monitoring.

3. Limited risk — transparency obligations

AI systems that interact with humans must disclose that users are interacting with AI, unless this is obvious from the context (Article 50); a minimal code sketch follows the list below. Examples:

  • Customer-service chatbots and virtual assistants
  • AI-generated content (deepfakes, synthetic text) — must be labelled
  • Emotion-recognition systems (where not prohibited) — must notify users
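
As a rough illustration of the chatbot disclosure duty, the Python sketch below prepends an AI notice to the first reply in a conversation. The function name and message wording are hypothetical: the Act mandates that users be informed, not any particular phrasing or mechanism.

```python
def with_ai_disclosure(reply: str, is_first_message: bool) -> str:
    """Prepend an AI-interaction notice to the first chatbot reply.

    The Act requires that users be informed they are interacting with AI;
    the wording and placement here are illustrative, not mandated.
    """
    disclosure = "You are chatting with an AI assistant."
    if is_first_message:
        return f"{disclosure}\n\n{reply}"
    return reply


print(with_ai_disclosure("How can I help you today?", is_first_message=True))
```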

4. Minimal risk — no mandatory obligations

The majority of AI applications fall here. Examples:

  • Email spam filters
  • AI-powered search ranking
  • Recommendation engines for content or products
  • A/B testing tools
  • AI-assisted writing tools (where output is reviewed by a human)

Minimal-risk systems have no mandatory requirements, but the Commission encourages voluntary codes of conduct.
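
Read together, the four tiers amount to a decision procedure: check the prohibitions first, then Annex III use cases, then transparency triggers, and default to minimal risk. The sketch below encodes that order using simplified, hypothetical flags; an actual classification turns on the Act's precise legal definitions and needs legal review.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    # Simplified, hypothetical flags standing in for the Act's definitions.
    is_prohibited_practice: bool  # e.g. social scoring, subliminal manipulation
    annex_iii_use_case: bool      # e.g. recruitment, credit scoring, diagnostics
    interacts_with_humans: bool   # chatbots, generated content, emotion recognition


def risk_tier(system: AISystem) -> str:
    """Map a system to its EU AI Act risk tier (illustrative only)."""
    if system.is_prohibited_practice:
        return "unacceptable risk (banned, Article 5)"
    if system.annex_iii_use_case:
        return "high risk (Annex III obligations)"
    if system.interacts_with_humans:
        return "limited risk (transparency obligations)"
    return "minimal risk (voluntary codes of conduct)"


cv_screener = AISystem(False, True, False)
print(risk_tier(cv_screener))  # -> high risk (Annex III obligations)
```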

Key obligations for high-risk systems

  • Risk management system (Article 9): ongoing identification, evaluation, and mitigation of risks throughout the lifecycle.
  • Data governance (Article 10): training, validation, and testing datasets must meet quality criteria.
  • Technical documentation (Article 11): comprehensive documentation must be maintained and made available to authorities on request.
  • Transparency to deployers (Article 13): instructions for use must enable deployers to understand the system's capabilities and limitations.
  • Human oversight (Article 14): the system must be designed to allow effective human oversight and intervention.
  • Accuracy, robustness, and cybersecurity (Article 15): systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, and perform consistently throughout their lifecycle.
  • Conformity assessment (Article 43): self-assessment or third-party assessment before market placement.
  • Registration (Article 49): standalone high-risk systems must be registered in the EU database.
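
For teams tracking readiness, the obligations above map naturally onto a checklist keyed by article. The structure below is a hypothetical bookkeeping aid built from the list above, not an official compliance artifact.

```python
# Hypothetical readiness tracker keyed by the articles listed above.
HIGH_RISK_OBLIGATIONS = {
    "Art. 9":  "Risk management system",
    "Art. 10": "Data governance",
    "Art. 11": "Technical documentation",
    "Art. 13": "Transparency to deployers",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, cybersecurity",
    "Art. 43": "Conformity assessment",
    "Art. 49": "Registration in the EU database",
}


def outstanding(completed: set[str]) -> list[str]:
    """Return the obligations not yet marked complete."""
    return [f"{article}: {name}"
            for article, name in HIGH_RISK_OBLIGATIONS.items()
            if article not in completed]


print(outstanding({"Art. 9", "Art. 11"}))  # six obligations remain
```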

Enforcement timeline 2025–2027

Date               What applies
1 August 2024      Act entered into force
2 February 2025    Article 5 unacceptable-risk prohibitions apply
2 August 2025      GPAI model obligations (Chapter V) + governance chapter
2 August 2026      High-risk obligations (Annex III sectors) apply
2 August 2027      Full application of all provisions
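
Because the obligations phase in by date, a simple lookup tells you which provisions already apply on a given day. The sketch below hard-codes the milestones from the table above, with abbreviated labels.

```python
from datetime import date

# Milestones from the table above (labels abbreviated).
MILESTONES = [
    (date(2025, 2, 2), "Article 5 prohibitions"),
    (date(2025, 8, 2), "GPAI model obligations + governance chapter"),
    (date(2026, 8, 2), "High-risk obligations (Annex III)"),
    (date(2027, 8, 2), "Full application of all provisions"),
]


def applicable_on(day: date) -> list[str]:
    """List the provisions already in application on a given date."""
    return [label for start, label in MILESTONES if day >= start]


print(applicable_on(date(2026, 9, 1)))
# -> Article 5, GPAI, and Annex III high-risk obligations all apply.
```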

How to check your AI system

The quickest way to assess whether your AI system falls under the Act — and which tier it lands in — is to run an automated scan. Regulatory Signals analyses your repository or AI system configuration and produces an Annex III classification with supporting evidence your legal team can work from.

Frequently asked questions

Does the EU AI Act apply outside the EU?

Yes. The Act applies to any provider that places an AI system on the EU market, regardless of where that provider is established. A US SaaS company serving EU customers must comply if its AI output affects EU users. The territorial trigger is use in the EU, not the provider's domicile.

What are unacceptable-risk AI systems?

AI systems that pose unacceptable risks to fundamental rights are banned under Article 5. Examples include real-time biometric surveillance in public spaces, social scoring (by public or private actors), and AI that manipulates people below their threshold of conscious awareness.

When does enforcement begin?

Article 5 prohibitions applied from 2 February 2025. High-risk obligations under Annex III apply from 2 August 2026. Full application of all provisions follows on 2 August 2027. Fines for prohibited-use violations can reach €35 million or 7% of global annual turnover, whichever is higher.
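
The "€35 million or 7% of turnover, whichever is higher" cap is a simple maximum. The worked sketch below shows it for a hypothetical turnover figure; the function name is illustrative.

```python
def max_prohibited_use_fine(annual_turnover_eur: float) -> float:
    """Upper bound on fines for Article 5 violations: EUR 35 million
    or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)


# Hypothetical undertaking with EUR 1 billion annual turnover:
print(f"EUR {max_prohibited_use_fine(1_000_000_000):,.0f}")  # EUR 70,000,000
```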

Sources: Regulation (EU) 2024/1689 — EUR-Lex. This page is informational only and does not constitute legal advice.