How to audit an AI system for EU AI Act compliance
The EU AI Act (Regulation 2024/1689) imposes structured compliance obligations on high-risk AI systems. This guide walks through a practical seven-step audit process — from determining whether the Act applies to producing the documentation your legal team and regulators will ask for.
Step 1. Determine whether the Act applies
The Act applies to providers placing AI systems on the EU market and to deployers using AI systems within the EU, regardless of where the provider is established. It also reaches providers and deployers located outside the EU when the output produced by the system is used in the EU.
Systems used exclusively for military, national security, or research and development (before market placement) are excluded. Personal, non-professional use is also excluded.
Step 2. Classify your AI system's risk tier
Map your system to one of four tiers:
- Unacceptable (Article 5): Check for prohibited practices — real-time remote biometric identification in publicly accessible spaces (narrow law-enforcement exceptions apply), social scoring, and subliminal or manipulative techniques.
- High risk (Annex III): Eight categories — biometrics, critical infrastructure, education and vocational training, employment, access to essential private and public services, law enforcement, migration and border control, and administration of justice and democratic processes.
- Limited risk: Transparency obligations only — chatbots, emotion recognition, AI-generated content.
- Minimal risk: No mandatory requirements — spam filters, recommendation engines, writing assistants.
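The four-tier triage above can be sketched as a small lookup. The category sets below are illustrative shorthand for this guide, not a legal test — real classification needs a lawyer reading Annex III against your actual use case.

```python
# Illustrative category sets; names are this guide's shorthand, not the Act's wording.
PROHIBITED = {"social_scoring", "subliminal_manipulation", "realtime_remote_biometric_id"}
HIGH_RISK = {"biometrics", "critical_infrastructure", "education", "employment",
             "essential_services", "law_enforcement", "migration", "justice"}
TRANSPARENCY_ONLY = {"chatbot", "emotion_recognition", "synthetic_content"}

def risk_tier(use_cases: set[str]) -> str:
    """Return the strictest tier triggered by any declared use case."""
    if use_cases & PROHIBITED:
        return "unacceptable"
    if use_cases & HIGH_RISK:
        return "high"
    if use_cases & TRANSPARENCY_ONLY:
        return "limited"
    return "minimal"
```

Because the strictest tier wins, a system that is both a chatbot and an employment-screening tool is classified high risk, not limited risk.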
Step 3. Map obligations to your risk tier
High-risk: Technical documentation (Art. 11), risk management system (Art. 9), data governance (Art. 10), human oversight (Art. 14), accuracy and robustness (Art. 15), conformity assessment (Art. 43), EU database registration (Art. 49).
Limited risk: Transparency disclosure to users. AI-generated synthetic content must be labelled under Article 50.
Minimal risk: No mandatory requirements. Voluntary codes of conduct encouraged.
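One way to keep this mapping auditable is a plain lookup table. The article labels below restate the lists above; the helper function is a hypothetical convenience, not part of any official tooling.

```python
# Headline obligations per tier, restated from the article references above.
OBLIGATIONS = {
    "high": ["Art. 9 risk management", "Art. 10 data governance",
             "Art. 11 technical documentation", "Art. 14 human oversight",
             "Art. 15 accuracy and robustness", "Art. 43 conformity assessment",
             "Art. 49 EU database registration"],
    "limited": ["Art. 50 transparency and content labelling"],
    "minimal": [],  # voluntary codes of conduct only
}

def obligations_for(tier: str) -> list[str]:
    """Look up headline obligations; prohibited systems have no compliant path."""
    if tier == "unacceptable":
        raise ValueError("Prohibited practice: system may not be placed on the EU market")
    return OBLIGATIONS.get(tier, [])
```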
Step 4. Conduct a technical audit of the AI system
Audit the system's code, model documentation, data pipeline, and deployment configuration. Surface:
- AI libraries and model providers in use
- Data types flowing into the model (personal data, biometric, financial, health)
- Whether autonomous decisions are made without human review
- Logging and monitoring coverage for Article 9 risk management
An automated repository scan can surface these signals in minutes. Scan your repository →
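A minimal sketch of what such a scan can look for. The library names, regex patterns, and file glob are illustrative assumptions for a Python codebase, not the product's actual rules.

```python
import re
from pathlib import Path

# Illustrative signal lists; extend for your stack and languages.
AI_LIBRARIES = {"torch", "tensorflow", "transformers", "sklearn", "openai"}
DATA_SIGNALS = {
    "biometric": r"face|fingerprint|voiceprint",
    "health": r"diagnosis|patient|icd10",
    "financial": r"iban|credit_score",
}

def scan_repo(root: str) -> dict:
    """Walk Python files and collect AI-library imports and sensitive-data hints."""
    findings = {"libraries": set(), "data_signals": set()}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lib in AI_LIBRARIES:
            if re.search(rf"^\s*(import|from)\s+{lib}\b", text, re.M):
                findings["libraries"].add(lib)
        for label, pattern in DATA_SIGNALS.items():
            if re.search(pattern, text, re.I):
                findings["data_signals"].add(label)
    return findings
```

Keyword matching like this produces false positives; treat the output as a starting inventory for human review, not a finding.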
Step 5. Prepare technical documentation (Article 11)
High-risk systems must maintain documentation covering:
- System description and intended purpose
- Design specifications and training methodology
- Performance metrics across relevant population groups
- Known limitations and foreseeable risks
- Human oversight mechanisms
- Post-market monitoring plan
Documentation must be kept up to date and available to national competent authorities on request.
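A lightweight completeness check over these headings can catch gaps before a regulator does. The section keys below are an illustrative mirror of the list above, not the Act's own structure.

```python
# Checklist keys mirroring the documentation headings listed above (illustrative).
ARTICLE_11_SECTIONS = [
    "system_description",
    "design_and_training",
    "performance_metrics",
    "limitations_and_risks",
    "human_oversight",
    "post_market_monitoring",
]

def missing_sections(docs: dict[str, str]) -> list[str]:
    """Return checklist sections that are absent or empty in the docs mapping."""
    return [s for s in ARTICLE_11_SECTIONS if not docs.get(s, "").strip()]
```

Running this in CI keeps "kept up to date" honest: a pull request that changes the model without touching the documentation fails the check.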
Step 6. Conduct a conformity assessment
Most Annex III high-risk systems can use internal self-assessment (internal control). Remote biometric identification systems may instead require a notified body, particularly where harmonised standards have not been fully applied. Document the assessment methodology, results, and corrective actions, and maintain the EU declaration of conformity.
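A minimal routing check, assuming only the biometric category triggers third-party assessment; the set membership and return labels are illustrative, and the real decision depends on which standards you apply.

```python
# Illustrative: categories assumed to need third-party assessment.
NOTIFIED_BODY_CATEGORIES = {"biometric_identification"}

def assessment_route(annex_iii_categories: set[str]) -> str:
    """'notified_body' if any category needs third-party assessment,
    otherwise 'internal_control' (self-assessment)."""
    if annex_iii_categories & NOTIFIED_BODY_CATEGORIES:
        return "notified_body"
    return "internal_control"
```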
Step 7. Produce and maintain audit evidence
Compile the evidence package a regulator will request:
- Technical documentation
- Risk management records
- Data governance records
- Human oversight configuration evidence
- Conformity assessment declaration
- EU database registration confirmation
Keep it updated as the system evolves — material changes may trigger a new conformity assessment.
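One way to track freshness is a small evidence register keyed by last-review date; the artifact names mirror the list above, and the dates are illustrative.

```python
from datetime import date

# Hypothetical evidence register: artifact -> date last reviewed (illustrative).
register = {
    "technical_documentation": date(2025, 3, 1),
    "risk_management_records": date(2024, 11, 15),
    "conformity_declaration": date(2024, 6, 20),
}

def stale_artifacts(register: dict, last_material_change: date) -> list[str]:
    """Artifacts not re-reviewed since the last material change to the system,
    which may also trigger a fresh conformity assessment."""
    return sorted(name for name, reviewed in register.items()
                  if reviewed < last_material_change)
```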
Run an automated AI system audit
Regulatory Signals scans your repository and produces an Annex III risk classification with supporting evidence — in minutes, not weeks. Your legal team reviews and signs off.