EU AI Act Compliance Checklist for SaaS: 12 Engineering Actions Before August 2026
Twelve concrete engineering tasks that cover Articles 9–15 of the EU AI Act for SaaS providers and deployers. Mapped to the August 2026 enforcement deadline.
Generic EU AI Act guides assume you are a large enterprise with a dedicated legal team and a year of runway. SaaS teams have a different problem: you are often both provider AND deployer — you built the AI component and you deploy it to customers — the multi-tenant data model complicates Article 10 data governance, and you are shipping under deadline pressure with a small engineering team. This is the concrete engineering checklist.
Run a free scan on your site at Regulatory Signals to surface which of these apply to you.
The Provider/Deployer Question (Read This First)
Articles 3(3) and 3(4) define the distinction that determines your compliance burden.
If you built the AI component — the model, the inference pipeline, the fine-tuning layer — you are a provider. The full Chapter III obligations (Articles 9–15) apply to you before market placement.
If you use AI from a third party (OpenAI, Anthropic, Mistral, Google) embedded in your product as a feature, you are a deployer. Your primary obligations are Articles 26 and 29: ensure the system is used per the provider's instructions, implement human oversight measures, and monitor for risks in your specific deployment context.
Many SaaS teams are both for different features. A team that built a custom NLP classifier is a provider for that feature. The same team using the Anthropic API for a summarisation feature is a deployer for that feature. Each classification is per-system, not per-company.
Identify your classification for every AI feature before starting item 1 below. It determines which items on the list apply to you in full versus in abbreviated form.
The 12-Item Checklist
1. Risk register entry for every AI feature
Article 9 requires a continuous risk management system. This is not a one-time audit document. "Done" looks like: a markdown file per AI feature (docs/risk/[feature].md) listing the risk category, residual risks identified, last-updated date, and named owner. The system must be updated when you retrain, when production drift is detected, or when a new deployment context is added. Tick this item only when the process for updating is written into your development workflow, not just the initial document.
2. Data lineage document for each training set used
Article 10 requires governance over training data. If you used a third-party base model, your "training data" in Article 10 terms is your fine-tuning datasets, RAG corpora, and evaluation sets. "Done" = a document per dataset listing origin, license, any PII content, demographic coverage, and the result of bias testing. If you use only third-party APIs with no fine-tuning, this item reduces to documenting the model version and the provider's data governance commitments.
3. Technical documentation template wired into the feature PR process
Article 11 requires technical documentation to exist before market placement. This is the obligation SaaS teams most frequently miss: the documentation must precede the launch, not follow it. "Done" = a docs/ai-technical-doc-[feature].md template that captures intended purpose, design logic, performance metrics, known limitations, and data inputs. The template is referenced in the PR description template or a checklist so it cannot be bypassed at merge time.
4. Per-tenant logging with 6-month retention
Article 12 requires automatic logging enabling traceability of AI outputs. In a multi-tenant SaaS context, this means logs must be isolatable per tenant, tagged with model version, include a prompt hash (not prompt content — hashing preserves privacy), output ID, and timestamp. Retention of at least 6 months is required. "Done" = logging architecture verified against these fields, retention policy set to ≥6 months, and per-tenant log isolation confirmed in the data model.
5. User-facing AI disclosure on every AI surface
Article 13 requires that deployers provide instructions of use and transparency information. For end users, this means every UI surface showing AI-generated content must disclose it as such. "Done" = a visible This content was generated by AI label (or equivalent) on every AI output surface in the product, plus a user-facing transparency notice in your documentation or help centre covering what AI is used, what it does, and what its limitations are.
6. Human override path documented in the support runbook
Article 14 requires that high-risk AI systems are designed to enable effective human oversight, including the ability to override or disregard output. "Done" = a /docs/human-override.md runbook explaining the exact steps a support agent or product admin can take to override or disable an AI-generated decision for a specific tenant or user, and how that override is logged for audit purposes.
Running behind? Get the full audit pack — it generates the technical documentation, DPIA template, and risk summary for you in 60 seconds.
7. Robustness and adversarial test suite in CI
Article 15 requires that high-risk AI systems achieve appropriate levels of accuracy, robustness, and cybersecurity. For SaaS engineering teams, this translates to automated adversarial tests covering prompt injection, out-of-distribution inputs, and edge-case behaviour. "Done" = at least one automated adversarial test per AI feature committed to tests/ai-robustness/, running in CI, with failures blocking merge.
8. Sub-processor list updated to include AI API providers
Adding any AI API provider (OpenAI, Anthropic, Mistral, Google Cloud AI) to your Privacy Policy and DPA sub-processor list overlaps with GDPR Article 28 obligations — but under the AI Act, failing to disclose these in your technical documentation (Article 11) and user-facing transparency notice (Article 13) is an independent violation. "Done" = sub-processor list updated to include all AI API providers, and both the Privacy Policy and DPA reflect this. One list satisfies both obligations.
9. Incident notification playbook (72-hour clock)
Article 73 covers serious incidents involving general-purpose AI models. Article 62 covers high-risk AI system incidents. Both establish notification obligations to national competent authorities. "Done" = an incident response playbook section specifying: what constitutes a reportable AI incident, who at the company owns the notification, which national authority to contact (varies by EU member state of establishment or primary deployment), and a 72-hour draft notification template that can be populated and sent without legal review delaying it.
10. DPIA template for AI features that process personal data
Article 9 of the AI Act overlaps with GDPR Article 35 for AI systems that process personal data. The EDPB has explicitly encouraged running a combined DPIA + AI Act risk assessment as a single document. "Done" = a completed combined DPIA + AI Act risk assessment per AI feature stored in /docs/dpia/, reviewed when the processing purpose or technical approach changes materially. See the EU AI Act vs GDPR post for detail on the fusion approach.
11. Conformity assessment checklist for high-risk features
Articles 43–49 govern conformity assessment. For most SaaS teams deploying high-risk AI systems, self-assessment is permitted under Annex VI. The assessment must be completed before deploying to EU users and updated when making significant changes. "Done" = a completed self-assessment checklist stored in /docs/conformity-assessment/, with a version and date, linked from the technical documentation for the relevant feature.
12. Monitoring assignment for EU AI Act database and guidance
The EU AI Act registry for high-risk systems is operational. Regulation (EU) 2024/1689 requires registration of high-risk systems before EU deployment. Separately, the European AI Office publishes ongoing guidance that affects implementation obligations. "Done" = a named owner assigned to monitor the EU AI Act registry and the European AI Office guidance at https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence, with a calendar reminder to review quarterly.
The 4 Mistakes SaaS Teams Make First
1. Treating it as a one-time audit
Article 9 uses the phrase "continuous risk management process" deliberately. The risk register is not a gate you pass through before launch — it is an operational artefact that must reflect the current state of the system. Teams that complete items 1–12 above and then archive them will be non-compliant by the time the first model update ships.
2. Assuming deployer-only status when you are actually a provider
The provider/deployer question at the top of this list is not a formality. If your team wrote the prompt engineering layer, the classification logic, or the output-processing code, regulators will treat you as a provider for that component regardless of what third-party model sits underneath it. Get a written legal opinion on the classification for any feature where it is ambiguous.
3. Conflating GDPR DPIAs with AI Act conformity assessments
These are different documents with different scope and different timing requirements. A DPIA is triggered by processing activities involving personal data. A conformity assessment is triggered by deploying a high-risk AI system — regardless of whether it processes personal data. The overlap (item 10 above) is real and exploitable, but the conformity assessment covers technical properties (robustness, accuracy, logging) that a DPIA does not. See our GDPR vs AI Act post for the full breakdown.
4. Waiting for the EU AI Act registry to be fully operational before starting documentation
The documentation obligations in Articles 11 and 12 apply from August 2026 regardless of the registry's operational status. The registry is a registration mechanism, not a pre-condition for compliance. Teams that treat "we'll document once the registry is live" as a plan are confusing two separate obligations.
Related
- EU AI Act Enforcement Starts August 2026 — the enforcement timeline and penalties
- Is My SaaS High-Risk Under the EU AI Act? — classify your system before working through this list
- EU AI Act vs GDPR for SaaS — what is new versus what you already have from GDPR work
See how many of these items your current setup already covers — run the free scan at /demo. For the documentation itself, the audit pack generates all 5 policy documents mapped to what it actually finds on your site.
Regulatory Signals
Scan your site or AI system now
Detect trackers, check legal page adequacy, classify EU AI Act risk, and generate policy documents — in minutes.
Run a free scan