Methodology

Regulatory Signals is a compliance evidence collection and gap analysis tool. This page explains what we scan, which frameworks we check against, how we derive confidence scores, and — critically — where our output ends and qualified legal review begins.

What we are

  • A regulatory signal monitoring tool
  • A technical evidence gathering layer
  • A compliance gap identification service
  • A draft documentation generator for legal review
  • A continuous monitoring system for compliance drift

What we are not

  • A law firm or legal advice provider
  • A certification or accreditation body
  • A guarantee of regulatory compliance
  • A substitute for qualified legal counsel
  • An official regulator or enforcement authority

Regulatory Sources

Sources are scraped and indexed on a daily refresh cycle.

Compliance Frameworks

Specific articles and obligations checked per scan

GDPR — General Data Protection Regulation

Scope: Any organisation processing personal data of EU/EEA residents

  • Art. 5 — Data processing principles
  • Art. 13 — Information to be provided
  • Art. 17 — Right to erasure
  • Art. 25 — Data protection by design
  • Art. 30 — Records of processing activities
  • Art. 35 — Data protection impact assessment
EU AI Act — EU Artificial Intelligence Act (2024/1689)

Scope: Providers and deployers of AI systems in the EU market. Enforcement begins August 2, 2026.

  • Art. 6 — Classification of high-risk AI
  • Art. 9 — Risk management system
  • Art. 11 — Technical documentation
  • Art. 13 — Transparency and information provision
  • Art. 17 — Quality management system
  • Art. 26 — Obligations of deployers
  • Annex I — AI techniques and approaches
CCPA / CPRA — California Consumer Privacy Act & California Privacy Rights Act

Scope: Businesses processing personal data of California residents above defined thresholds

  • § 1798.100 — Right to know
  • § 1798.105 — Right to delete
  • § 1798.110 — Right to information
  • § 1798.120 — Right to opt-out
  • § 1798.135 — Notice requirements
DORA — Digital Operational Resilience Act (2022/2554)

Scope: Financial entities and their ICT service providers operating in the EU

  • Art. 5 — ICT risk management framework
  • Art. 17 — ICT-related incident classification
  • Art. 19 — Reporting major ICT incidents
  • Art. 28 — Third-party risk management
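As an illustration, the per-framework article coverage above can be modelled as a simple mapping. The structure, names, and helper below are hypothetical, not the tool's actual schema:

```python
# Hypothetical representation of the framework coverage listed above.
# Structure and names are illustrative, not Regulatory Signals' schema.
FRAMEWORKS = {
    "GDPR": {
        "scope": "Organisations processing personal data of EU/EEA residents",
        "articles": {
            "Art. 5": "Data processing principles",
            "Art. 13": "Information to be provided",
            "Art. 17": "Right to erasure",
            "Art. 25": "Data protection by design",
            "Art. 30": "Records of processing activities",
            "Art. 35": "Data protection impact assessment",
        },
    },
    "DORA": {
        "scope": "Financial entities and their ICT service providers in the EU",
        "articles": {
            "Art. 5": "ICT risk management framework",
            "Art. 17": "ICT-related incident classification",
            "Art. 19": "Reporting major ICT incidents",
            "Art. 28": "Third-party risk management",
        },
    },
}

def articles_for(framework: str) -> list[str]:
    """Return the article identifiers checked for a given framework."""
    return list(FRAMEWORKS[framework]["articles"])
```

A gap analysis then reduces to checking each listed article against the evidence collected in a scan.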

Confidence Model

How we score classification certainty and when we flag for review

Every AI system scan produces a confidence rating alongside the risk classification. This reflects how certain the analysis is, not how severe the risk is. Medium- and low-confidence results are always flagged for human review, and we apply a conservative bias: when a result is ambiguous between adjacent risk levels, we classify it at the higher level.

  • High — Strong signal match against regulatory text. Low ambiguity. Suitable as preliminary evidence for legal review.
  • Medium — Partial signal match or context-dependent interpretation. Flagged for human review before being relied on for compliance decisions.
  • Low — Weak or ambiguous signal. High uncertainty. Always flagged for a qualified reviewer. Do not use as standalone compliance evidence.

Human Review Process

  1. Scan & classify

     Regulatory Signals scans your website or GitHub repository and generates a technical evidence report and risk classification.

  2. Review findings

     Your team reviews the gap analysis. High-confidence findings can be prioritised. Medium- and low-confidence findings are explicitly flagged for deeper review.

  3. Generate draft documents

     Use the generated policy drafts as a starting point. These are tailored to your actual technical footprint, not generic templates.

  4. Legal counsel signs off

     A qualified attorney reviews, edits, and approves all documents before use. Regulatory Signals provides the evidence; your legal team provides the judgment.

  5. Monitor for drift

     Continuous monitoring alerts you when new trackers appear, your compliance score drops, or new regulatory signals are published that affect your classification.
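The drift check in the final step amounts to comparing two scan snapshots. A minimal sketch, assuming hypothetical snapshot fields (`trackers`, `score`, `classification`) that are not the product's actual API:

```python
# Minimal sketch of drift monitoring as a snapshot diff.
# Field names and alert wording are illustrative assumptions.
def drift_alerts(previous: dict, current: dict) -> list[str]:
    """Compare two scan snapshots and return human-readable alerts."""
    alerts = []
    new_trackers = set(current["trackers"]) - set(previous["trackers"])
    if new_trackers:
        alerts.append(f"New trackers detected: {sorted(new_trackers)}")
    if current["score"] < previous["score"]:
        alerts.append(
            f"Compliance score dropped: {previous['score']} -> {current['score']}"
        )
    if current.get("classification") != previous.get("classification"):
        alerts.append("Risk classification changed: re-review required")
    return alerts
```

An empty result means no drift was observed between the two scans; any alert feeds back into step 2 of the review process.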

Known Limitations

Jurisdiction nuance: Regulatory interpretation varies by member state and sector. Our framework checks cover the primary articles; local guidance from national DPAs may impose additional requirements not captured here.

Dynamic websites: Scanner accuracy depends on the state of the site at scan time. Single-page apps, A/B tests, and geo-targeted content may produce different results on different runs.

Code analysis depth: AI system scans analyse repository structure and dependencies. Proprietary model weights, internal APIs, and obfuscated code cannot be inspected.

Regulatory lag: We publish signals within 24–48 hours of source publication. Urgent guidance or enforcement decisions may not be indexed immediately.

No certification: A scan result, compliance score, or generated document does not constitute certification of any kind under any regulation.

Questions about our approach?

We publish our source list and framework coverage publicly. If you believe a source or article is missing, reach out.