AI Transparency
Last updated: March 18, 2026. ComplianceRadar.dev is committed to transparent, human-centric, and accountable AI deployment in line with the European Union AI Act. This technical transparency report summarizes how our AI features are implemented and governed to support both provider and deployer obligations.
1. Introduction and EU AI Act alignment
ComplianceRadar.dev is designed around responsible use of AI for legal and regulatory support. We document our model usage, processing boundaries, limitations, and oversight expectations so users can understand how automated outputs are generated and where human judgment remains essential.
2. AI system architecture
Our compliance analysis layer currently uses Google Gemini (gemini-2.5-flash) via secure server-side API integration. Requests are sent from our backend environment over encrypted channels with restricted API-key access. We do not expose provider credentials to clients.
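As a minimal sketch of this pattern, the backend can build the Gemini request itself so the API key is read from a server-side environment variable and never reaches the browser. The prompt text, environment variable name, and endpoint wiring below are illustrative assumptions, not our exact implementation:

```python
import json
import os
import urllib.request

# Public Gemini REST endpoint for the gemini-2.5-flash model.
GEMINI_ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-2.5-flash:generateContent"
)

def build_request(page_text: str) -> urllib.request.Request:
    """Build a server-side Gemini request; the key never leaves the backend."""
    # Hypothetical variable name; the key is injected into the server
    # environment and is not present in any client-delivered code.
    api_key = os.environ["GEMINI_API_KEY"]
    body = json.dumps({
        "contents": [
            {"parts": [{"text": f"Assess this page for compliance:\n{page_text}"}]}
        ]
    }).encode("utf-8")
    return urllib.request.Request(
        GEMINI_ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Sending the key as a header keeps it out of URL-based logs.
            "x-goog-api-key": api_key,
        },
        method="POST",
    )
```

The request object would then be sent with `urllib.request.urlopen` (or an HTTP client of choice) over TLS from the backend only.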
3. Data processing and retention boundaries
3a. URL scans
During a URL scan, user-submitted URLs and extracted public website content are processed only to generate a compliance assessment. In line with standard API provider terms for commercial model access, this data is not used by ComplianceRadar.dev to train foundation models. We retain the structured compliance report outputs needed to deliver the service, while raw scraped page content is not retained as a persistent dataset for model development.

For non-authenticated scans, report viewing is controlled via signed, time-limited access tokens; once a token expires, report access requires a new authorized access path.
3b. Architecture PDF uploads
When a user uploads a technical architecture document (PDF), the raw file bytes and all extracted text artifacts are processed solely to generate a compliance report. Once the report is successfully generated, raw PDF bytes and extracted text are permanently deleted from our systems. We retain only the structured compliance insights — never the original document. Uploaded documents are not used as model training data. All generated reports are tied to the authenticated user session and kept private by default.
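The lifecycle above can be sketched as an ephemeral-processing function: the raw file exists only for the duration of report generation and is removed even if analysis fails. The `extract` and `analyze` callables are placeholders for real PDF-text extraction and compliance analysis, not our actual function names:

```python
import os
import tempfile

def process_upload(pdf_bytes: bytes, extract, analyze) -> dict:
    """Generate a structured report, then discard raw bytes and extracted text."""
    tmp = tempfile.NamedTemporaryFile(suffix=".pdf", delete=False)
    try:
        tmp.write(pdf_bytes)
        tmp.close()
        text = extract(tmp.name)   # transient extraction artifact
        report = analyze(text)     # structured compliance insights only
    finally:
        os.unlink(tmp.name)        # raw file removed regardless of outcome
    # Only the structured report leaves this function; the original
    # document bytes and extracted text are not persisted anywhere.
    return report
```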
4. Intended purpose
The AI component is intended solely for analyzing public web pages and uploaded technical architecture documents against compliance frameworks, including the EU AI Act, GDPR, and ePrivacy requirements. It is not designed for autonomous decision-making, profiling individuals, or unrelated predictive tasks.
5. Known limitations and AI hallucinations
AI outputs are probabilistic and may include inaccuracies, such as false positives, false negatives, fabricated details (hallucinations), or incomplete interpretations. Results can vary depending on available page content, website structure, document completeness and quality, and ambiguous legal or technical wording.
6. Human oversight (human-in-the-loop)
ComplianceRadar.dev is an assistive co-pilot for compliance review and is not a replacement for qualified legal advice. Users must manually review all AI-generated findings and recommendations before acting on them. For legal decisions, users should consult a certified lawyer or other appropriately qualified professional.
7. Prompt governance
We use structured system prompts to constrain model behavior to compliance-focused analysis and to request machine-readable, schema-conformant outputs. This prompt governance limits drift and keeps generated responses aligned with defined regulatory categories and reporting requirements.
For related controls and data-handling context, see our Security and Privacy Policy.