AI Transparency
Last updated: March 17, 2026. ComplianceRadar is committed to transparent, human-centric, and accountable AI deployment in line with the European Union AI Act. This technical transparency report summarizes how our AI features are implemented and governed to support both provider and deployer obligations.
1. Introduction and EU AI Act alignment
ComplianceRadar is designed around responsible use of AI for legal and regulatory support. We document our model usage, processing boundaries, limitations, and oversight expectations so users can understand how automated outputs are generated and where human judgment remains essential.
2. AI system architecture
Our compliance analysis layer currently uses Google Gemini (gemini-2.5-flash) via secure server-side API integration. Requests are sent from our backend environment over encrypted channels with restricted API-key access. We do not expose provider credentials to clients.
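The credential-isolation pattern described above can be sketched as follows. This is an illustrative example, not ComplianceRadar's actual implementation: the function names, the `GEMINI_API_KEY` environment variable, and the report shape are assumptions, though the endpoint URL, `x-goog-api-key` header, and request body follow Google's published Gemini REST API format.

```python
import json
import os

# Public Gemini REST endpoint for the model named in this report.
GEMINI_ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-2.5-flash:generateContent"
)


def build_provider_request(page_text: str) -> dict:
    """Build the outbound request sent from the backend to the model API.

    The API key is read from the server environment and attached only
    here; it is never included in anything sent back to a client.
    """
    api_key = os.environ["GEMINI_API_KEY"]  # server-side only
    return {
        "url": GEMINI_ENDPOINT,
        "headers": {
            "x-goog-api-key": api_key,
            "Content-Type": "application/json",
        },
        "body": json.dumps({"contents": [{"parts": [{"text": page_text}]}]}),
    }


def response_for_client(report: dict) -> dict:
    """Clients receive only the structured report, never credentials."""
    return {"report": report}
```

The key point of the sketch is the boundary: `build_provider_request` runs exclusively in the backend, and `response_for_client` defines the entire surface exposed to browsers.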
3. Data processing and zero data retention posture
During a scan, user-submitted URLs and extracted public website content are processed only to generate a compliance assessment. In line with standard API provider terms for commercial model access, this data is not used by ComplianceRadar to train foundation models. We store only the structured compliance report needed to deliver the service; raw scraped page content is not retained as a persistent dataset for model development.
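The retention posture above can be illustrated with a minimal sketch. The `ScanReport` type and its fields are hypothetical, not our production schema; the point is that only the structured report object is ever handed to persistence, while raw page content remains transient request state.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ScanReport:
    """The only artifact persisted after a scan (illustrative schema)."""
    url: str
    findings: list  # structured compliance findings
    created_at: str


def finalize_scan(url: str, raw_page_content: str, findings: list) -> ScanReport:
    """Produce the persistable report; raw content is not part of it.

    raw_page_content is used upstream to generate findings, then goes
    out of scope here without ever being written to storage.
    """
    return ScanReport(
        url=url,
        findings=findings,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
```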
4. Intended purpose
The AI component is intended solely for analyzing public web pages against compliance frameworks, including the EU AI Act, GDPR, and ePrivacy requirements. It is not designed for autonomous decision making, profiling individuals, or unrelated predictive tasks.
5. Known limitations and AI hallucinations
AI outputs are probabilistic and may include inaccuracies: false positives, false negatives, incomplete interpretations, and fabricated details (hallucinations). Results can vary depending on available page content, website structure, and ambiguous legal wording.
6. Human oversight (human-in-the-loop)
ComplianceRadar is an assistive co-pilot for compliance review and is not a replacement for qualified legal advice. Users must manually review all AI-generated findings and recommendations before acting on them. For legal decisions, users should consult a certified lawyer or other appropriately qualified professional.
7. Prompt governance
We use structured system prompts to constrain model behavior to compliance-focused analysis and to request machine-readable, schema-conformant outputs. This prompt governance limits drift and keeps generated responses aligned with defined regulatory categories and reporting requirements.
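A minimal sketch of this pattern is shown below. The prompt text, field names, and severity levels are illustrative examples, not our production prompts or schema; they show how a fixed system prompt scopes the model and how outputs are validated against a defined shape before reaching users.

```python
# Fixed system prompt constraining the model to compliance analysis
# (example wording only).
SYSTEM_PROMPT = (
    "You are a compliance analysis assistant. Assess only the supplied "
    "public page content against the EU AI Act, GDPR, and ePrivacy rules. "
    "Respond with JSON matching the agreed report schema. Do not give "
    "legal advice or analyze anything outside these frameworks."
)

# Minimal report schema: required fields and an allowed severity set
# (hypothetical names).
REQUIRED_FIELDS = {"framework", "finding", "severity", "recommendation"}
ALLOWED_SEVERITIES = {"info", "warning", "critical"}


def conforms(item: dict) -> bool:
    """Reject model output items that drift from the report schema."""
    return (
        REQUIRED_FIELDS.issubset(item)
        and item["severity"] in ALLOWED_SEVERITIES
    )
```

In practice, a stricter validator (e.g. a full JSON Schema) would replace this field check, but the gate sits in the same place: between the model's raw response and the report shown to users.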
For related controls and data-handling context, see our Security and Privacy Policy.