Self-Assessment

Complete our questionnaire to identify compliance gaps and risks, backed by EU AI Act articles.

Each question ties to specific articles or annexes, ensuring trust and legal grounding.

Industry / Sector
Relevant to EU AI Act Title III, Chapter 1 (Art. 6–7): classification of high-risk AI systems by sector.
*Select one or more options
Company Size
Relevant to Recital 60 and the proportionality principle: the Act recognizes that compliance measures must be proportionate to company size and resources, though obligations still apply if the system is high-risk.
Who uses it?
Art. 29 (Obligations of Users of High-Risk AI Systems): different obligations apply depending on whether the system is internal-only or deployed to the public.
Does it process personal or sensitive data?
Art. 10 (Data and Data Governance) requires training, validation, and testing data to be relevant, representative, and free of errors.
Check which category your service falls under
*Select one or more options
Is human oversight built into key decisions?
Art. 14 (Human Oversight): high-risk AI systems must be designed to enable effective human oversight to prevent or minimize risks.
Do you obtain consent before contacting users?
Not directly covered by the AI Act, but GDPR (Regulation (EU) 2016/679) applies, requiring explicit consent for personal data collection and communication.
Do you actively monitor outputs for bias/errors?
Art. 61–62 (Post-Market Monitoring and Corrective Actions): providers must implement monitoring systems and address performance drift, bias, or safety issues after deployment.
Does your company understand the penalties and enforcement measures for non-compliance with the EU AI Act?
From 2 August 2025, Article 99 imposes fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most severe violations.
Your Company Website
This email address will be used to send you the results of your self-assessment.