FAQs on the EU AI Act and Certification Timeline

What is the purpose of the EU AI Act?
The EU AI Act establishes a framework to regulate AI systems based on their risk level. Its goal is to ensure AI systems are safe, transparent, and respect fundamental rights, while fostering innovation.
What are the main categories of AI systems under the Act?
The Act classifies AI systems into four categories:
- **Unacceptable Risk**: Prohibited AI practices that violate fundamental rights, such as social scoring or manipulative techniques that exploit vulnerabilities.
- **High Risk**: Systems requiring strict compliance, including healthcare and critical infrastructure AI.
- **Limited Risk**: Systems requiring transparency, such as chatbots.
- **Minimal Risk**: Systems with minimal or no risk, such as spam filters.
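
As a rough illustration only (not a substitute for legal analysis), the four tiers can be encoded for a first-pass triage of an AI system inventory. The keyword rules below are hypothetical; an actual classification must follow the Act's Annex III and prohibited-practice provisions.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict conformity requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no mandatory obligations

# Hypothetical keyword map for first-pass triage; checked in
# order of decreasing severity (dicts preserve insertion order).
TRIAGE_RULES = {
    RiskTier.UNACCEPTABLE: {"social scoring", "subliminal manipulation"},
    RiskTier.HIGH: {"medical diagnosis", "critical infrastructure", "recruitment"},
    RiskTier.LIMITED: {"chatbot", "deepfake"},
}

def triage(use_case: str) -> RiskTier:
    """Return a provisional risk tier for a described use case."""
    description = use_case.lower()
    for tier, keywords in TRIAGE_RULES.items():
        if any(keyword in description for keyword in keywords):
            return tier
    return RiskTier.MINIMAL

print(triage("customer-support chatbot"))       # RiskTier.LIMITED
print(triage("AI-assisted medical diagnosis"))  # RiskTier.HIGH
```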
What is the certification process under the EU AI Act?
Certification ensures that high-risk AI systems meet the regulatory requirements. The process includes:
- **Conformity Assessment**: Verification that the AI system complies with requirements like data quality, transparency, and risk management.
- **Engagement with Notified Bodies**: Independent organizations authorized by EU Member States assess conformity.
- **Self-Assessment**: For certain high-risk systems, providers may carry out an internal conformity assessment instead of involving a notified body.
Who oversees the certification process?
The certification process is overseen by:
- **Notified Bodies**: Independent third parties designated by Member States.
- **Market Surveillance Authorities**: Ensure ongoing compliance post-certification.
What documentation is required for certification?
High-risk AI providers must prepare the following:
- Technical documentation outlining system design and functionality.
- Risk management systems documenting how risks are identified and mitigated.
- Data quality and governance procedures.
- Transparency documentation detailing user information provision.
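
For teams tracking readiness, this documentation package can be modeled as a simple checklist. A minimal sketch, with field names that are illustrative rather than terms from the Act:

```python
from dataclasses import dataclass, fields

@dataclass
class CertificationDossier:
    """Tracks whether each required document set is complete."""
    technical_documentation: bool = False     # system design and functionality
    risk_management_system: bool = False      # risk identification and mitigation
    data_governance_procedures: bool = False  # data quality and governance
    transparency_documentation: bool = False  # user information provision

    def missing(self) -> list[str]:
        """Names of the document sets still outstanding."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

dossier = CertificationDossier(technical_documentation=True)
print(dossier.missing())
# ['risk_management_system', 'data_governance_procedures', 'transparency_documentation']
```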
What is the role of risk management in certification?
Risk management ensures that risks posed by high-risk AI systems are identified, evaluated, and mitigated throughout the system's lifecycle. This includes:
- Performing hazard analysis.
- Testing robustness and cybersecurity.
- Documenting mitigation strategies.
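
One common way to organize this work is a lifecycle risk register scored by severity and likelihood. The schema and scoring below are a sketch; the Act does not mandate any particular scoring scheme:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a lifecycle risk register (illustrative schema)."""
    hazard: str
    severity: int    # 1 (negligible) .. 5 (critical)
    likelihood: int  # 1 (rare) .. 5 (frequent)
    mitigation: str

    @property
    def score(self) -> int:
        # Classic severity-by-likelihood risk matrix.
        return self.severity * self.likelihood

register = [
    Risk("biased training data", severity=4, likelihood=3,
         mitigation="dataset audits and rebalancing"),
    Risk("adversarial input evasion", severity=3, likelihood=2,
         mitigation="robustness testing and input validation"),
]
# Review the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.hazard}: {risk.mitigation}")
```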
Are there penalties for non-compliance with certification requirements?
Yes, penalties include:
- Fines up to €35 million or 7% of global annual turnover for prohibited practices.
- Fines up to €15 million or 3% for other certification violations.
- Fines up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information to authorities or notified bodies.
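
For undertakings, each cap applies as the higher of the fixed amount and the turnover percentage, so the ceiling scales with company size. A worked example with a hypothetical turnover figure:

```python
def maximum_fine(fixed_cap_eur: float, turnover_share: float,
                 global_turnover_eur: float) -> float:
    """Applicable maximum fine: the higher of the fixed cap and
    the share of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

turnover = 2_000_000_000  # hypothetical €2bn global annual turnover
print(maximum_fine(35e6, 0.07, turnover))  # 140000000.0 — prohibited practices
print(maximum_fine(15e6, 0.03, turnover))  # 60000000.0  — other violations
```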
What is the European Centre for Certification and Privacy (ECCP)?
The ECCP is an organization in Luxembourg that supports certification and compliance efforts under EU regulations. It provides resources and guidance for AI providers navigating certification processes.
How can organizations prepare for certification under the AI Act?
Organizations should:
- Conduct a risk assessment to classify their AI systems.
- Prepare the necessary technical documentation and risk management plans.
- Engage with notified bodies or prepare for self-assessment.
- Monitor updates to the EU AI Act and related certification standards.
What is the timeline for certification under the EU AI Act?
The timeline for certification involves several stages, depending on the complexity of the AI system:
- **Preparation Phase** (6–12 months): Organizations prepare technical documentation, risk assessments, and ensure compliance with the requirements outlined in the AI Act.
- **Initial Review** (1–3 months): Engage with notified bodies or conduct a self-assessment to evaluate compliance.
- **Conformity Assessment** (3–6 months): Notified bodies perform a detailed review of the AI system's design, risk management, and technical documentation.
- **Certification Issuance** (1–2 months): Upon successful assessment, the notified body issues the certification.

The entire process can take approximately 12–24 months, depending on the system's complexity and the organization's preparedness.
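
Summing the phase ranges above gives the end-to-end window that this estimate rounds to, as the short calculation below shows:

```python
# Phase durations in months, taken from the ranges above.
PHASES = {
    "preparation": (6, 12),
    "initial review": (1, 3),
    "conformity assessment": (3, 6),
    "certification issuance": (1, 2),
}

fastest = sum(lo for lo, _ in PHASES.values())  # 11 months
slowest = sum(hi for _, hi in PHASES.values())  # 23 months
print(f"End-to-end: {fastest}-{slowest} months")
```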
Does the alliance use any proprietary data in its AI systems?
No, the alliance does not use any proprietary data in its AI systems. All data utilized is sourced from publicly available datasets or datasets provided with proper consent and compliance with GDPR standards. This ensures transparency, ethical AI development, and adherence to data privacy regulations.
Why are AI benchmarks important in high-risk applications?
AI benchmarks are essential for quantifying model performance, especially in high-risk applications that require compliance with the EU AI Act. They provide standardized metrics for assessing accuracy, explainability, and reliability, ensuring alignment with regulatory expectations.
How does EAI maintain independence in benchmarking?
As an independent entity, EAI conducts unbiased benchmarking to support transparency and ethical AI development. This fosters public trust by ensuring that AI systems adhere to safety and performance standards.
How do AI benchmarks support compliance and risk mitigation?
Benchmarks act as a compliance framework, enabling organizations to identify risks, uphold regulatory standards, and align with the EU AI Act. For data scientists, consistent benchmarking promotes continuous model improvement, reduces risks, and ensures safe, compliant AI applications.
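
At its core, a benchmark pairs fixed inputs with expected outputs and reports a standardized score that can be re-run as models change. A minimal accuracy harness might look like the sketch below; the cases and baseline model are purely illustrative:

```python
from typing import Callable, Sequence

def accuracy(model: Callable[[str], str],
             cases: Sequence[tuple[str, str]]) -> float:
    """Fraction of benchmark cases the model answers correctly."""
    correct = sum(1 for prompt, expected in cases if model(prompt) == expected)
    return correct / len(cases)

# Hypothetical benchmark: prompts paired with expected answers.
benchmark = [("2+2", "4"), ("capital of France", "Paris")]

def baseline(prompt: str) -> str:
    return "4" if "+" in prompt else "Paris"

print(f"accuracy = {accuracy(baseline, benchmark):.2f}")  # accuracy = 1.00
```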
When does the EU AI Act come into effect?
The EU AI Act entered into force on 1 August 2024. Its provisions now apply in stages over an implementation period of two to three years, with different parts of the Act taking effect on different dates. Our implementation timeline provides an overview of all key dates relating to the Act's implementation.
During the implementation period, the European standards bodies are expected to develop standards for the AI Act.
Can I voluntarily comply with the EU AI Act even if my system is not in scope?
We encourage voluntary codes of conduct covering the requirements for high-risk systems in Chapter III, Section 2 (e.g. risk management, data governance, and human oversight) for AI systems not deemed to be high-risk. These codes of conduct provide technical guidance on how an AI system can meet the requirements, according to the system's intended purpose. They may also address other objectives such as environmental sustainability, accessibility, stakeholder participation, and diversity of development teams. Small businesses and startups will be taken into account when codes of conduct are encouraged. See Article 95 on codes of conduct for voluntary application of specific requirements.