The EU Artificial Intelligence Act is the world’s first comprehensive legal framework for artificial intelligence. Adopted in 2024, it introduces a risk-based approach to ensure AI systems used in the EU are safe, transparent and respect fundamental rights. Organizations that develop, deploy or use AI systems must now demonstrate compliance across governance, data, technical performance and post-market monitoring.
We support you at every stage of your EU AI Act journey. From early gap analysis and governance alignment to technical testing, conformity assessment and certification, our end-to-end services help you meet regulatory obligations, reduce risk and build confidence in responsible AI.
Understanding the EU AI Act risk categories
The EU AI Act classifies AI systems into four categories based on risk:
- Prohibited AI practices: including social scoring and manipulative techniques
- High-risk AI systems: such as medical devices, autonomous vehicles and credit scoring
- Limited-risk AI systems: subject to transparency obligations
- Minimal-risk AI systems: with no specific regulatory requirements
High-risk AI systems are subject to strict requirements, including conformity assessment, technical documentation, risk management, human oversight and post-market monitoring. Early preparation is essential to maintain EU market access and avoid penalties.

Discover the benefits of EU AI Act compliance services from SGS
- Navigate complexity with confidence
We help you understand which EU AI Act categories apply to your AI systems and what actions are required.
- Identify risks early
Our structured gap analyses highlight weaknesses across governance, data, models and processes.
- Prove trustworthy AI
Independent testing and certification demonstrate safety, transparency and accountability.
- Build internal capability
Training and advisory services equip your teams to manage compliance over time.
Our EU AI Act compliance and assurance services
- Gap analysis and readiness assessment
Identify gaps and prioritize actions to align with EU AI Act requirements, including:
- EU AI Act gap analysis, including prEN 18286 assessment
- ISO/IEC 42001 AI management system gap analysis
- ISO/IEC 5259-3 data quality gap analysis
- GDPR alignment for AI data flows
- SGS AI Trust Check interactive self-assessment
- Certification and conformity assessment
Demonstrate compliance through independent assessment and certification, including:
- ISO/IEC 42001 certification
- ISO/IEC 5259-3 certification
- EU AI Act conformity assessment aligned to harmonized standards
- Common specifications assessment, when applicable
- Technical AI product testing
Validate AI system performance, robustness and trustworthiness across model types and use cases.
We provide:
- Technical AI model and data test protocols, with an evaluation report containing reproducible metrics
- Evidence pack: traceability to requirements, results and identified gaps
- Corrective action recommendations
Testing includes:
- In-scope performance and known limitations (e.g. operating conditions and acceptance criteria)
- Robustness and stress testing (e.g. edge cases, perturbations and distribution shift)
- Large language model (LLM) security testing (e.g. prompt injection and jailbreaks)
- Privacy and data leakage testing (e.g. personally identifiable information (PII) leakage checks and inference-style risks)
- Bias and discrimination testing (e.g. subgroup performance and fairness indicators)
- Explainability testing (e.g. local and global explainability, timeliness and relevance)
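To illustrate what a bias and discrimination test can look like in practice, the sketch below compares accuracy across demographic subgroups and reports the largest gap. This is an illustrative example only: the data, subgroup labels and review threshold are hypothetical, and it does not represent an SGS test protocol.

```python
# Illustrative subgroup performance check (a simple fairness indicator).
# All data and the review threshold below are hypothetical.

def subgroup_accuracy(y_true, y_pred, groups):
    """Compute accuracy separately for each subgroup label."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        stats[g] = correct / len(idx)
    return stats

def max_accuracy_gap(stats):
    """Largest difference in accuracy between any two subgroups."""
    vals = list(stats.values())
    return max(vals) - min(vals)

# Hypothetical labels, predictions and subgroup membership
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

per_group = subgroup_accuracy(y_true, y_pred, groups)
gap = max_accuracy_gap(per_group)

# A gap above an agreed threshold (e.g. 0.1) would flag the system for review.
print(per_group, gap)
```

In a real assessment, such indicators would be computed on representative evaluation data and interpreted against use-case-specific acceptance criteria rather than a single fixed threshold.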
Supported AI model types:
- LLMs and chatbots
- Computer vision systems
- Recommendation engines
- Deep neural networks (DNNs)
AI trustworthiness pillars assessed:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability
- Training and advisory services
Build internal capability and operationalize compliance:
- EU AI Act: From Theory to Practice Implications in Industries
- AI assurance training, including governance and risk management courses
- Tailored EU AI Act compliance workshops
- AI governance technical advisory and enterprise AI risk frameworks

EU AI Act compliance services from a leader in digital trust
As the world’s leading testing, inspection and certification company, we are trusted globally to deliver independent assurance across complex regulatory frameworks. As a Notified Body under multiple EU regulations, including MDR, IVDR, RED and the Machinery Regulation, we provide robust, industry-focused EU AI Act compliance and assurance services.
Our services combine regulatory expertise, deep technical capabilities and a global network of digital trust specialists, enabling you to move from readiness to certification with confidence.
Frequently asked questions
Why is EU AI Act compliance important?
Compliance is both a legal and strategic requirement. Failure to comply can lead to substantial fines, product bans and loss of EU market access. Demonstrating compliance builds trust with regulators, customers and stakeholders while reinforcing leadership in responsible AI.
What is the EU AI Act implementation timeline?
- 2024: the EU AI Act enters into force
- 2025: prohibitions on certain AI systems and requirements on AI literacy take effect
- 2025: codes of practice for general-purpose AI are finalized, developed by industry with member state participation
- 2025: obligations for general-purpose AI (GPAI) models, governance rules and penalty provisions take effect
- 2026: obligations apply to most high-risk AI systems (Annex III)
- 2027: remaining high-risk AI systems (Annex I) and GPAI models placed on the market before August 2025 must comply
What are the compliance deadlines for high-risk AI systems?
The compliance deadlines differ depending on the type of AI system. The timeline is:
- Annex III (high-risk AI systems in sensitive-use areas): August 2, 2026
  - Covers AI systems used in sensitive sectors (e.g. biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, asylum, border control, justice and democratic processes)
- Annex I (high-risk AI in regulated products and safety-critical components): August 2, 2027
  - Covers AI systems that are regulated products (e.g. medical devices, machinery, toys and radio equipment) or safety components of such products and, therefore, require third-party conformity assessment under those regulations
What are the requirements for high-risk AI systems?
Requirements include risk management, technical documentation, human oversight, transparency and post-market monitoring. Organizations should prepare now to allow sufficient time for conformity assessment.
What are the penalties for non-compliance?
Penalties can reach up to EUR 35 million or 7% of global annual turnover, whichever is higher. Non-compliance may also result in restricted or prohibited access to the EU market.
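Under the EU AI Act, the ceiling for the most serious infringements is the higher of the two figures. A minimal sketch of that calculation, using hypothetical turnover figures:

```python
# Illustrative calculation of the maximum fine ceiling for the most serious
# EU AI Act infringements: the higher of EUR 35 million or 7% of worldwide
# annual turnover. Turnover figures used below are hypothetical.

FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07

def max_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the greater of the fixed cap or 7% of turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70 million) exceeds the cap.
print(max_fine_ceiling(1_000_000_000))  # 70000000.0
```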
What is ISO/IEC 42001 and how does it support EU AI Act compliance?
ISO/IEC 42001 is the first AI management system (AIMS) standard. It provides a structured framework for governance, risk management and accountability, supporting alignment with EU AI Act requirements and enabling efficient audits and global recognition.