
EU AI Act “Omnibus” Update: More Time to Prepare – A Practical Perspective

May 13, 2026

The EU AI Act's mandatory compliance deadline for high-risk AI systems has been pushed back from August 2026 under the provisionally agreed EU Digital Omnibus.

The Council presidency and the European Parliament have provisionally agreed targeted amendments to the EU AI Act, primarily delaying certain requirements until December 2, 2027, and others until August 2, 2028. This gives organizations developing or deploying AI systems more time and clarity, while reinforcing the need to continue preparing in a structured way.

What has changed?

The Omnibus package aims to simplify selected aspects of the EU AI Act while maintaining its core principles and risk-based structure. Under the provisional agreement, the key updates are described below.

Extended timelines for high-risk AI systems

The high-risk AI requirements have been postponed as follows:

  • Stand-alone high-risk AI – December 2, 2027
  • High-risk AI embedded in EU-regulated products – August 2, 2028

This does not remove the need to implement the underlying requirements.

Reduced overlap with sectoral legislation

The agreement seeks to reduce duplication between the EU AI Act and existing EU product safety legislation.

For product sectors already governed by EU harmonization legislation, the objective is to avoid situations where manufacturers must comply with overlapping or duplicative requirements under the EU AI Act and sector-specific regulatory frameworks.

Machinery receives specific treatment under the provisional agreement. AI systems that fall within the scope of the Machinery Regulation are expected to be governed primarily through that sectoral framework. In areas of overlap, AI-related health and safety requirements are addressed within the Machinery Regulation rather than under the EU AI Act.

For other regulated product sectors, such as medical devices, toys, lifts or watercraft, the agreement does not fully exclude the EU AI Act. Instead, overlap is expected to be addressed through targeted mechanisms, implementing acts and Commission guidance.

Refined scope of high-risk classification

Systems that support performance without a safety impact may no longer be automatically classified as high-risk.

New prohibition on certain AI use cases

AI systems generating non-consensual intimate content are explicitly banned.

The overall intention is to reduce legal uncertainty and give organizations time to prepare, without weakening the core requirements of the EU AI Act.

Harmonized standards are gaining maturity

An important driver behind the Omnibus timeline extension is the ongoing development of harmonized AI standards.

While most standards are still in the draft phase, their maturity is advancing. The quality management system standard (prEN 18286) has reached the approval stage. Key frameworks, such as AI risk management (prEN 18228) and AI cybersecurity (prEN 18282), have entered public enquiry, meaning that requirements are stabilizing through stakeholder consultation.

This evolution confirms that the delay is not a slowdown, but a pragmatic alignment between regulation and technical standards, giving organizations the opportunity to prepare using increasingly mature and structured guidance.

At this stage, organizations can already benefit from targeted gap analysis against these emerging standards, helping to structure preparation and identify priority actions early.

What has not changed?

While timelines have shifted, the EU AI Act fundamentals remain intact:

  • The risk-based classification model still applies
  • High-risk AI systems require robust governance and documentation
  • Conformity assessment remains mandatory before placing such systems on the EU market
  • Accountability and evidence of compliance are still central requirements

Sectoral regulation: clarification rather than full alignment

An important element of the Omnibus package concerns the interaction between the EU AI Act and existing EU sectoral legislation (e.g. machinery, medical devices or other laws under the New Legislative Framework).

The agreement introduces a targeted “sector exit” approach:

  • The Machinery Regulation will be transferred to Annex I Section B, meaning it will primarily follow sector-specific conformity frameworks, supported by AI-related health and safety requirements
  • The European Commission is expected to adopt delegated acts under the Machinery Regulation to integrate EU AI Act requirements, with application foreseen by August 2028
  • This approach is limited in scope: while earlier proposals suggested moving all sectoral laws, only machinery is affected at this stage
  • For the remaining legislation listed in Annex I Section A, the Commission has until August 2027 to adopt delegated acts addressing potential overlaps with the EU AI Act

This aligns with the broader Omnibus objective of reducing duplication while maintaining equivalent safety and compliance.

From a practical standpoint, organizations in regulated sectors should expect greater integration between AI requirements and existing product compliance processes, rather than a separate framework.

Why early preparation still matters

The additional time is not a reason to postpone action. Preparing for EU AI Act compliance, particularly for high-risk systems, requires:

  • Mapping and classification of AI systems
  • Establishing governance structures and roles
  • Implementing risk management and control processes
  • Building technical documentation and life cycle monitoring
  • Preparing for conformity assessment and, where relevant, certification

These activities often require cross-functional coordination (legal, technical, risk, quality), as well as alignment with existing standards and regulations.

Organizations that start early are better positioned to:

  • Avoid last-minute compliance challenges
  • Integrate EU AI Act requirements into existing management systems
  • Demonstrate trustworthiness to clients, partners and regulators

In this context, established frameworks, such as ISO/IEC 42001 for AI management systems or ISO/IEC 5259-3 for data quality management systems, provide a structured foundation to integrate EU AI Act requirements into existing governance and compliance processes.

Using the extra time effectively

The extended timeline offers an opportunity to move beyond reactive compliance and adopt a more structured approach. Many organizations are using this period to:

1. Build awareness and internal capability

Understanding the EU AI Act across teams is a critical first step. Training and internal alignment help ensure that AI governance is embedded across the organization – not limited to compliance functions.

2. Identify gaps early

A structured gap analysis highlights differences between current practices and EU AI Act requirements, including governance, data quality, model performance and documentation.

3. Develop a roadmap

With clearer timelines, organizations can plan phased implementation aligned with product development cycles, regulatory milestones and business priorities.

4. Validate AI systems

Testing and validating AI systems – covering robustness, bias, performance and reliability – is increasingly important for demonstrating compliance and building trust.

A supporting role for independent assurance

Independent third-party support can play a valuable role during this preparation phase. SGS DIGITAL TRUST services cover:

  • Training to build an understanding of the EU AI Act and related standards
  • Gap analysis and readiness reviews to identify priority areas
  • Advisory support for governance frameworks and documentation
  • Technical testing and validation services for AI systems

Our EU AI Act Compliance and Assurance portfolio covers the above and more. Our interactive self-assessment tool, AI Trust Check, will quickly evaluate your current maturity against key EU AI Act principles. It is a practical starting point for identifying gaps, prioritizing actions and structuring preparation efforts in a clear and measurable way.

For further information, please contact:

Michal Cichocki


Global Product Manager – AI Assurance Services

About SGS

SGS is the world’s leading Testing, Inspection and Certification company. We operate a network of over 2,500 laboratories and business facilities across 115 countries, supported by a team of over 100,000 dedicated professionals. With more than 145 years of service excellence, we combine the precision and accuracy that define Swiss companies to help organizations achieve the highest standards of quality, compliance and sustainability.

Our brand promise – when you need to be sure – underscores our commitment to trust, integrity and reliability, enabling businesses to thrive with confidence. We proudly deliver our expert services through the SGS name and a portfolio of trusted specialized brands, including Applied Technical Services, Brightsight, Bluesign and Nutrasource.

SGS is publicly traded on the SIX Swiss Exchange under the ticker symbol SGSN (ISIN CH1256740924, Reuters SGSN.S, Bloomberg SGSN SW).
