If you are building AI systems for the European market, you just got a little room to breathe—but also strict new rules to watch out for.
On March 26, 2026, the European Parliament adopted its position on a simplified ("omnibus") proposal amending the original EU AI Act. The proposal passed with a decisive 569 votes in favor, 45 against, and 23 abstentions.
The biggest news for the tech sector? The Parliament has officially delayed the application of certain rules for High-Risk AI systems.
Here is a detailed breakdown of exactly what changed, the new fixed deadlines you need to track, and how this impacts your technical compliance roadmap.
⏳ Why is the EU Delaying the AI Act for High-Risk Systems?
Regulators have recognized that both the tech industry and the enforcement bodies need more time for proper technical preparation.
Members of the European Parliament (MEPs) delayed the rules for High-Risk AI systems to ensure that the necessary guidelines and technical standards—which companies need to actually implement the law—are fully ready. By passing these amendments, MEPs have introduced fixed application dates to guarantee legal certainty and market predictability.
Furthermore, to prevent regulatory overlap, the Parliament clarified that AI Act obligations can be less stringent for products that are already heavily regulated by existing EU sectoral laws (such as medical devices, radio equipment, and children's toys).
Primary source
This article is based on the European Parliament press release on the adopted position and next steps: "Artificial Intelligence Act: delayed application, ban on nudifier apps."
📅 The New Timelines: What Changes for High-Risk AI?
Instead of relying on vague, general transition periods, the EU has now introduced highly precise, fixed dates. If your AI system falls into a High-Risk category, here is when the new rules become strictly mandatory:
2 December 2027
This is the new deadline for High-Risk AI systems explicitly listed in the regulation. This covers systems utilizing biometrics, as well as AI deployed in critical infrastructure, education, employment, essential private and public services, law enforcement, justice, and border management.
2 August 2028
This extended deadline applies specifically to AI systems that are already covered by existing EU sectoral legislation regarding safety and market surveillance.
🏷️ The Watermarking Deadline (Article 50)
If your application generates text, audio, images, or video, the transparency rules under Article 50 remain a critical priority.
MEPs are pushing to give providers of generative AI systems until 2 November 2026 to fully comply with the "watermarking" rules. The explicit goal of this mandate is to ensure that AI-generated visual, audio, or textual content clearly indicates its artificial origin to the end user.
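In practice, Article 50 compliance means every generated artifact carries a machine-readable marker of its artificial origin. The sketch below illustrates the idea only: it wraps generated text in a provenance record with a hypothetical `model_id` field. The real marking formats (C2PA manifests, statistical watermarks) are defined by technical standards, not by this snippet.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_ai_output(content: str, model_id: str) -> dict:
    """Attach a machine-readable provenance record to generated text.

    Minimal illustration of the Article 50 transparency idea; the
    field names here are assumptions, not a standardized schema.
    """
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,               # the core disclosure flag
            "model_id": model_id,               # hypothetical identifier
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # Hash lets a downstream verifier detect tampering with the content.
            "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        },
    }

record = label_ai_output("A generated product description.", "example-model-v1")
print(json.dumps(record["provenance"], indent=2))
```

Serializing the record as JSON alongside (or inside) the delivered asset is one plausible transport; image and audio pipelines would embed equivalent metadata in the file container instead.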
🚫 The Ban on "Nudifier" Apps: What it Means for Developers
The sharpest new addition to the legislation targets generative models focused on image manipulation.
The European Parliament is introducing a total ban on so-called "nudifier" systems. These are AI systems used to create or manipulate intimate and sexually explicit images that resemble a recognizable, real person without their prior, explicit consent.
The Technical Exception (Guardrails): There is one critical exception that gives engineering teams a clear mandate. The ban does not apply to AI systems with effective technical safeguards that prevent users from generating such explicit images in the first place. Therefore, if you are building an AI image generator, implementing strict, robust safety filters is no longer just a best practice—it is now the only way to keep your tool legal within the EU.
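What might such a guardrail look like at its simplest? The toy sketch below refuses a generation request when explicit-manipulation intent is combined with a real person's reference image. The blocklist terms, the `GenerationRequest` fields, and the keyword approach are all illustrative assumptions; production systems layer trained safety classifiers over both the prompt and the generated output rather than matching keywords.

```python
from dataclasses import dataclass

# Illustrative blocklist only; real systems use trained classifiers.
BLOCKED_TERMS = {"undress", "nudify", "remove clothing"}

@dataclass
class GenerationRequest:
    prompt: str
    uses_real_person_image: bool  # e.g. detected via a face-match check

def is_request_allowed(req: GenerationRequest) -> bool:
    """Refuse requests combining a real person's image with explicit
    manipulation intent; allow everything else through."""
    explicit_intent = any(term in req.prompt.lower() for term in BLOCKED_TERMS)
    return not (explicit_intent and req.uses_real_person_image)

print(is_request_allowed(GenerationRequest("a mountain at sunset", False)))
```

The key design point for the legal exception is that the check runs before generation and cannot be toggled off by the end user.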
⚖️ What This Means for Your Compliance Strategy
A delay in the High-Risk enforcement deadlines is fantastic news for startups and enterprise development teams, but it is not a free pass to ignore the law.
The technical adaptation process—drafting Annex IV documentation, implementing "human-in-the-loop" oversight, and validating risk classifications—takes months to execute correctly. The legal teams and tech companies that use this extra time to build a clear compliance architecture without the pressure of an immediate deadline will have a massive competitive advantage during procurement and audits.
Ready to simulate your technical audit?
The laws may be shifting, but the requirement for a documented, transparent AI architecture remains exactly the same. You do not have to wait for the 2027 deadlines to find out where your system is legally or technically vulnerable.
Test your infrastructure today. Generate a structured gap analysis and find out exactly which risk category your tool falls into under these updated rules.
Sources and further reading
- European Parliament: Artificial Intelligence Act — delayed application, ban on nudifier apps
- The EU AI Act Audit Process: A Step-by-Step Guide
This article is informational and does not constitute legal advice.

