Brussels’ first comprehensive AI rulebook, a 144-page Regulation backed by fines of up to €35 million, has been overtaken by political pressure and missing standards before its main deadline, with Parliament and Council negotiators meeting today for the decisive trilogue on a sixteen-month delay.
When Roberta Metsola and Mathieu Michel signed Regulation (EU) 2024/1689 on 13 June 2024, they were putting their names to the most ambitious piece of technology law any major jurisdiction has ever produced. The European Union, having missed the smartphone, the cloud and the social network, had decided to lead the world in regulating what came next. The text ran to 144 pages, restructuring eight existing pieces of harmonisation legislation, and gave the European Commission and national authorities the power to fine an AI provider up to seven percent of its worldwide turnover, exceeding even the GDPR ceiling. It would, the argument went, be the Brussels Effect applied to artificial intelligence.
Twenty months later, the rulebook remains in regulatory purgatory. The harmonised standards that the Act assumes will translate its principles into engineering practice are not ready. Two-thirds of Member States have struggled to designate the national authorities meant to enforce it. The Commission missed its own February 2026 deadline for guidelines on the high-risk classification, and on 19 November 2025 it formally proposed delaying the most consequential parts of the Act, the obligations on high-risk AI systems, until December 2027. On 26 March 2026, the European Parliament adopted that delay by 569 votes to 45. At the time of publication, 28 April 2026, Council and Parliament negotiators are meeting for the political trilogue that is expected to lock in the new schedule.
In this deep dive, we will explore what the Act actually does, what it could not do, and why a regulation that was supposed to define the European model for AI governance is being rewritten before it has even been enforced.
The Architecture
The AI Act is built on a single organising idea: not all AI systems pose the same risk, so not all of them should face the same obligations. Article 1 sets out the Act’s purpose as ensuring “human-centric and trustworthy AI” while protecting health, safety, fundamental rights, democracy and the rule of law. Beneath that aspirational language sits a pragmatic four-tier structure that determines almost everything else about how the Regulation operates.
At the top of the pyramid sit the prohibitions in Article 5, practices the Union has decided are incompatible with its values and may not be placed on the market at all. Below that, Chapter III governs high-risk AI systems, which carry the bulk of the Act’s regulatory machinery: risk management systems, data-governance requirements, technical documentation, human oversight, conformity assessment and registration in an EU database. Below that again sit the transparency obligations of Article 50, which apply to chatbots, deepfakes and synthetic media regardless of risk classification. And at the bottom, untouched by the Regulation, sits the vast majority of AI systems (spam filters, video-game NPCs, recommender algorithms below the systemic threshold) which the Commission’s 2021 impact assessment estimated would account for 85 to 95 percent of the AI on the European market.
This architecture is the Act’s core innovation. Earlier proposals for AI regulation tended either toward sectoral rules or toward broad principles; the AI Act picks a middle path: a horizontal regulation, applicable across all sectors, but calibrated by use case rather than by underlying technology. The same large language model can be a minimal-risk system when it generates marketing copy and a high-risk system when it screens job applicants.
Cutting across all four tiers is a separate regime for general-purpose AI models, introduced late in the trilogue negotiations after ChatGPT’s release in November 2022 made the original 2021 proposal obsolete. Articles 51 to 55 govern foundation models directly, regardless of how they are eventually deployed, on the theory that the most consequential AI risks now sit upstream of any specific application.
A List With Holes
The eight categories of practice prohibited by Article 5 entered into application on 2 February 2025, six months after the Act’s entry into force. They are the most politically charged part of the Regulation: the provisions where the EU draws the line between AI that may exist on the European market and AI that may not.
Five of the prohibitions are uncontroversial: subliminal manipulation that causes significant harm; exploitation of vulnerabilities of children, the elderly or the economically marginalised; social scoring by public authorities; predictive policing based solely on profiling; and the untargeted scraping of facial images from the internet or CCTV to build face-recognition databases, a practice that made Clearview AI famous and which the Italian and French data-protection authorities had already sanctioned under the GDPR.
Three are more controversial, though. The Act prohibits AI systems that infer emotions of natural persons in workplaces or educational institutions, with a medical-and-safety exception. It prohibits biometric categorisation systems that deduce sensitive attributes, such as race, political opinions, trade-union membership or sexual orientation, from biometric data. And it prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement, but with three derogations: the targeted search for victims of abduction or trafficking; the prevention of a specific, substantial and imminent threat to life; and the localisation of suspects for offences punishable by at least four years’ imprisonment.
Each of those derogations was the product of intensive Council negotiation, and each has been the subject of sustained civil-society criticism. The European Digital Rights network argues that the law-enforcement exceptions create a backdoor that effectively negates the prohibition. The European Centre for Not-for-Profit Law (ECNL) has called the prohibitions “riddled with loopholes”, pointing in particular to the absence of a ban on emotion recognition or biometric categorisation outside of workplaces and schools, and to the carve-out for “labelling or filtering of lawfully acquired biometric datasets” in law-enforcement contexts. Article 2(3) goes further, removing from the Regulation’s scope altogether any AI system used “exclusively for military, defence or national security purposes”, a clause the Center for Democracy and Technology has described as a potential blanket exemption that any Member State could invoke to bring surveillance technologies outside the AI Act’s reach.
The European Parliament added one further prohibition in March 2026 as part of the Digital Omnibus negotiations, addressing a category that did not exist as a recognisable phenomenon when the Act was adopted: AI systems generating sexually explicit or intimate images of identifiable persons without their consent, the so-called nudifier apps. The amendment cleared the joint IMCO-LIBE vote with cross-party support, and Council aligned shortly afterwards. It will likely be the only substantive expansion of the prohibition list to survive the Omnibus process.
Where the Bureaucracy Lives
If the prohibitions are the politically visible part of the Act, the high-risk regime is where the regulatory weight actually sits. Article 6 defines two routes into the high-risk category. The first captures AI systems that function as safety components of products covered by existing Union harmonisation legislation, such as medical devices under Regulation (EU) 2017/745. The second is Annex III, a list of eight standalone areas where AI systems are presumptively high-risk regardless of any product-safety framework: biometrics, critical infrastructure, education and vocational training, employment, access to essential services, law enforcement, migration and border control, and the administration of justice and democratic processes.
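To make the two routes concrete, the sketch below models them as a simple decision function. It is deliberately schematic: the names, fields and area labels are ours, and real classification turns on legal analysis (including the Article 6(3) carve-out discussed later), not a boolean flag.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical labels for the eight Annex III areas; the real list is
# legal text, not an enumeration of strings.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border",
    "justice_democracy",
}

@dataclass
class AISystem:
    # Article 6(1) route: safety component of a product already covered
    # by Union harmonisation legislation
    regulated_safety_component: bool
    # Article 6(2) route: falls into one of the eight Annex III areas
    annex_iii_area: Optional[str] = None

def presumptively_high_risk(system: AISystem) -> bool:
    """True if either Article 6 route into the high-risk category applies."""
    if system.regulated_safety_component:
        return True
    return system.annex_iii_area in ANNEX_III_AREAS

# A CV-screening tool lands in the employment area, so it is presumptively high-risk:
print(presumptively_high_risk(AISystem(False, "employment")))  # True
```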
Once an AI system falls into the high-risk category, it acquires substantial obligations. Providers must establish a continuous risk management system across the system’s lifecycle (Article 9). They must apply rigorous data governance, ensuring training, validation and testing datasets are relevant, representative and, to the best extent possible, free of errors and complete (Article 10). They must also draw up technical documentation following the detailed template in Annex IV (Article 11), maintain automatic logging of system events (Article 12), provide instructions for use sufficient for deployers to comply with their own obligations (Article 13), design the system to allow effective human oversight (Article 14), and meet specified levels of accuracy, robustness and cybersecurity (Article 15).
Deployers, the entities that actually use a high-risk system, carry their own obligations under Article 26, including the duty to monitor the system’s operation, ensure human oversight by competent staff, and inform affected workers and consumers. For deployers that are public authorities or that provide essential services, Article 27 adds a fundamental rights impact assessment that must be completed before first use of any Annex III system.
The financial weight of these obligations is non-trivial. The Commission’s own 2021 impact assessment estimated that compliance costs for SMEs would land between six and seven thousand euros per high-risk system, scaling to between 180,000 and 420,000 euros for enterprises operating multiple high-risk systems. The Centre for European Policy Studies put the cost of setting up an entirely new quality-management system, where one does not already exist, at between 193,000 and 330,000 euros, with annual maintenance of around 71,000. Industry analysts tracking the 2026 implementation phase report that those numbers have held up: Big Four advisory fees for AI governance due diligence in mergers and acquisitions are now running between 80,000 and 250,000 euros per deal, and external fairness audits for Annex III systems range from 35,000 to 120,000 euros annually.
The penalty architecture in Article 99 supplies the enforcement teeth behind these obligations. Breaching the prohibitions in Article 5 carries the heaviest fine: up to 35 million euros or 7 percent of worldwide annual turnover, whichever is higher. Breaching the operator and notified-body obligations, the bulk of the high-risk regime, attracts up to 15 million euros or 3 percent. Supplying incorrect or misleading information to authorities reaches 7.5 million or 1 percent. For SMEs, the calculation flips: the upper bound is the lower of the two figures rather than the higher, a deliberate proportionality concession negotiated by the European Digital SME Alliance during the trilogue. Union institutions, which can also be sanctioned by the European Data Protection Supervisor under Article 100, are capped at 1.5 million.
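The cap arithmetic, including the SME flip, is simple enough to express directly. A minimal sketch, with turnover figures of our own invention:

```python
def fine_cap_eur(fixed_cap_eur: float, turnover_share: float,
                 worldwide_turnover_eur: float, is_sme: bool) -> float:
    """Article 99 cap: the higher of the two figures for most operators,
    flipped to the lower of the two for SMEs."""
    turnover_cap = worldwide_turnover_eur * turnover_share
    return min(fixed_cap_eur, turnover_cap) if is_sme else max(fixed_cap_eur, turnover_cap)

# Article 5 breach by a provider with EUR 200bn worldwide turnover:
print(fine_cap_eur(35e6, 0.07, 200e9, is_sme=False))  # 14000000000.0 -> EUR 14bn cap

# The same breach by an SME with EUR 10m turnover:
print(fine_cap_eur(35e6, 0.07, 10e6, is_sme=True))    # 700000.0 -> EUR 700k cap
```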
General-Purpose AI: The Threshold That Caught the Frontier
The general-purpose AI provisions are the part of the Regulation that most clearly bears the marks of having been written under pressure. The original 2021 Commission proposal contained no specific rules for foundation models. By the time the trilogue concluded in December 2023, ChatGPT had been live for thirteen months, GPT-4 had launched, and the political question of whether to regulate “foundation models” had become the single most contested issue in the negotiations. France, with Mistral AI freshly funded, pushed against ex-ante regulation. Germany and Italy initially aligned with the French position. The Spanish Council Presidency, holding the chair during the decisive months, brokered the compromise that became Articles 51 to 55.
The compromise rests on a numerical threshold that no other major jurisdiction has tried to use as a regulatory trigger. Article 51(2) provides that a general-purpose AI model is presumed to have “high impact capabilities”, and therefore to constitute a model with systemic risk, when the cumulative compute used for its training exceeds 10²⁵ floating-point operations. Below that line, providers face transparency and copyright obligations under Article 53: technical documentation kept up to date, downstream-provider information packages, a copyright policy that respects machine-readable opt-outs under Article 4(3) of the Copyright Directive, and a publicly available summary of training content following a template the AI Office published in July 2025. Above the line, Article 55 adds further obligations: state-of-the-art model evaluations, systemic-risk assessment and mitigation, serious incident reporting to the AI Office, and adequate cybersecurity for both models and physical infrastructure.
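To get a feel for where the line falls, the sketch below uses the common scaling-law approximation that dense-transformer training compute is roughly six times parameters times training tokens. The model sizes are illustrative round numbers, not disclosed figures for any real system:

```python
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # Article 51(2) presumption trigger

def estimated_training_flop(params: float, tokens: float) -> float:
    # Common approximation for dense transformers: C ≈ 6 * N * D
    return 6 * params * tokens

examples = {
    "70B params, 15T tokens": estimated_training_flop(70e9, 15e12),
    "1.8T params, 13T tokens": estimated_training_flop(1.8e12, 13e12),
}
for name, flop in examples.items():
    status = ("presumed systemic risk" if flop > SYSTEMIC_RISK_THRESHOLD_FLOP
              else "below the Article 51(2) line")
    print(f"{name}: {flop:.1e} FLOP -> {status}")
```

A 70-billion-parameter model trained on 15 trillion tokens comes out around 6×10²⁴ FLOP, under the line; a frontier-scale run an order of magnitude larger clears it comfortably.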
The threshold is, by any technical measure, arbitrary. The Center for Security and Emerging Technology at Georgetown notes that the choice of 10²⁵ FLOP captures roughly five to fifteen frontier providers worldwide, including OpenAI’s o3, Anthropic’s Claude 4 Opus, Google’s Gemini 2.5 Pro, xAI’s Grok 3, and a small number of comparable models. Berlin-based AI policy researchers argued during the negotiations that the threshold was too high to capture genuinely systemic models and that capability-based criteria, not raw compute, should drive the designation. The Act hedges in both directions: Article 52 allows the Commission to designate models above the threshold even when providers contest the designation, and to designate models below the threshold ex officio if a qualified alert from the Scientific Panel suggests equivalent capability.
What followed the Article 53 deadline of 2 August 2025 was the most consequential test of EU technology regulation since the GDPR. The Commission’s strategy was to convert the Act’s high-level obligations into concrete commitments through a voluntary Code of Practice, drafted through a multi-stakeholder process involving roughly a thousand participants and chaired by independent experts. Signing the Code grants providers a presumption of conformity with Articles 53 and 55, reducing administrative burden and securing collaborative engagement with the AI Office during the grace period until full enforcement on 2 August 2026.
By the time the Commission and AI Board endorsed the Code as adequate in early August 2025, twenty-six companies had signed the full document, including every major Western frontier-AI provider with one exception. Amazon, Anthropic, Google, IBM, Microsoft, Mistral AI and OpenAI had committed to all three chapters, Transparency, Copyright, and Safety and Security. So had a constellation of European specialists: Aleph Alpha, Almawave, Cohere, Fastweb, Pleias. xAI took the unusual step of signing only the Safety and Security Chapter, meaning Elon Musk’s company will need to demonstrate compliance with the transparency and copyright obligations through alternative means, a stance the AI Office has publicly noted will trigger heavier individual scrutiny. Major Chinese providers, like Alibaba, Baidu and DeepSeek, have not signed at all, though they continue to make their models available in European markets.
And then there was Meta. On 18 July 2025, Joel Kaplan, Meta’s Chief Global Affairs Officer, posted on LinkedIn that the company had “carefully reviewed” the Code and would not sign. Kaplan argued that the Code introduced “a number of legal uncertainties for model developers” and contained “measures which go far beyond the scope of the AI Act.” The specific complaints centred on the copyright chapter, which requires signatories to respect web-crawling opt-outs and avoid pirated content in training data, and on the transparency requirement to publish training-data summaries. Meta had been the first major lab to make its frontier model weights publicly available and the most aggressive industry voice against ex-ante regulation. Its refusal was the closest the AI Act has come to an open rupture between EU regulatory ambition and US industry pushback.
The Commission’s response to Meta has been measured. Non-signatories are not in breach of the AI Act, as the Code is voluntary, and providers may demonstrate compliance through alternative means. But the Commission’s GPAI Guidelines, published on 18 July 2025, are unusually frank about what those alternative means look like in practice: non-signatories will face a “larger number of requests for information” and more detailed substantive scrutiny by the AI Office. The structural choice, in other words, is between voluntary compliance with one transparency framework and mandatory case-by-case engagement with another. The legal certainty that Kaplan invoked as a reason not to sign was, on the Commission’s analysis, exactly what signing would deliver.
Behind the public posture, both sides have reasons to want the dispute managed quietly. Meta cannot afford a regulatory rupture with the world’s third-largest economic bloc, particularly with Digital Services Act fines of up to 6 percent of global turnover hanging over its consumer-facing services. Nor can the Commission afford to let a major US frontier-model provider demonstrate that the Code is a genuinely optional choice rather than the gravitational default. As of April 2026, there is still no sign of Meta having signed the Code.
Standards That Did Not Arrive On Time
Article 113 of the Regulation set out a deceptively clean schedule. Entry into force on 1 August 2024. Prohibitions and AI literacy obligations applicable from 2 February 2025. General-purpose AI obligations applicable from 2 August 2025. General application, including the high-risk regime, from 2 August 2026. High-risk obligations for AI embedded in regulated products from 2 August 2027. The text was binding, the dates were fixed, and the political signal was unambiguous: the Brussels Effect would arrive on schedule.
- 1 Aug 2024: entry into force, twenty days after publication
- 2 Feb 2025: prohibitions live (Article 5 bans, AI literacy)
- 2 Aug 2025: GPAI rules (Articles 53–55 apply)
- 2 Aug 2026: general application (high-risk obligations originally due)
- 2 Aug 2027: Annex I systems (AI in regulated products)
It did not survive contact with implementation. The harmonised standards required to operationalise the high-risk obligations are produced by CEN and CENELEC, the European standardisation committees, under a formal Commission mandate. Their Joint Technical Committee 21, charged with producing those standards, missed its autumn 2025 deadline. CEN-CENELEC communications now indicate that the full standards may not be available before December 2026, four months after the original general application date. Without harmonised standards, providers cannot benefit from the presumption of conformity that Article 40 grants to systems built to those standards; the alternative is demonstrating compliance directly against the Regulation’s own articles, a far more legally exposed position.
Member State implementation has been similarly slow. The 2 August 2025 deadline for designating national competent authorities, the market-surveillance bodies that will actually enforce the Act in each Member State, came and went with multiple Member States missing it. Hanane Taidi of the TIC Council, which represents independent conformity-assessment bodies, told Euronews that “many Member States missed the August 2025 deadline” and that the conformity-assessment infrastructure required to certify high-risk systems was not in place. The Commission itself missed its 2 February 2026 statutory deadline for guidelines on Article 6, the operative provision that determines whether an AI system is high-risk in the first place, and which the Act itself treated as a precondition for orderly application.
Against this backdrop, on 19 November 2025, the Commission published its proposal for a Digital Omnibus on AI, formally Proposal COM(2025) 836 amending Regulations (EU) 2024/1689 and (EU) 2018/1139. The Omnibus does not reopen the substantive obligations of the AI Act. It does something more limited and, depending on one’s perspective, more troubling: it postpones the application of the high-risk regime by up to sixteen months. Under the Commission’s original proposal, the high-risk obligations would apply only once the Commission confirmed that the supporting standards, common specifications and guidelines were available, with hard backstops of 2 December 2027 for stand-alone Annex III systems and 2 August 2028 for AI embedded in Annex I products.
The European Parliament and the Council both rejected the conditional mechanism in favour of fixed dates. On 13 March 2026 the Council adopted its general approach, aligning with Parliament on the December 2027 and August 2028 deadlines. On 18 March, the IMCO and LIBE committees adopted their joint report. On 26 March, the Parliament plenary endorsed the negotiating position by 569 votes in favour, 45 against, and 23 abstentions, a rare near-consensus that masked sharp disagreement on the substance. The Cypriot Council Presidency is targeting political agreement at the second trilogue meeting, scheduled for 28 April.
Defenders of the postponement, including Parliament co-rapporteur Arba Kokalari (EPP, Sweden), have argued that fixed deadlines provide “predictability and legal certainty” in the absence of the standards that were supposed to make compliance practical. Co-rapporteur Michael McNamara (Renew, Ireland), the Parliament’s lead Civil Liberties negotiator, told Tech Policy Press that the alternative (shifting AI governance into sectoral product laws) risked being “deregulatory rather than simplifying”. Critics have been less measured. Agustín Reyna, Director General of the European Consumer Organisation BEUC, said the proposal “can only be read as deregulation almost to the exclusive benefit of Big Tech”. Laura Caroli, an AI Act negotiator and former adviser to Parliament co-rapporteur Brando Benifei, told the IAPP that the delay “undermines confidence in the Act itself” and creates the impression that the Commission has prioritised the Omnibus over meeting the Act’s own deadlines.
The Corporate Europe Observatory and LobbyControl, in a joint report cited by Tech Policy Press, found that 69 percent of Commission meetings on AI policy in 2025 were with business groups and only 16 percent with civil society organisations. One Commission consultation on AI rules, the report noted, included only eleven or twelve participants, all from industry except for one civil-society organisation. The Omnibus, on this account, is less a technical correction than the codification of an industry preference.
The Loopholes Civil Society Cannot Forget
The Omnibus debate has revived attention to a set of gaps in the AI Act that civil-society organisations flagged during the original trilogue and that have not been closed since. Three of these are particularly consequential:
The first is the national-security exemption in Article 2(3). The provision is, on its face, a recognition of Member State competence under the Treaty on European Union: the EU does not have authority over national security, and any AI Act provision that purported to bind that competence would be vulnerable to challenge before the Court of Justice. In practice, the Center for Democracy and Technology argues, the exemption operates as a legal route around the Act’s most consequential restrictions. A law-enforcement authority that wishes to deploy real-time biometric identification beyond the narrow Article 5 derogations may, in principle, recharacterise the activity as serving national-security purposes and thereby remove it from the Regulation’s scope altogether. The European Digital Rights network notes that the same logic applies to AI systems used in counter-terrorism, intelligence gathering, and border surveillance.
The second is the dual standard for migration and law-enforcement authorities. Article 49 requires high-risk AI systems to be registered in a public EU database. But for systems used by law-enforcement and migration authorities, registration goes into a non-public section of the database. Affected persons, civil society and journalists have no way of knowing where these systems are deployed, against whom, or with what documented impact. The fundamental rights impact assessment under Article 27, which would otherwise be a transparency mechanism, does not have to be published in those contexts. The Platform for International Cooperation on Undocumented Migrants (PICUM) and the Access Now coalition have called this a codification of impunity.
The third is the high-risk classification carve-out in Article 6(3), introduced in the trilogue. A provider may self-assess that an AI system listed in Annex III “does not pose a significant risk of harm” if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing human judgement, or performs a preparatory task. Self-assessments must be documented and registered, but the determination is made by the provider, not by an independent body. Industry surveys cited by the AI Act tracker website found that of 113 EU AI start-ups surveyed, 33 percent believed their systems would be classified as high-risk, against the Commission’s own estimate of 5 to 15 percent. The gap is partly definitional and partly reflective of how broadly Article 6(3) can be read. ECNL has argued that the provision creates “significant scope for AI developers to argue their systems are not high risk”, undermining the protections the high-risk regime was designed to deliver.
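The open texture of those four conditions is easier to see written out as a checklist. A toy sketch, with condition labels of our own invention; the one hard rule, which the sketch preserves, is that Article 6(3) keeps any system performing profiling of natural persons in the high-risk category regardless of what the provider claims:

```python
# Hypothetical labels for the four Article 6(3) carve-out conditions
CARVE_OUT_CONDITIONS = {
    "narrow_procedural_task",
    "improves_completed_human_activity",
    "detects_patterns_without_replacing_judgement",
    "preparatory_task",
}

def may_self_assess_out(claimed_condition: str, performs_profiling: bool) -> bool:
    """Sketch of the Article 6(3) derogation. The determination is made
    by the provider, not an independent body, which is the critics' point."""
    if performs_profiling:
        return False  # profiling of natural persons always stays high-risk
    return claimed_condition in CARVE_OUT_CONDITIONS
```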
None of these issues is on the Omnibus negotiating table. The Parliament position adds the nudifier prohibition and clarifies the conditions for processing special categories of data for bias detection. It does not narrow the national-security exemption, expand transparency for law-enforcement uses, or tighten the Article 6(3) carve-out. The Center for Democracy and Technology Europe, in its formal feedback to negotiators on 16 April 2026, expressed particular concern that the Parliament’s proposal to move Annex I Section A under Section B (thereby moving high-risk AI systems covered by sectoral product-safety law largely outside the AI Act’s direct scope) would dilute the accountability mechanisms that deal specifically with risks to fundamental rights.
Key Takeaways
Where this leaves the AI Act, twenty months after enactment and on the day of the political trilogue, comes down to four observations.
First, the Brussels Effect on AI is real but uneven. Twenty-six major AI providers have signed the Code of Practice, including every leading Western frontier-model lab apart from Meta. The Code’s Safety and Security Chapter has effectively defined what state-of-the-art frontier-model risk management looks like; the Center for Security and Emerging Technology has called it “the best framework of its kind in the world.” That is the regulation-by-default outcome the Commission wanted. But the Effect has not extended uniformly: Chinese frontier providers continue to operate in Europe outside the Code, and Meta has demonstrated that voluntary signature is genuinely contestable. The Brussels Effect on this Regulation will be measured by how the AI Office handles non-signatories from August 2026 onwards and by whether the harmonised standards arrive in time to make the underlying obligations practically enforceable.
Second, the postponement is a structural concession, rather than a technical adjustment. The Commission’s explanation for the Omnibus is that the supporting standards and guidelines are not ready. That is true. But the deeper reason is that the Act was drafted on the assumption that European standardisation bodies, Member State enforcement infrastructure, Commission guidance documents and the AI Office’s technical capacity would all be in place by mid-2026. None of those assumptions held. The Omnibus formalises a sixteen-month grace period that was already, in practical terms, going to occur de facto. What the legislative postponement adds is the legal certainty that providers will not face enforcement during the grace period and the political signal that the European Commission of 2026, unlike the European Commission of 2022, sees regulatory burden as a competitiveness problem to be managed rather than an asset to be defended.
Third, the loopholes are now structural. The national-security exemption, the dual transparency standard for migration and law enforcement, and the Article 6(3) high-risk carve-out are not bugs in the AI Act, but features that civil society lost on, that Member States insisted on, and that the Omnibus has not reopened. Any narrowing of these provisions would now require a fresh legislative initiative against a Commission and Council majority that has explicitly framed the next eighteen months as a simplification phase. The fundamental-rights critique of the Act, valid on its merits, has lost the institutional moment in which it might have been translated into binding text.
Fourth, the AI Office is the institution to watch. Most of the consequential decisions about how the AI Act actually operates, like which models are designated as carrying systemic risk, what “alternative adequate means” will be accepted from non-signatories, or how the Code of Practice evolves as new frontier capabilities emerge, will be made not by the co-legislators but by the AI Office inside the Directorate-General for Communications Networks, Content and Technology. The Office is being built in real time against an enforcement deadline that has just been pushed back. Its technical capacity, its independence from Commission political instructions, and its willingness to test the Article 5 prohibitions in actual cases will determine whether the AI Act becomes the global benchmark its drafters intended or the European Union’s most expensive regulatory mistake.
What happens at today’s trilogue is, in the end, the easier question. With the Council and Parliament aligned on fixed dates, political agreement is overwhelmingly likely. Adoption before 2 August 2026 is on the schedule the Cypriot Presidency built. The harder question is whether the AI Act, as amended, can still produce the European model for AI governance that justified writing 144 pages and amending eight other regulations to get it. The first signal will be whether the harmonised standards arrive by December 2026. If they do not, the Omnibus will not be the last modification to the Act.
