EU Softens Parts of Its AI Rulebook

The European Union has reached a provisional deal to delay some of the most consequential parts of its AI Act, pushing rules for high-risk systems involving biometrics, critical infrastructure and law enforcement from the original August 2026 deadline to December 2, 2027. The agreement still needs formal approval from EU governments and the European Parliament, but the direction of travel is already clear (Reuters, 2026).
The shift came after a sustained campaign from major European technology firms, whose chief executives had argued only days earlier that the bloc's AI rules were becoming too complex and too burdensome for companies trying to compete globally. That pressure appears to have landed: machinery will now sit outside the AI Act on the grounds that it is already covered by sectoral rules, a change companies such as Siemens and ASML had pushed for.
At the same time, the EU kept some of the more politically visible guardrails intact. Negotiators agreed to ban AI systems that create unauthorised sexually explicit images, including content tied to the spread of deepfake abuse, and they also backed mandatory watermarking for AI-generated output from December 2 this year.
The takeaway is fairly straightforward. Europe still wants to look serious about AI harms, but it is becoming more responsive to industry complaints about compliance costs and regulatory overlap. How that balance holds will shape the AI Act's credibility from here: if the law keeps adjusting every time commercial pressure rises, the case for Europe as the world's firmest AI regulator starts to look thinner.
