The European Union is making it clear: its new AI law is staying on track.
On Friday, the EU confirmed that it won’t be delaying the rollout of its much-talked-about AI legislation — even after getting pushback from more than 100 major tech companies, including Google’s parent company Alphabet, Meta, Mistral AI, and Dutch chipmaker ASML.
Their message? Hit pause.
The EU’s response? Absolutely not.
“There is no stop the clock. There is no grace period. There is no pause,” said EU spokesperson Thomas Regnier, according to a Reuters report.
A Quick Look at the AI Act
So, what exactly is the EU AI Act?
In simple terms, it’s a law designed to make sure artificial intelligence is used safely and ethically across Europe. Instead of treating every AI system the same, the Act splits them into categories based on how risky they are.
Let’s break it down:
Some AI Uses Are Totally Banned
These are the most dangerous types of AI, and they won’t be allowed in the EU at all. For example:
- AI systems that try to manipulate people’s behavior
- Social scoring tools (like assigning a person a “trust score”)
These are seen as threats to human rights and democracy.
High-Risk AI? You’ll Need to Follow the Rules
AI used in sensitive contexts such as facial recognition, hiring, education, and law enforcement will still be allowed, but only under strict oversight.
If you’re building or using high-risk AI systems, you’ll need to do a few things (sketched in code right after this list):
- Register your tools with EU regulators
- Prove your AI is safe and fair
- Provide transparency, like showing how it makes decisions
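To make those obligations concrete, here’s a loose sketch of the kind of record a provider might assemble before registering a high-risk system. To be clear, this is purely illustrative: the `RiskTier` categories and field names below are invented for this example, and there is no official EU schema or API behind them.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Illustrative tiers loosely mirroring the Act's categories;
    # not an official EU taxonomy.
    PROHIBITED = "prohibited"  # e.g. social scoring, behavioral manipulation
    HIGH = "high"              # e.g. hiring, education, law enforcement
    LIMITED = "limited"        # e.g. chatbots (transparency duties only)

@dataclass
class AISystemRecord:
    """Hypothetical compliance record a provider might keep internally."""
    name: str
    intended_purpose: str
    tier: RiskTier
    safety_evidence: list[str] = field(default_factory=list)  # test reports, audits
    decision_logic_summary: str = ""  # plain-language transparency note

    def ready_to_register(self) -> bool:
        # High-risk systems need safety evidence and a transparency summary
        # before they could plausibly be submitted to regulators.
        if self.tier is not RiskTier.HIGH:
            return True
        return bool(self.safety_evidence) and bool(self.decision_logic_summary)

# Example: a CV-screening tool falls in the high-risk tier.
screener = AISystemRecord(
    name="cv-screener-v2",
    intended_purpose="Rank job applications for human review",
    tier=RiskTier.HIGH,
    safety_evidence=["bias-audit-2025.pdf"],
    decision_logic_summary="Scores CVs against published criteria; humans make the final call.",
)
print(screener.ready_to_register())  # True
```

The point is less the code than the checklist it encodes: evidence that the system is safe, plus a plain-language account of how it makes decisions.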
Chatbots and Simple Tools Get a Lighter Touch
Not all AI is seen as dangerous. Some tools — like chatbots or AI writing assistants — are considered low-risk.
These will still need to follow some basic rules (like letting users know they’re talking to a bot), but they won’t be heavily regulated.
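For this lighter tier, the main duty is disclosure. Here’s a minimal, equally hypothetical sketch of how a chat service might surface that notice; nothing about the wording or format is mandated by the Act:

```python
# Hypothetical illustration of the "tell users they're talking to a bot" duty.
# The disclosure text and function names here are invented for this example.

BOT_DISCLOSURE = "You are chatting with an automated AI assistant."

def generate_answer(user_message: str) -> str:
    # Placeholder for a real model call; echoes the message for demonstration.
    return f"Thanks for your message: {user_message!r}"

def first_reply(user_message: str) -> str:
    # Prepend the disclosure to the opening response so users learn
    # up front that no human is on the other end.
    return f"{BOT_DISCLOSURE}\n\n{generate_answer(user_message)}"

print(first_reply("What are your opening hours?"))
```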
When Does It All Happen?
The AI Act is being rolled out in phases. Some rules, such as the bans on prohibited uses, are already in force, and the full set of requirements kicks in by mid-2026.
So companies still have time to prepare, but no option to push the deadlines back.
Why the EU Says It’s Time to Move Forward
For European lawmakers, the AI Act is about more than just regulation. It’s about building trust in technology.
They believe that having clear rules will help both businesses and the public. If people don’t feel safe using AI, it won’t matter how advanced the tools are — no one will want them.
The EU also wants to lead by example, showing that it’s possible to encourage innovation and protect people’s rights at the same time.
Alongside its refusal to pause, the EU has also issued fresh guidance for providers. Key highlights:
- Facial recognition AI must provide clear human oversight mechanisms
- Hiring tools must disclose selection criteria and undergo fairness assessments
- Educational AI systems must publish decision-making frameworks for grading or evaluation tools
- All high-risk providers must now register their systems by December 2025
Regnier reaffirmed in a press briefing that there will be no delays, exceptions, or grace periods unless technical impossibility is demonstrated: “Companies have had months to prepare. The line between innovation and exploitation cannot remain blurry any longer.”
Industry Reaction Remains Divided
Despite the new guidance, some tech companies continue to express concern.

Mistral AI issued a statement warning that the current pace of enforcement might lead to “market fragmentation” within Europe, especially for smaller startups lacking compliance infrastructure. However, civil rights organizations have applauded the EU’s move, calling it a “historic moment for human-centric tech governance.”
Meanwhile, Germany and France have announced the formation of national support bodies to assist local AI developers in aligning with the new law — especially those working in high-risk sectors like healthcare, education, and mobility.
What’s Next?
The AI Act remains on course for full enforcement by mid-2026, but the EU is accelerating parts of its plan:
- Public AI Registry (for high-risk systems) is set to go live in Q1 2026
- Official fines and penalties for non-compliance will begin July 2026
- Annual risk reports from AI vendors will be mandatory from 2027 onwards
Despite pressure from global tech leaders, the EU is doubling down on its belief that clear regulation fosters long-term innovation and safeguards fundamental rights.
Bottom Line
The European Union’s commitment to ethical AI governance is no longer just a promise; it’s unfolding in real time. With compliance guidelines in place and the first rules already in force, the AI Act is becoming today’s reality.
As Europe charges ahead, the rest of the world will be watching.
External Source
https://www.techpolicy.press/how-the-eus-voluntary-ai-code-is-testing-industry-and-regulators-alike