EU Won’t Delay AI Law — Despite Pressure from Tech Giants

The European Union is making it clear: its new AI law is staying on track.

On Friday, the EU confirmed that it won't delay the rollout of its much-discussed AI legislation, even after pushback from more than 100 major tech companies, including Google's parent company Alphabet, Meta, Mistral AI, and Dutch chipmaker ASML.

Their message? Hit pause.
The EU’s response? Absolutely not.

“There is no stop the clock. There is no grace period. There is no pause,” said EU spokesperson Thomas Regnier, according to a Reuters report.

Tech Companies Wanted More Time

A growing number of tech companies have been asking the EU to slow down its rollout of the AI Act. They argue that the new rules — especially the ones targeting high-risk AI applications — could make it harder for Europe to keep up in the global AI race.

Their concern is that these laws could stifle innovation, especially as companies in the U.S. and China are charging full speed ahead with their AI efforts.

But the EU doesn’t agree. They say there’s no reason to wait.

A Quick Look at the AI Act

So, what exactly is the EU AI Act?

In simple terms, it’s a law designed to make sure artificial intelligence is used safely and ethically across Europe. Instead of treating every AI system the same, the Act splits them into categories based on how risky they are.

Let’s break it down:

Some AI Uses Are Totally Banned

These are the most dangerous types of AI, and they won’t be allowed in the EU at all. For example:

  • AI systems that try to manipulate people’s behavior
  • Social scoring tools (like assigning a person a “trust score”)

These are seen as threats to human rights and democracy.

High-Risk AI? You’ll Need to Follow Rules

AI used in sensitive areas like facial recognition, hiring, education, and law enforcement will still be allowed — but with strict oversight.

If you’re building or using high-risk AI systems, you’ll need to:

  • Register your tools with EU regulators
  • Prove your AI is safe and fair
  • Provide transparency, like showing how it makes decisions

Chatbots and Simple Tools Get a Lighter Touch

Not all AI is seen as dangerous. Some tools — like chatbots or AI writing assistants — are considered low-risk.

These will still need to follow some basic rules (like letting users know they’re talking to a bot), but they won’t be heavily regulated.

When Does It All Happen?

The AI Act is being rolled out in phases. Some rules are already being introduced. But the full set of requirements will kick in by mid-2026.

So companies still have time to prepare, just not an open-ended extension.

Why the EU Says It’s Time to Move Forward

For European lawmakers, the AI Act is about more than just regulation. It’s about building trust in technology.

They believe that having clear rules will help both businesses and the public. If people don’t feel safe using AI, it won’t matter how advanced the tools are — no one will want them.

The EU also wants to lead by example, showing that it’s possible to encourage innovation and protect people’s rights at the same time.

Final Word

While tech companies are asking for more time, the EU is holding firm.

They believe the future of AI isn’t just about speed — it’s about responsibility.

By sticking to the AI Act’s timeline, the EU hopes to shape a future where AI helps people, not harms them. And they’re not waiting around for approval.
