A new federal proposal could stop states from regulating AI for the next decade. The controversial measure, backed by some major tech leaders, is being pushed into a larger GOP megabill that could be voted on as early as this weekend.
If passed, the measure would override current and future state-level AI laws, raising serious concerns about oversight, consumer protections, and government accountability.
What Is the AI Moratorium?
The AI moratorium is a provision added to a massive budget bill, informally known as the “Big Beautiful Bill.” For the next 10 years, it would block any state or local government from:
- Passing new laws that regulate AI systems, models, or automated decision tools
- Enforcing existing laws of that kind
It was quietly added in May and is now under intense debate in the Senate.
Who Supports the Moratorium — and Why?
Supporters include:
- Senator Ted Cruz (R-TX), who introduced the measure
- OpenAI CEO Sam Altman
- Anduril’s Palmer Luckey
- VC Marc Andreessen (a16z)
They argue that allowing each state to make its own AI rules would create a messy “patchwork” of laws. This, they say, could slow innovation and hurt the U.S. in its AI race against countries like China.
“A patchwork across the states would probably be a real mess,” Altman said during a recent podcast.
They believe federal-level regulation — if done properly — would be more effective and easier for companies to follow.
Who’s Opposing the Bill — and Why?
Opponents span the political spectrum and include:
- Democrats and some Republicans
- AI safety experts like Anthropic CEO Dario Amodei
- Consumer protection groups and labor organizations
- Tech accountability nonprofits
They argue the moratorium would leave consumers unprotected, block transparency laws, and give AI companies a free pass to operate without meaningful oversight.
“This isn’t about innovation — it’s about avoiding accountability,” said Emily Peterson-Cassin of Demand Progress.
What Would the Moratorium Affect?
The moratorium could override state laws like:
- California’s AB 2013, which requires companies to disclose the training data behind generative AI systems
- Tennessee’s ELVIS Act, which protects musicians from AI-generated impersonations
- Election protection laws in Texas, Arizona, Montana, and other states that target deepfakes
- Pending safety bills like New York’s RAISE Act, which would require large AI labs to publish safety reports
Even though many of these laws are narrow and specific, they are designed to safeguard voters, workers, and consumers from AI harms.
The Broadband Funding Loophole
To give the moratorium teeth, Cruz tied it to federal broadband funding.
In his proposal, states that don’t comply would risk losing money from the $42 billion Broadband Equity, Access, and Deployment (BEAD) program.
Later, Cruz tweaked the language to say it only applies to a new $500 million pot of BEAD funds, but a close reading shows existing broadband money could also be at risk.
Senator Maria Cantwell called this a “false choice” for states — between protecting citizens from AI harms and expanding internet access.
What’s Happening in Congress Now?
As of now, the moratorium has survived a procedural review by the Senate parliamentarian, meaning it remains part of the bill.
However, due to backlash, negotiations have reopened. Senators are expected to debate amendments to the budget this week, including one that could remove the moratorium entirely.
A flurry of votes — known as a “vote-a-rama” — is expected in the coming days.
What Are AI Companies Saying?
OpenAI’s Chris Lehane posted that a federal ban on state laws is needed to “unify regulation” and boost U.S. dominance in AI.
Altman agreed that adaptive national regulations would be better than state-level chaos.
But when asked, OpenAI and others didn’t name a single state law that has actually blocked their ability to innovate or release AI tools.
Critics say this shows the moratorium isn’t about real obstacles — it’s about avoiding oversight.
Critics From Both Parties Speak Out
Some Republicans are breaking ranks over this issue:
- Senator Josh Hawley (R-MO) is working with Democrats to strip the moratorium
- Senator Marsha Blackburn (R-TN) says states need to protect local creative industries
- Rep. Marjorie Taylor Greene (R-GA) threatened to vote against the entire budget if the moratorium stays
This clash highlights a growing rift within the GOP over federal power versus states’ rights — a traditionally conservative value.
What Do Americans Think?
Polls show the public wants more regulation, not less:
- About 60% of U.S. adults are worried the government won’t regulate AI enough
- Only a small group thinks the government is doing too much
- Americans also don’t trust tech companies to self-regulate AI responsibly
In short, most people don’t want AI running wild — they want guardrails.
Does the “Patchwork Problem” Really Exist?
Critics of the moratorium say the “patchwork” excuse is overused and misleading.
“Big companies already follow different rules in different states all the time,” said Peterson-Cassin.
Examples include:
- Privacy laws like California’s CCPA
- Labor laws and minimum wage differences
- Environmental and safety regulations
These differences haven’t stopped innovation — so why, critics ask, should AI be treated any differently?
Key Takeaways
- A federal proposal would block state-level AI laws for 10 years
- Backers say it would simplify rules and keep U.S. AI development competitive with China
- Critics warn it removes protections, limits state authority, and helps big tech dodge oversight
- Opposition is bipartisan, and debate in the Senate is heating up this week
- Most Americans want more AI regulation, not a freeze on state action
Final Thoughts
The fight over the AI moratorium is more than a legal debate — it’s a battle over who gets to control the future of AI.
Should states have a say in protecting their people from AI harms? Or should big tech set the rules with minimal government oversight?
With a Senate vote looming and public pressure mounting, this issue could define how the U.S. regulates artificial intelligence for years to come.