Washington, D.C. — U.S. Senator Josh Hawley (R-MO) announced that he will be looking into Meta after the company’s AI chatbots were reported to be talking to children “romantically” and “sensuously.” The announcement follows the leak of internal documents to Reuters, which raised safety concerns about Meta’s generative AI technologies as they relate to children.
Concerns Raised by AI Guidelines
According to the leaked guidelines, titled “GenAI: Content Risk Standards,” Meta’s chatbots were permitted to hold romantic dialogues with users as young as eight. One example in the document showed a chatbot telling a child, “Every inch of you is a masterpiece – a treasure I cherish deeply.”
Hawley, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, called the chatbots’ programmed dialogue deeply concerning. In a strongly worded letter to Meta CEO Mark Zuckerberg, Hawley noted that the troubling policies were removed only after public scrutiny.
“Is there anything Big Tech won’t do for a quick buck?” Hawley shared on X. “We intend to learn who approved these policies, how long they were in place, and what Meta has done to prevent this from happening again.”
Meta Responds to the Allegations
A Meta representative told TechCrunch that the passages cited in the Reuters report were inconsistent with the company’s policies and values and have since been removed. However, lawmakers argue that the existence of these guidelines in the first place points to a broader pattern of neglect when it comes to child safety.
Senator Hawley has formally requested that Meta turn over every version of the guidelines, including drafts, edits, and the final standards, along with documentation of the products governed by those rules. He also asked for safety assessment reports and the names of those who reviewed and approved the policies. Meta has until September 19 to respond.
Congressional Support for the Probe
Senator Marsha Blackburn (R-TN) has also backed Hawley’s investigation, voicing concerns about Meta’s record on protecting children. She said, “When it comes to protecting precious children online, Meta has failed miserably by every possible measure. Even worse, the company has turned a blind eye to the devastating consequences of how its platforms are designed. This report reaffirms why we need to pass the Kids Online Safety Act.”
The Kids Online Safety Act, which is under consideration by Congress, would impose new obligations on online platforms regarding content shown to children, including greater liability for harm resulting from dangerous design choices.
Why This Matters
The inquiry is part of a mounting wave of concern about the safety and security of AI systems. While AI chatbots are marketed as companions and as tools for learning, socializing, and productivity, this episode shows the other side of the story: how quickly the technology can be abused when protections and guidelines are weak.
The core issue is that children, the users most vulnerable to online harm, could be targeted through deceptive AI interactions and manipulative content. Hawley wants to determine whether Meta improperly relaxed its protections against these dangers without disclosing that to regulators or the public.
Looking Ahead: What Comes Next
Should Meta fail to comply with the document request by September 19, the Senate subcommittee has the option to escalate with subpoenas and hearings. Analysts believe the case could intensify the debate over AI and children’s online safety, strengthening the argument for new federal regulation.
Meta has faced youth-safety controversies before. Its platforms Facebook, Instagram, and WhatsApp have all drawn criticism following damaging revelations, and internal researchers and whistleblowers have disclosed evidence of Instagram’s harmful influence on adolescents and teenagers, prompting legislative hearings. The AI chatbot problem now raises even deeper questions about the protection of children and how AI should be governed.
Historical Context: Ongoing Scrutiny of Big Tech
In 2021, whistleblower Frances Haugen used the company’s own internal research to bring Instagram’s harm to teenagers’ mental health to light.
In 2023, lawmakers chastised social media executives for failing to prevent the abuse of children on their platforms.
In 2024, Meta and other AI companies began introducing new generative AI capabilities, sparking concerns about whether proper safeguards would keep pace with the technology.
In 2025, the shocking details of AI chatbots interacting “romantically” with children emerged, reinforcing fears that Big Tech prioritizes unrestrained growth over real responsibility.
In Summary: Senator Hawley’s investigation into Meta places the company at the center of the debate over AI, children, and corporate responsibility, with far-reaching implications. Its outcome is likely to change not just Meta’s policies but also to set a new paradigm for AI governance and child safety.