
Texas Attorney General Ken Paxton just opened a fresh probe into Meta AI Studio and Character.AI.


He’s charging that the two companies are fooling kids into thinking their programs offer real mental health help. Paxton said in a statement on Monday that AI chatbots pose as friendly therapists when in reality they dispense cookie-cutter advice that isn’t tailored to the kids who need help most. He’s worried that the bland, recycled responses could steer vulnerable teens down the wrong path.

“Right now, when technology feels like it’s programmed into everything we do, it’s still our job to shield Texas kids from the lies and traps these gadgets set,” Paxton declared. “AI chatbots pretend to be the friend who always listens, but what they really do is trick kids into thinking they’re in therapy. The truth is, these bots spit back old, one-size-fits-all advice that’s been dressed up to sound like it comes from inside the child’s private diary.”

The Heart of the Investigation  

The inquiry turns on chatbots that act like therapists when they’re really just computer code. On Character.AI, user-created “Psychologist” personas have drawn in kids who believe they’ve found a real professional who can help. The bots aren’t trained, supervised, or licensed; they’re simply programmed to mimic human conversation. Even Meta, which isn’t hawking “therapy bots,” opens the same door through its AI chat features and user-created characters, which kids turn to for tips about life, school, and feelings without grasping the limits of that advice.

Both Meta and Character.AI insist they alert users before conversations start. A Meta rep told us the app “clearly labels AIs” and reminds everyone that chatbots aren’t certified experts. Character.AI claimed it sticks warnings at the top of chats, and even adds notes if a user builds a character pretending to be a therapist, doctor, or psychologist. Still, critics say the alerts are easy to miss, especially for kids, and the language can be confusing.  

About Personal Data  

The Attorney General’s office is also examining how the companies collect and use kids’ personal data. Paxton pointed out that even when chatbots say conversations are private, the fine print usually says chats are saved, tracked, and used to target ads and train the AI. Meta’s privacy policy says everything you type is saved to improve “AIs and related technology.” It doesn’t mention advertising outright, but since targeted ads are how Meta makes most of its money, the omission raises red flags. Character.AI’s policy is more explicit: the app collects your age, location, search terms, and activity in other apps, and that data can be combined with what you do on platforms like TikTok, YouTube, Reddit, and Instagram, both for ads and to help train AIs.

A representative of Character.AI noted that the firm has only begun trial runs of targeted ads and stressed that chat logs aren’t part of those tests. Still, the same privacy policy applies to everyone, younger users included, meaning teens get no extra protection.

Bigger Picture: Protecting Kids from AI


The story broke soon after Senator Josh Hawley announced a similar investigation into Meta. Earlier reports described Meta chatbots flirting and making suggestive comments to minors, a clear warning sign of what kids can encounter on AI platforms that lack proper safeguards.

Neither Character.AI nor Meta claims to serve kids under 13, yet little stops that age group from using the apps. Meta continues to draw fire for failing to block under-13 account creation, while Character.AI openly hosts kid-friendly characters. Its CEO, Karandeep Anand, has even disclosed that his own six-year-old daughter chats on the platform, though only under his supervision.

These investigations bring attention back to a national proposal called the Kids Online Safety Act, or KOSA. KOSA aims to shield kids from harmful online practices, like targeted ads that follow them around. The bill had support from both Democrats and Republicans in 2023, but it lost steam after tech giants pushed back hard. Lobbyists, including some from Meta, warned that the bill’s broad language could choke off innovation and hurt their business models. The measure got a second wind when Senators Marsha Blackburn and Richard Blumenthal reintroduced it in May 2025, proof the topic won’t disappear from Capitol Hill anytime soon.

What Comes Next

Texas Attorney General Ken Paxton has issued civil investigative demands to both Meta and Character.AI, requiring piles of documents, data, and sworn statements to determine whether the companies have broken state consumer protection laws. The inquiry’s outcome could weigh heavily, not just on these two firms but on the whole AI sector, which keeps rolling out chatbot features pitched as mental-health aids.

Looking ahead, this case could pave the way for how governments oversee AI tools aimed at kids and teens. If the AG uncovers wrongdoing, we may soon see tighter rules on how companies market chatbots, how warning statements are written, and how kids’ personal data is gathered and handled.

Looking Back: The Bigger Picture

2021 to 2022: Meta is dragged through lawsuit after lawsuit and hearing after hearing over evidence that Instagram is doing some teens’ mental health no favors. Beauty filters stop being treated as just filters.

2023: Chatbots like Character.AI and Replika take off, leaving everyone asking: is it creepy that we’re texting digital versions of ourselves and pouring out our private lives to them?

2024: Parents start to sweat as bots claiming to “help” teens with their feelings show up on screens. Where does therapy end, and where do pixels pitching concern begin?

2025: Texas and Washington, D.C. try to step in, telling schools and companies to cool it with the “bot is best friend, no human’s in charge” vibe. Senators dust off the Kids Online Safety Act and say, “Here we go again.”

Final Thoughts

Texas’s probe is just one swing in the larger fight over keeping kids safe, keeping code honest, and keeping tech out of grown-up roles it can’t handle. Meta and the chatbot companies point to their disclaimers, but critics say those mean little when the user is 14 and reads three sentences per scroll. Teens’ favorite apps keep getting more lifelike. Sooner or later, lawmakers will have to decide whether a chatbot playing therapist is just an app with a filter, or a real safety risk.