Google’s Gemini AI is back in the spotlight, and this time the news isn’t flattering. A new review from Common Sense Media, a nonprofit that specializes in child safety and media ratings, has raised alarms by labeling Gemini “High Risk” for children and teenagers.
At first glance, Gemini seemed to do some things right. The AI makes it clear to kids that it’s not a human friend but a computer program. That may sound like a small detail, but experts say it matters: some young users grow too attached to chatbots, and in extreme cases that attachment has been linked to mental health issues. Still, once Common Sense dug deeper, the group found troubling gaps in Gemini’s safety design.
What Went Wrong
Common Sense says that instead of building a kid-friendly version from scratch, Google simply repackaged the adult version of Gemini with a few extra filters. The result? Children could still get responses about sex, drugs, and alcohol, along with unsafe mental health advice. These aren’t the kinds of conversations parents want their kids having with an AI system.
That’s especially worrying given recent events. Earlier this year, OpenAI was sued after a 16-year-old boy died by suicide, reportedly after long conversations with ChatGPT. In a separate case, Character.AI also faced a lawsuit tied to a teen’s death. Against that backdrop, Common Sense’s findings feel even more urgent.
Why This Matters More Than Ever
The timing of the report couldn’t be more important. Leaks suggest that Apple is considering using Gemini to power the next generation of Siri, which could roll out next year. If that happens, millions of kids and teens might interact with Gemini every day on iPhones and iPads. Unless Google and Apple address the safety concerns first, the integration could put far more young people in front of the very risks Common Sense flagged.
Robbie Torney, Senior Director of AI Programs at Common Sense Media, summed it up: “Gemini AI gets some basics right, but it stumbles on the details. An AI platform for kids should meet them where they are — not treat an 8-year-old and a 17-year-old the same way.”
How Google Responded

Google pushed back against the “High Risk” label, saying it already has safeguards in place for users under 18. The company noted that its AI avoids forming “relationship-like” conversations, something experts consider especially dangerous. Google also claimed that some of the examples in the Common Sense report may have come from features that aren’t even available to younger users.
Still, Google admitted that some answers weren’t working the way they should. The company says it has since added extra protections and continues to test Gemini with outside experts to improve safety.
How Gemini Stacks Up Against Other AIs
Common Sense Media has rated other AI systems before, and the results show Gemini AI isn’t alone in facing criticism:
- Meta AI and Character.AI were labeled “unacceptable,” the group’s harshest rating.
- Perplexity was also marked “High Risk.”
- ChatGPT landed in the “moderate” category.
- Claude, which is aimed at adults, was rated “minimal risk.”
Compared with these, Gemini’s “High Risk” rating puts it in the middle of the pack: not the most dangerous, but far from safe.
What Comes Next
This report highlights a bigger problem: AI products are racing into homes, schools, and phones faster than safety rules can keep up. Companies like Google often add filters and policies, but critics argue that isn’t enough. To truly protect children, experts say AI must be designed with kids in mind from day one, not patched after the fact.
Looking forward, pressure is likely to grow on lawmakers to update regulations for AI and child safety. If Apple does move forward with Gemini AI as part of Siri, the debate will only intensify. For now, parents are left in a tricky position — balancing the benefits of new AI tools with very real concerns about what their children might see or hear.