Common Sense Media, a nonprofit organization focused on child safety, released its assessment of Google’s Gemini AI on September 5. The report found that while Gemini does a good job of clearly telling children it is a computer rather than a “friend,” serious shortcomings remain.
“Not built for kids, just modified for them”
According to the organization, the “Under 13” and “Teen Experience” versions of Gemini are essentially adult-oriented systems with extra safety filters added on top. Experts argue that for AI to truly be safe, it needs to be designed for children from the ground up — not adapted later.
The analysis found that Gemini could still expose children to inappropriate or unsafe material, including content about sex, drugs, and alcohol, as well as harmful mental health advice.
Suicide concerns and legal cases
One of the most alarming risks involves the potential for AI to negatively impact young users’ mental health. In recent months, some teenage suicides have reportedly been linked to conversations with AI systems.
For example, OpenAI is now facing its first wrongful death lawsuit, after a 16-year-old boy who had bypassed ChatGPT’s safeguards spent months discussing his suicide plans with the chatbot before taking his life. Similarly, Character.AI was previously sued over a teen suicide.
Apple’s potential Gemini integration
These concerns gained more weight after reports suggested that Apple may use Gemini as the large language model (LLM) powering its next-gen Siri, expected in 2026. Without proper safeguards, this could further increase risks for teens.
Rated as “High Risk”
In its final evaluation, Common Sense Media labeled Gemini’s youth-targeted products as “High Risk.”
Robbie Torney, Senior Director of AI Programs at the organization, stated:
“Gemini gets some basics right, but it stumbles on the details. An AI platform for kids should meet them where they are, not take a one-size-fits-all approach. To be safe and effective, AI must be designed with children’s developmental needs in mind — not as a watered-down version of adult tools.”
Google’s response
Google pushed back against the assessment, emphasizing that it has strict safeguards in place for users under 18. The company explained that it works with outside experts, conducts red-team testing, and continues to improve protections.
Google also acknowledged that some responses had not worked as intended, prompting it to add further safety layers. The company suggested that parts of the report may have referred to features unavailable to under-18 users, though it could not confirm this without seeing the test questions.
Comparative ratings of other AI tools
Common Sense has previously evaluated other AI systems, with the following results:
- Meta AI and Character.AI: “Unacceptable” (severe risk)
- Perplexity: “High Risk”
- ChatGPT: “Moderate Risk”
- Claude (18+ only): “Minimal Risk”
Conclusion: While Google Gemini has implemented some safety mechanisms, the assessment concludes that it remains high-risk for children and teens. As AI products evolve rapidly, the key question remains: will tech giants prioritize speed of innovation or the safety and well-being of their youngest users?