May 14, 2026
Staff Reporter | PNN
Common Sense Media, a nonprofit organization focused on children's safety in media and technology, on Friday released a risk assessment of Google's Gemini AI product. The report notes that while Gemini clearly tells children it is a computer, not a friend, the product is still not safe enough for young users.
According to the report, Gemini's "Under 13" and "Teen Experience" tiers are essentially the adult version of Gemini with some additional safety features layered on top. The organization argues that AI products intended for children should be built with child safety in mind from the ground up, not adapted after the fact.
The report also warns that Gemini can still share "inappropriate and unsafe" content with children, including material related to sex, drugs, and alcohol, as well as unsafe mental health advice. The finding carries added weight given recent cases in which AI chatbots have been implicated in teen suicides.
Robbie Torney, Senior Director of AI Programs at Common Sense Media, said: "Gemini gets some of the basics right, but it struggles with the finer safeguards. AI for children must be designed around their developmental stages, not simply offered as a modified version of an adult product."
Google responded to the assessment, saying it has specific policies and safeguards in place for users under 18. The company acknowledged, however, that some of Gemini's responses had not worked as intended, and said it has since added further protections.
The report concludes that Gemini does not adequately follow guidance for children and teens, classifying both its child and teen tiers as "high-risk." Common Sense Media has previously published assessments of other AI services, including OpenAI, Meta AI, Character.AI, Claude, and Perplexity.