Google's Gemini AI Labeled 'High Risk' for Kids and Teens in Safety Assessment

Reviewed by Nidhi Govil


Common Sense Media's risk assessment of Google's Gemini AI products reveals significant safety concerns for young users, despite added protections. The report highlights the need for age-appropriate AI design and stricter supervision for children using AI chatbots.

Google's Gemini AI Faces Scrutiny Over Child Safety

Common Sense Media, a nonprofit organization focused on kids' safety in media and technology, has released a risk assessment of Google's Gemini AI products, labeling them "high risk" for children and teenagers [1]. The report highlights significant concerns about the safety and appropriateness of Gemini's "Under 13" and "Teen Experience" tiers, despite Google's efforts to implement additional safety features.

Source: Mashable


Key Findings and Concerns

The assessment revealed that Gemini's versions for younger users appear to be modified adult versions rather than products built from the ground up with child safety in mind [2]. Some of the primary issues identified include:

  1. Inappropriate Content: Gemini was found to share material related to sex, drugs, alcohol, and unsafe mental health advice, which may not be suitable for young users [3].

  2. Mental Health Concerns: The AI failed to consistently recognize serious mental health symptoms, raising alarms about potential risks to vulnerable users [1].

  3. One-Size-Fits-All Approach: Gemini treats all kids and teens the same, ignoring significant developmental differences and the need for age-appropriate guidance [4].

Positive Aspects and Google's Response

Despite the concerns, the assessment noted some positive aspects of Gemini:

  1. Clear AI Identification: Gemini clearly tells users it is a computer, not a friend, potentially reducing the risk of delusional thinking in vulnerable individuals [1].

  2. Privacy Protection: The AI doesn't remember conversations, although this feature may lead to inconsistent or potentially unsafe advice [4].

Google has responded to the assessment, stating that it has specific policies and safeguards in place for users under 18. The company acknowledged some unintended responses and has implemented additional safeguards to address these concerns [1].

Source: Digital Trends


Recommendations and Industry Implications

Common Sense Media has provided several recommendations based on their findings:

  1. Age Restrictions: No chatbot use for children under 5, close supervision for ages 6-12, and content limits for teens [2].

  2. Mental Health Advice: The organization strongly advises against using AI chatbots for mental health advice or emotional support for users under 18 [3].

  3. Parental Supervision: For children under 13, Gemini usage should only be allowed under close parental supervision [3].

Source: Economic Times


This assessment comes at a crucial time as AI integration into everyday technology accelerates. With reports of AI allegedly playing a role in teen suicides and ongoing lawsuits against AI companies, the industry faces increasing pressure to prioritize user safety, especially for younger demographics [1].

TheOutpost.ai


© 2025 Triveous Technologies Private Limited