2 Sources
[1]
Google Gemini dubbed 'high risk' for kids and teens in new safety assessment | TechCrunch
Common Sense Media, a kids-safety-focused nonprofit offering ratings and reviews of media and technology, released its risk assessment of Google's Gemini AI products on Friday. While the organization found that Google's AI clearly told kids it was a computer, not a friend -- something that's associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals -- it did suggest that there was room for improvement across several other fronts.

Notably, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult versions of Gemini under the hood, with only some additional safety features added on top. The organization believes that for AI products to truly be safer for kids, they should be built with child safety in mind from the ground up. For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children, which they may not be ready for, including information related to sex, drugs, alcohol, and unsafe mental health advice.

The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted with ChatGPT for months about his plans and successfully bypassed the chatbot's safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user's suicide.

In addition, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns somehow.

Common Sense also said that Gemini's products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both were labeled "High Risk" in the overall rating, despite the filters added for safety.

"Gemini gets some basics right, but it stumbles on the details," Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement about the new assessment. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults," Torney added.

Google pushed back against the assessment, while noting that its safety features were improving. The company told TechCrunch it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams and consults with outside experts to improve its protections. However, it also admitted that some of Gemini's responses weren't working as intended, so it added additional safeguards to address those concerns. The company pointed out (as Common Sense had also noted) that it does have safeguards to prevent its models from engaging in conversations that could give the semblance of real relationships. Plus, Google suggested that Common Sense's report seemed to have referenced features that weren't available to users under 18, but it didn't have access to the questions the organization used in its tests to be sure.
[2]
Google's Gemini platforms for kids and teens not safe for children, report reveals the likely risks of using these AI products
Gemini, Google's AI assistant, is one of the most popular generative artificial intelligence tools in use today. While millions of Internet users turn to Gemini on a daily basis, a comprehensive risk assessment released by Common Sense Media has found that the Google AI tool is not safe for certain age groups and poses risks despite its added filters.

The report revealed that both Gemini Under 13 and Gemini with teen protections look like adult versions of Gemini with some extra safety features, rather than platforms built exclusively for kids from scratch. The filters added to Gemini offer some protection; however, they still expose kids to some inappropriate material and fail to recognize serious mental health symptoms. Both products received "High Risk" overall ratings, with testing uncovering fundamental design flaws and a lack of age-appropriate safety measures.

The report also recommended age limits for AI chatbot use and advised that, where a chatbot is used at all, it should be under adult supervision. Common Sense Media recommends that no child 5 years old and under use any AI chatbot, and that children ages 6-12 use chatbots only under adult supervision. Independent chatbot use is safe for teens ages 13-17, but only for schoolwork, homework, and creative projects. The organization continues to recommend that no one under age 18 use AI chatbots for companionship, including mental health and emotional support.

"Gemini gets some basics right, but it stumbles on the details," said Common Sense Media Senior Director of AI Programs Robbie Torney. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults."

According to the report, these Google products can share inappropriate and unsafe material that kids are not ready for, including material related to sex, drugs, alcohol, and unsafe mental health "advice." The AI assistants do clearly tell kids they are a computer, not a friend, and won't pretend to be someone else. Yet Gemini treats all kids or teens the same despite huge developmental differences, ignoring that younger users require different guidance and information than older ones. The two Gemini products also take steps to protect kids' privacy by not remembering conversations, but this creates new problems by opening the door to conflicting or unsafe advice.
Common Sense Media's risk assessment of Google's Gemini AI products for children and teenagers reveals significant safety concerns, labeling both "Under 13" and "Teen Experience" tiers as high risk despite added safety features.
Common Sense Media, a nonprofit organization focused on kids' safety in media and technology, has released a risk assessment of Google's Gemini AI products, raising significant concerns about their safety for children and teenagers [1]. The assessment, which evaluated both the "Under 13" and "Teen Experience" tiers of Gemini, labeled these platforms as "High Risk" despite the presence of additional safety features [2].
The report highlights several critical issues with Google's AI platforms for young users:
Underlying Architecture: Both the "Under 13" and "Teen Experience" versions of Gemini appear to be adult versions with added safety features, rather than platforms built from the ground up with child safety in mind [1].
Inappropriate Content: Despite safety filters, Gemini can still share "inappropriate and unsafe" material with children, including information related to sex, drugs, alcohol, and potentially harmful mental health advice [1].
One-Size-Fits-All Approach: The platforms fail to recognize the developmental differences between younger and older users, providing the same guidance and information regardless of age [2].
Mental Health Concerns: The assessment raises particular concerns about mental health advice, especially in light of recent incidents where AI has reportedly played a role in teen suicides [1].
Common Sense Media has provided specific recommendations based on their findings: no AI chatbot use for children 5 and under; chatbot use for children ages 6-12 only under adult supervision; independent use for teens ages 13-17 limited to schoolwork, homework, and creative projects; and no AI chatbot use for companionship, including mental health and emotional support, for anyone under 18 [2].
Google has pushed back against some aspects of the assessment while acknowledging areas for improvement: the company said it has specific policies and safeguards in place for users under 18, that it red-teams and consults with outside experts on its protections, and that it added additional safeguards after finding some of Gemini's responses weren't working as intended; it also suggested the report may have referenced features unavailable to users under 18 [1].
The assessment's timing is particularly significant as rumors suggest Apple is considering Gemini as the large language model to power its forthcoming AI-enabled Siri, potentially exposing more young users to these risks [1].

This assessment comes amid growing concerns about AI safety for young users. Recent incidents, including OpenAI facing a wrongful death lawsuit related to a teen suicide, highlight the urgent need for robust safety measures in AI platforms accessible to children and teenagers [1].
Robbie Torney, Senior Director of AI Programs at Common Sense Media, emphasized the need for age-appropriate design: "For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults" [2].

As AI continues to integrate into various aspects of daily life, the industry faces increasing pressure to develop AI systems that are not only powerful but also safe and appropriate for users of all ages. This assessment of Google's Gemini platforms underscores the challenges and responsibilities tech companies face in this evolving landscape.
Summarized by Navi