4 Sources
[1]
Google Gemini dubbed 'high risk' for kids and teens in new safety assessment | TechCrunch
Common Sense Media, a kids-safety-focused nonprofit offering ratings and reviews of media and technology, released its risk assessment of Google's Gemini AI products on Friday. While the organization found that Google's AI clearly told kids it was a computer, not a friend -- pretending otherwise is associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals -- it did suggest that there was room for improvement across several other fronts.

Notably, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult versions of Gemini under the hood, with only some additional safety features added on top. The organization believes that for AI products to truly be safer for kids, they should be built with child safety in mind from the ground up. For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children, which they may not be ready for, including information related to sex, drugs, alcohol, and unsafe mental health advice.

The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, allegedly after consulting with ChatGPT for months about his plans and successfully bypassing the chatbot's safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user's suicide.

In addition, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns somehow.

Common Sense also said that Gemini's products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both were labeled "High Risk" in the overall rating, despite the filters added for safety.

"Gemini gets some basics right, but it stumbles on the details," Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement about the new assessment. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults," Torney added.

Google pushed back against the assessment, while noting that its safety features were improving. The company told TechCrunch it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams and consults with outside experts to improve its protections. However, it also admitted that some of Gemini's responses weren't working as intended, so it added additional safeguards to address those concerns. The company pointed out (as Common Sense had also noted) that it does have safeguards to prevent its models from engaging in conversations that could give the semblance of real relationships. Plus, Google suggested that Common Sense's report seemed to have referenced features that weren't available to users under 18, but it didn't have access to the questions the organization used in its tests to be sure.
[2]
Google's AI, Gemini, is 'high risk' for kids and teens, safety report finds
You might want to think twice before letting your children use Google Gemini. A new safety report from nonprofit Common Sense Media found that the search giant's AI tool, Gemini, presents a "high risk" for kids and teens. The assessment found that Gemini presented a risk to young people despite Google offering "Under 13" and "Teen Experience" tiers for Gemini. "While Gemini's filters offer some protection, they still expose kids to some inappropriate material and fail to recognize serious mental health symptoms," the report read.

The safety assessment presented a mixed bag of results for Gemini. At times, for instance, it would reportedly share "material related to sex, drugs, alcohol, and unsafe mental health 'advice.'" It did, however, clearly tell kids that it is a computer and not a friend -- and it would not pretend to be a person. Overall, Common Sense Media found that Gemini's "Under 13" and "Teen Experience" were modified versions of Gemini and not something built from the ground up.

"Gemini gets some basics right, but it stumbles on the details," Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults."

To be clear, Gemini is far from the only AI tool that presents safety risks. Overall, Common Sense recommends no chatbots for kids under five, close supervision for ages 6-12, and content limits for teens. Experts have found that other AI products, like Character.AI, are not safe for teens either. In general, it's best to keep a close eye on how young people are using AI.
[3]
Google's Gemini deemed "high risk" for kids in research by non-profit
For kids under 13, Gemini usage should only be allowed under close parental watch. Over the past few months, AI chatbots offered by top names such as OpenAI and Meta have been found engaging in problematic behavior, especially with young users. The latest investigation covers Gemini, noting that Google's chatbot can share "inappropriate and unsafe" content with kids and teens.

What's new in the chatbot risk arena?

In an analysis by the non-profit Common Sense Media, it was discovered that Gemini Under 13 and Gemini accounts with teen protections enabled are "high risk" for their target audience. "They still expose kids to some inappropriate material and fail to recognize serious mental health symptoms," the organization shared. In its tests, the team discovered that Gemini can share content related to sex, drugs, alcohol, and unsafe mental health suggestions with young users. The report highlights numerous issues with how Gemini handles chats with young users, and how some of the responses can be too complex for children under the age of 13.

But the risks run deeper. "Gemini U13 doesn't reject sexual content consistently," the report points out, adding that some of the AI's responses contained vivid explanations of sexual content. The non-profit also found that the drug-related filters are not triggered consistently, and as a result, the chatbot occasionally doled out instructions on obtaining substances such as marijuana, ecstasy, Adderall, and LSD.

What's next?

In the wake of the investigation, the non-profit suggests that Gemini Under 13 should only be used under the strict supervision of guardians. "Common Sense Media recommends that no user under 18 use chatbots for mental health advice or emotional support," argues the risk assessment report. It further advises parents to keep a vigilant eye on their children's AI usage and interpret the answers for them. As for Google, the tech giant has been asked to fix the calibration of responses given by Gemini to specific age groups, perform extensive testing with kids, and go beyond simple content filters.

This isn't the first report of its kind. In the wake of recent uproar, OpenAI has announced that it will soon roll out parental controls in ChatGPT and an alert system that notifies guardians when their wards show signs of acute distress. Meta also recently made changes to ensure that its Meta AI no longer discusses eating disorders, self-harm, or suicide with teen users, and no longer engages them in romantic conversations.
[4]
Google's Gemini platforms for kids and teens not safe for children, report reveals the likely risks of using these AI products
The comprehensive risk assessment released by Common Sense Media has revealed that both Gemini Under 13 and Gemini with teen protections look like adult versions of Gemini with some extra safety features.

Gemini, Google's AI assistant, is one of the most popular generative artificial intelligence tools in use today. While millions of Internet users take the help of Gemini on a daily basis, a risk assessment report has found that the Google AI tool is not safe for certain age groups and poses risks despite the added filters. The comprehensive risk assessment released by Common Sense Media has recommended age limits for children who should not be allowed to use AI chatbots and has also advised that if a chatbot has to be used, it should be under adult supervision.

The report also revealed that both Gemini Under 13 and Gemini with teen protections look like adult versions of Gemini with some extra safety features. The filters added to Gemini offer some protection; however, they still expose kids to some inappropriate material and fail to recognize serious mental health symptoms. This suggests that these two platforms were not built exclusively for kids from scratch. The two AI systems, Gemini Under 13 and Gemini with teen protections, received "High Risk" overall ratings. The testing discovered fundamental design flaws and a lack of age-appropriate safety measures.

Common Sense Media has recommended that no child 5 years old and under use any AI chatbots and that children ages 6-12 only use chatbots under adult supervision. Independent chatbot use is safe for teens ages 13-17, but only for schoolwork, homework, and creative projects. The organization continues to recommend that no one under age 18 use AI chatbots for companionship, including mental health and emotional support.

"Gemini gets some basics right, but it stumbles on the details," said Common Sense Media Senior Director of AI Programs Robbie Torney. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults."

According to the report, the two Google AI products seem to be the adult version with some extra safety features, not something built from the ground up for kids or teens. These products can share inappropriate and unsafe material that kids are not ready for, including material related to sex, drugs, alcohol, and unsafe mental health "advice." The AI assistants clearly tell kids they are a computer, not a friend, and won't pretend to be someone else. However, Gemini treats all kids and teens the same despite huge developmental differences, ignoring the fact that younger users require different guidance and information than older ones. The two Gemini products take steps to protect kids' privacy by not remembering conversations, but this creates new problems by opening the door to conflicting or unsafe advice.
Common Sense Media's risk assessment of Google's Gemini AI products reveals significant safety concerns for young users, despite added protections. The report highlights the need for age-appropriate AI design and stricter supervision for children using AI chatbots.
Common Sense Media, a nonprofit organization focused on kids' safety in media and technology, has released a risk assessment of Google's Gemini AI products, labeling them as 'high risk' for children and teenagers [1]. The report highlights significant concerns about the safety and appropriateness of Gemini's "Under 13" and "Teen Experience" tiers, despite Google's efforts to implement additional safety features.

The assessment revealed that Gemini's versions for younger users appear to be modified adult versions rather than products built from the ground up with child safety in mind [2]. Some of the primary issues identified include:

- Inappropriate Content: Gemini was found to share material related to sex, drugs, alcohol, and unsafe mental health advice, which may not be suitable for young users [3].
- Mental Health Concerns: The AI failed to recognize serious mental health symptoms consistently, raising alarms about potential risks to vulnerable users [1].
- One-Size-Fits-All Approach: Gemini treats all kids or teens the same, ignoring significant developmental differences and the need for age-appropriate guidance [4].

Despite the concerns, the assessment noted some positive aspects of Gemini:

- Clear AI Identification: Gemini clearly tells users it is a computer and not a friend, potentially reducing the risk of delusional thinking in vulnerable individuals [1].
- Privacy Protection: The AI doesn't remember conversations, although this feature may lead to inconsistent or potentially unsafe advice [4].

Google has responded to the assessment, stating that it has specific policies and safeguards in place for users under 18. The company acknowledged some unintended responses and has implemented additional safeguards to address these concerns [1].
Common Sense Media has provided several recommendations based on their findings:

- Age Restrictions: No chatbot use for children under 5, close supervision for ages 6-12, and content limits for teens [2].
- Mental Health Advice: The organization strongly advises against using AI chatbots for mental health advice or emotional support for users under 18 [3].
- Parental Supervision: For children under 13, Gemini usage should only be allowed under close parental watch [3].

This assessment comes at a crucial time as AI integration in everyday technology accelerates. With reports of AI allegedly playing a role in teen suicides and ongoing lawsuits against AI companies, the industry faces increasing pressure to prioritize user safety, especially for younger demographics [1].