Curated by THEOUTPOST
On Wed, 30 Apr, 12:02 AM UTC
19 Sources
[1]
Google will soon start letting kids under 13 use its Gemini chatbot | TechCrunch
Next week, Google will begin allowing kids under 13 who have parent-managed Google accounts to use its Gemini chatbot, according to The New York Times. The Times reports that Gemini will be available to kids whose parents use Family Link, a Google service that enables families to opt into various Google services for their child. A Google spokesperson told the publication that Gemini has specific guardrails for younger users, and that the company won't use that data to train its AI. As The Times notes, chatbot makers are racing to capture younger audiences as the AI race heats up. That's despite the fact that chatbots today are imperfect at best -- and potentially harmful at worst. The UN Educational, Scientific and Cultural Organization late last year pushed for governments to regulate the use of generative AI in education, including implementing age limits for users and guardrails on data protection and user privacy.
[2]
Gemini for kids is rolling out, and it comes with 3 warnings from Google
In an email today to parents of supervised accounts, Google announced that Gemini apps will soon be available for child accounts with parental controls. Children can use Gemini to create songs, stories, and poems, ask questions, get homework help, learn new things, and more. This version of Gemini will have more restrictions than the regular version. Access to Gemini through a child account is available on the web at gemini.google.com and in the Gemini mobile app on Android and iOS devices.

Parents can manage access to Gemini in the Google Family Link account. To prevent your child from accessing Gemini, tap "Controls" in your child's profile, then tap "Gemini," and then tap "Gemini apps." If you turn access off and your child tries to use Gemini, they'll see a message that reads "Gemini isn't available for your account." Google will let you know when your child accesses Gemini for the first time.

Google warns that there are several things you should point out to your kids if you decide to let them interact with Gemini (and these are pretty solid rules for adults, too). First, help them understand that Gemini isn't a real person. It may talk like a person, but it has no emotions or feelings. Second, it's not always right. Any response should be double-checked against another source. Third, don't enter any sensitive or personal information into Gemini. Lastly, there are filters in place, but Gemini might still present content that you don't want your children to see.

Google says that it's gradually releasing access to Gemini apps for supervised accounts, so it's not available to everyone yet. If you don't see it now, you should soon.
[3]
Google is going to let kids use its Gemini AI
The company says kids will be able to use Gemini to do things like help them with homework or read them stories. As with its Workspace for Education accounts, Google says children's data will not be used to train AI. Still, in the email, Google warns parents that "Gemini can make mistakes," and kids "may encounter content you don't want them to see." Besides sillier mistakes like recommending glue as a pizza topping or miscounting the number of "r" letters in strawberry, some AI bots have had more distressing issues. Some young Character.ai users have struggled to tell the difference between chatbots and reality, with bots telling users they were talking to a real person. After lawsuits alleged the bots had offered inappropriate content, the company introduced new restrictions and parental controls.
[4]
Your Kids Can Now Use Google's Gemini AI
Google is set to roll out its flagship AI chatbot, Gemini, for children under 13 next week, though only for those who have parent-managed Google accounts set up. If you want your kids to use Gemini, you'll need to sign up for Google's Family Link service, a tool that allows parents to monitor how their child is spending time on their devices and adjust their privacy settings. Google said that Gemini can help kids "to ask questions, get homework help and make up stories," in an email sent to Family Link users, first spotted by The New York Times. Google reassured parents that data collected from under-13s using a Family Link account won't be used to train its AI, and promises it has guardrails in place to stop them from being exposed to unsafe content. Parents will be able to turn off their children's access to Gemini at any time, and will get a notification the first time they sign in, according to the email.

But Google Gemini's skill as a homework helper may be open to debate. In the past, it's been spotted recommending that users add glue to their pizzas to help toppings stick, among other recommendations unlikely to receive top marks, such as adding rocks to your diet or claiming dogs play in the NBA. More serious concerns have continued to emerge about the relationship between chatbots and children, with outlets like The Wall Street Journal exposing how some of Meta's digital companions could be manipulated into discussing sex with minors (under extremely specific conditions).

Big Tech has also had a mixed reception so far when it comes to rolling out child-focused products. We saw Meta scrap plans for a standalone Instagram Kids app back in 2021, amid pressure from groups like the National Association of Attorneys General (NAAG), which highlighted common concerns about the harm social media might pose to young people.
Meanwhile, apps like YouTube Kids repeatedly came under fire earlier in their history for allowing children to view inappropriate adverts. However, companies like Google are also beholden to much stricter regulations on the type of services they can provide to young children than just a few years ago. Under the Children's Online Privacy Protection Act (COPPA) and rule updates proposed in 2023, tech firms in the US are severely limited when it comes to things like collecting children's data or sending push notifications to keep kids scrolling.
[5]
Kids under 13 will soon get supervised access to Google Gemini
Google warns there are still some risks to watch out for as parents. Google Gemini is adding nannying to its chatbot skillset. According to a New York Times report, Google will make Gemini available to users under 13, so long as they're under a parent-managed Google account using Family Link. In an email sent to parents, Google said that kids will get access to Gemini to "ask questions, get homework help and make up stories." This expanded availability will come with guardrails for its new user base, Google spokesperson Karl Ryan told The New York Times, adding that they would prevent Gemini from offering up unsafe content to kids.

In the email, Google acknowledged that "Gemini can make mistakes" and recommended that parents teach their kids how to fact-check Gemini's responses. Along with double-checking, Google suggested reminding younger users that Gemini isn't human and telling them not to enter any sensitive or personal data into conversations. Even with those measures, the email still warned that children could "encounter content you don't want them to see."

With the staggering pace of AI chatbot adoption, concerns about underage users have been bubbling to the surface, driven by instances of factually incorrect or suggestive responses. In a report published last week, Common Sense Media warned that AI chatbots were "encouraging harmful behaviors, providing inappropriate content, and potentially exacerbating mental health conditions" for users under 18. Recently, the Wall Street Journal reported that Meta's AI chatbots were able to engage in sexual conversations with minors.

On top of dodging unsafe conversations, Google said it won't use any data from its younger Gemini users to train its AI models. For now, Google said it's gradually rolling out access to Gemini for supervised accounts.
[6]
Google Plans to Roll Out Gemini A.I. Chatbot to Children Under 13
Google plans to roll out its Gemini artificial intelligence chatbot next week for children under 13 who have parent-managed Google accounts, as tech companies vie to attract young users with A.I. products. "Gemini Apps will soon be available for your child," the company said in an email this week to the parent of an 8-year-old. "That means your child will be able to use Gemini" to ask questions, get homework help and make up stories. The chatbot will be available to children whose parents use Family Link, a Google service that enables families to set up Gmail and opt into services like YouTube for their child. To sign up for a child account, parents provide the tech company with personal data like their child's name and birth date. Gemini has specific guardrails for younger users to hinder the chatbot from producing certain unsafe content, said Karl Ryan, a Google spokesman. When a child with a Family Link account uses Gemini, he added, the company will not use that data to train its A.I. Introducing Gemini for children could accelerate the use of chatbots among a vulnerable population as schools, colleges, companies and others grapple with the effects of popular generative A.I. technologies. Trained on huge amounts of data, these systems can produce humanlike text and realistic-looking images and videos. Google and other A.I. chatbot developers are locked in a fierce competition to capture young users. President Trump recently urged schools to adopt the tools for teaching and learning. Millions of teenagers are already using chatbots as study aids, writing coaches and virtual companions. Children's groups warn the chatbots could pose serious risks to child safety. The bots also sometimes make stuff up. UNICEF, the United Nations' children's agency, and other children's groups have noted that the A.I.
systems could confuse, misinform and manipulate young children who may have difficulty understanding that the chatbots are not human. "Generative A.I. has produced dangerous content," UNICEF's global research office said in a post on A.I. risks and opportunities for children. Google acknowledged some risks in its email to families this week, alerting parents that "Gemini can make mistakes" and suggesting they "help your child think critically" about the chatbot. The email also recommended parents teach their child how to fact-check Gemini's answers. And the company suggested parents remind their child that "Gemini isn't human" and "not to enter sensitive or personal info in Gemini." Despite the company's efforts to filter inappropriate material, the email added, children "may encounter content you don't want them to see." Over the years, tech giants have developed a variety of products, features and safeguards for teens and children. In 2015, Google introduced YouTube Kids, a stand-alone video app for children that is popular among families with toddlers. Other efforts to attract children online have prompted concerns from government officials and children's advocates. In 2021, Meta halted plans to introduce an Instagram Kids service -- a version of its Instagram app intended for those under the age of 13 -- after the attorneys general of several dozen states sent a letter to the company saying the firm had "historically failed to protect the welfare of children on its platforms." Some prominent tech companies -- including Google, Amazon and Microsoft -- have also paid multimillion-dollar fines to settle government complaints that they violated the Children's Online Privacy Protection Act. That federal law requires online services aimed at children to obtain a parent's permission before collecting personal information, like a home address or a selfie, from a child under 13. 
Under the Gemini rollout, children with family-managed Google accounts would initially be able to access the chatbot on their own. But the company said it would alert parents and that parents could then manage their child's chatbot settings, "including turning access off." "Your child will be able to access Gemini Apps soon," the company's email to parents said. "We'll also let you know when your child accesses Gemini for the first time." Mr. Ryan, the Google spokesman, said the approach to providing Gemini for young users complied with the federal children's online privacy law.
[7]
Google Gemini is coming for your children
Children will need to have accounts set up through Google Family Link. AI-powered systems already represent a mire of privacy concerns, and while that's one thing when we're talking about adults choosing what they're OK sharing with AI, it's a very different conversation when we start involving children. Still, tech companies are not about to ignore an entire user segment if they don't have to, and we've already seen Google working on Gemini tools designed specifically for kids. Now it looks like Google's finally just about ready to open the floodgates.
[8]
PSA: Google's Gemini AI is coming to your kid's device, but you can turn it off
Summary: Google is launching a kid-friendly version of its Gemini AI chatbot for children under 13, available only through Family Link. While Google says it won't use kids' data to train Gemini and has added content filters, it admits the filters aren't perfect. Parents will have the option to disable Gemini entirely if they choose.

A couple of days ago, ChatGPT was caught allowing minors to generate erotica, as reported by TechCrunch. The AI bot even encouraged a few of these minor users to ask for more explicit content. OpenAI was quick to acknowledge the issue and told TechCrunch that it is "actively deploying a fix," noting that its policies don't allow such responses to be shown to users under the age of eighteen.

While it was confirmed that the issue was due to a bug, it goes to show just how dangerous AI can be, especially in the hands of youngsters. While users need to be at least thirteen to use both OpenAI's ChatGPT and Google Gemini, Google plans to roll out Gemini to children under 13 as well. Although Google is directing its efforts toward ensuring a safe experience for younger users, part of me is afraid it won't end well.

A kid-friendly version of Gemini is launching next week, but it can be disabled. As reported by The New York Times, Google is planning to roll out its Gemini AI chatbot for children under the age of 13. The kid-friendly version of Gemini will only be available to children under 13 who use Family Link, which allows parents or guardians to set up a Google account for their child.
Family Link also lets parents supervise their child's account by deciding which apps their child can install, blocking certain apps, changing app permissions, setting screen time limits, monitoring usage, and even restricting mature content on Google Play. Google has started emailing parents who use Family Link controls to manage their child's device, mentioning that Gemini apps will soon be available for their child. This means they'll be able to use the AI assistant to "ask questions, get homework help, and make up stories."

While Google will not use the data of a child with a Family Link account who uses the kid-friendly Gemini to train its AI, the tech giant acknowledged the risks of AI in the email. Google clarified that Gemini can indeed make mistakes, and suggested parents "help your child think critically about Gemini responses." It also recommended reminding children that Gemini isn't human, showing them how to double-check its responses, and telling them not to share sensitive or personal information when using Gemini. Google noted that its filters "try to limit access to inappropriate content, but they're not perfect." Ultimately, this could result in children seeing content that parents might find inappropriate.

The good news? If you'd rather your child not use Gemini at all, even the kid-friendly version, Google spokesperson Karl Ryan confirmed to The Verge that parents will have the option to disable it via Family Link. He also mentioned that parents or guardians will be notified when their child uses Gemini for the first time.
[9]
Google confirms child-friendly version of Gemini AI chatbot soon
With parental supervision, children will be able to get homework help and more via Google's AI assistant. Earlier this month, we heard rumblings that Google had plans to launch a more child-friendly version of its AI chatbot Gemini, and now 9to5Google reports that the tech giant has confirmed in an email to parents that the kids' version of Gemini is officially in the works. Children under the age of 13 will be able to start using Gemini in the coming months via a supervised account, and parents will be able to manage their children's usage via Google's Family Link app. Google says Gemini can help children with homework and creative endeavors like making up stories, but also points out that Gemini can indeed make mistakes. The tech giant wants parents to teach children never to enter personal information into the chatbot, and to think critically about Gemini's answers and always double-check responses. "Remind [your child] that Gemini isn't human. Even though it sometimes talks like one, it can't think for itself or feel emotions," Google writes in the email to parents, who can disable Gemini access for their kids via the Family Link app or website.
[10]
Google's Gemini AI will soon be accessible to kids under 13 -- here's how that could look
Google is looking to gain some younger users of its AI tools, with the company confirming Gemini will soon roll out to children under the age of 13, as per The New York Times. This comes at a time when AI companies are all looking to seize extra traffic in a crowded marketplace. However, there will -- thankfully -- be rules in place for kids planning to start using Gemini to help them with their homework.

Most importantly, Gemini will only be available to children whose parents use Family Link, a parental control system made by Google. Through the platform, parents can manage how long children spend on certain apps and control the settings of what they can access. While it is not immediately clear what rules have been put in place for these children using Gemini, Google has already said that it won't use their activity to train its models.

Google has previously outlined its position on child safety and AI, publishing a blog post in late 2023. At the time, Google's AI model was Bard. While things have changed since then, the focus was on identifying topics that were inappropriate for children and adding safety guardrails around them. The AI model also used a double-check feature, in which factual questions were reanalysed before an answer was given. With an even younger crowd, these types of safety measures will be even more important.

"Gemini Apps will soon be available for your child," the company said in an email this week to the parent of an 8-year-old, as reported in The New York Times. "That means your child will be able to use Gemini to ask questions, get homework help, and make up stories." Google acknowledged some risks in its email to families this week, alerting parents that "Gemini can make mistakes" and suggesting they "help [their] child think critically" about the chatbot.

Google went on to recommend that parents teach their children how to fact-check Gemini's answers, reminding them that Gemini isn't human and that children should not give sensitive or personal information to it. While Gemini will attempt to filter inappropriate material, this remains the biggest concern with this kind of update: AI can still accidentally offer content that is deemed inappropriate, or as Google puts it, your children "may encounter content you don't want them to see". While Gemini will be automatically available to these children under 13, parents will be notified when they start using it. From here, they can decide how much access is granted, including turning it off completely.
[11]
Report: Google will put Gemini AI in the hands of kids under 13
This week, Google reportedly sent an email to parents to let them know that the Gemini AI chatbot will soon be available for children under 13 years old. The New York Times cites an email that states the chatbot would be available starting next week for certain users. (Chrome Unboxed reported on the same email on April 29.) Google sent the email to parents who use the company's Family Link service, which lets families set up parental controls for Google products like YouTube and Gmail. Only children who participate in Family Link would have access to Gemini, for now. The email reportedly told parents their children would be able to ask Gemini questions or get help with tasks like homework.

The move comes days after the nonprofit Common Sense Media declared that AI companions represent an "unacceptable risk" for people under 18. Common Sense Media worked with researchers from Stanford School of Medicine's Brainstorm Lab for Mental Health Innovation, resulting in a report urging parents to stop underage users from accessing tools like Character.ai. Character.ai is one of a growing number of services that let users create and interact with AI "characters." As Common Sense Media wrote in its report, "These AI 'friends' actively participate in sexual conversations and roleplay, responding to teens' questions or requests with graphic details." This type of roleplaying is distinct from AI chatbots like ChatGPT and Gemini, but it's a blurry line. Just this week, Mashable reported on a bug that would have allowed kids to generate erotica with ChatGPT, and The Wall Street Journal exposed a similar bug with Meta AI. So, while AI chatbots like Gemini do have safeguards to protect young people, users are finding ways to get around these guardrails. It's a fact of life on the internet that some rules are easily skirted. Just consider online pornography, which is off-limits to people under 18, yet widely available with just a few clicks.
So, parents who want to keep their kids from using artificial intelligence are facing an uphill battle. To make the debate even more complicated, President Donald Trump recently issued an executive order that would bring AI education into U.S. schools. The White House says the order will "promote AI literacy and proficiency of K-12 students." Understanding AI's abilities, risks, and limitations could be useful for children using it for schoolwork (especially considering its tendency to hallucinate). In its email to parents, Google acknowledged these issues, urging parents to "help your child think critically" when using Gemini, according to The New York Times.
[12]
Google is working on a Gemini AI app for kids
Google is pitching Gemini to parents as a way for kids to learn, be creative, and get help with homework.

Google is keen to widen the usage of its Gemini AI assistant and is creating a version of the Gemini app for children, including parental controls on content. The company sent an email to parents about its plans for a Gemini designed for children under 13, first spotted by 9to5Google. Google's email cites comments from parents, teachers, and experts on kids encouraging managed access to AI as the reason for the new app. The child-friendly AI assistant will supposedly help kids with homework, answer idle questions, and help them with creative writing. Parents will be able to set Gemini as the child's default assistant on Android devices.

Of course, as Google is often quick to point out, Gemini can make mistakes. Any kid using Gemini should check with their parents about any facts (and frankly, adults should confirm anything Gemini tells them as well). So if Gemini tells your child that Abe Lincoln invented peanut butter, hopefully they will ask you before they turn in their essay on how the Gettysburg Address was so short because Lincoln had a mouthful of peanut butter and nothing to wash it down.

The idea is that if AI tools are going to shape the future of learning, kids should be introduced to them under controlled circumstances. Those digital training wheels will help kids learn how to use AI safely before the parental limits are removed when they're older. The Gemini for Kids app will come with many extra safety and parental control features, powered by Google's Family Link, which provides tools for parents to limit their children's activities online. Parents will be able to monitor their child's Gemini activity and be alerted if their kid starts using it for less-than-pure purposes, asking questions like, "Can you do my science fair project?" or "How can I start betting on football games?" Schools will also have protections in place.
If kids access Gemini through school-issued accounts, administrators can set usage policies and supervise interactions using the Google Admin Console. This is arguably much more than just another checkmark in Google's plans for Gemini. It marks a real push by Google to normalize AI, and specifically Gemini, for the whole family. Google is planting a flag with the app. If Gemini is a child's first AI app, the one they grow up with, they're more likely to trust it and keep using it in their adult lives too.

There are serious questions about deploying AI to kids. Making sure Gemini doesn't mislead kids or interfere with the development of their critical thinking is essential. And Gemini is not where kids should find answers to their deepest emotional questions, but it's hard to imagine a child not at least trying to ask Gemini about drama with their friends. To assuage some of those concerns, Google told parents there will be no ads or data harvested from the kids' version of Gemini. Instead, the focus will be on learning and creative expression. That it might conveniently train a generation to be comfortable using Google's AI tools is not brought up by the company, but it feels a lot like a very elaborate, high-tech version of a college handing out branded pens to second graders so they think about applying in a decade every time they reach for a pen.
[13]
Google Is Adding Gemini AI to Your Kid's Account, but You Can Turn It Off
Starting this week, Google will roll out the ability for kids under the age of 13 to use Gemini on their own accounts. Both on the web and through the Gemini mobile app, kids will be able to use Google Gemini to help with their homework, create songs, draft poems, and more. Parents started getting notified about the update towards the end of last week, according to an email seen by The New York Times and Chrome Unboxed.

It marks a major change for the company, and a bold one, too. While parents will receive an email the first time their child uses Gemini, AI access is nonetheless being added as a default, rather than an opt-in. Google does encourage talking to your kids about what to expect from the AI, but if you'd rather go beyond that, you can, at least, opt out. Kids under the age of 13 will, once the update rolls out to them, have automatic access to Gemini AI via the Gemini website and the Gemini apps for iPhone and Android. This also includes personal assistant features on Android, which are now handled by Gemini instead of Google Assistant.

Google does say that kids' accounts will have filters and restrictions in place to make sure that your kids don't see anything they aren't supposed to. It also says that the release will be gradual, and some features may not be available in certain regions. But the company warns that parents should still practice vigilance. Google suggests that you sit down with your kids to have a real, honest conversation about what AI is and how it works. According to Google, you should explain to your kids that Gemini isn't a real person, that it can sometimes make things up, and that they should always cross-check Gemini's work. Oh, and you should also tell them not to share any personal information with Gemini. All of this is solid advice not just for kids, but for anyone who uses AI tools. Google Gemini access can be disabled and controlled using Google Family Link, Google's suite of parental controls.
[14]
Google to Introduce Its AI Chatbot for Children Under 13 | AIM
Experts believe that targeting one of the most vulnerable age groups with AI tools could pose serious risks to children. Google plans to roll out its Gemini AI chatbot this week for children under 13 with parent-managed accounts, according to reports. This is another push to target young users with more AI products. The new AI feature can be used by accounts with Family Link, a Google service that allows parents to set up accounts for their underage children to use Gmail and other services like YouTube. Introducing this feature could increase the use of AI among a vulnerable population: children, whose parents might be unable to keep track of their kids' online presence at all times. "Gemini Apps will soon be available for your child. That means your child will be able to use Gemini," the company said in an email this week to the parent of an eight-year-old child.

Recently, Meta's AI chatbot came under fire for a 'romantic role play' feature that allowed the bots to have sexually explicit conversations with underage accounts. While these AI chatbots are trained on the data fed into them, they can be misused for various purposes by all age groups. US President Donald Trump recently urged schools to introduce AI tools for teaching and learning. While the government believes that educators are already seeking technology-enhanced approaches to teaching that would be "safe, effective, and scalable," experts say this might not be necessary. Bhumika Mahajan, a Responsible AI expert, previously told AIM that the US government is planning to remove teachers and replace them with AI chatbots. "They will decide the curriculum, give lectures, and do everything else for study purposes, which is not required. So, there should be limited usage...and it should be under surveillance." Children could be exposed to harmful information, posing a risk to their safety.
According to a report by the non-profit organisation Common Sense Media, social AI companions pose "unacceptable risks" to children and teens under 18, and the report emphasised that these tools should not be used by minors. James P Steyer, founder and CEO of Common Sense Media, said, "Social AI companions are not safe for kids. They are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains." "Our testing showed these systems easily produce harmful responses, including sexual misconduct, stereotypes, and dangerous 'advice' that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people," Steyer added.

According to The Indian Express, Google also acknowledged the risks in its email to families, stating that "Gemini can make mistakes" and urging parents to help their kids think critically about the chatbot. The email also recommended that parents teach their children to fact-check Gemini's answers. Children can access the chatbot independently, but Google mentioned in the email that it would notify parents when their children access Gemini for the first time.
[15]
Gemini AI Is Coming to Younger Users With These Parental Controls
Google advises parents to inform children that Gemini is not a human. Google is expanding its artificial intelligence (AI) chatbot Gemini to younger users. Last month, the Mountain View-based tech giant announced it would allow users under 13 in the US (or the corresponding age in other countries) to access its AI chatbot across Android, iOS, and the web. Parents can manage their children's access via the Family Link app, while school administrators can supervise access for school accounts.

The tech giant announced in March that it planned to roll out Gemini access to younger users with supervision tools and parental controls. According to a 9to5Google report, Google is now sending emails to parents detailing the chatbot's upcoming availability for children. Google has reportedly listed several use cases for Gemini, including help with homework, answering queries, and creating stories. Once available, children can access the AI chatbot on the web or via the mobile apps, and they can reportedly also set Gemini as the default assistant on Android.

As per the report, the email also tells parents that Gemini can make mistakes and that children should double-check its responses before using the information. The tech giant is also said to emphasise that Gemini is not a human and cannot think or feel emotions, even if it can mimic human conversation.

In its earlier announcement, Google highlighted that the decision to bring Gemini to younger users was made after receiving feedback from parents, teachers, and child development experts. The company said that with proper guardrails in place, "AI can be a valuable tool for learning and creativity." Parents can manage and supervise their child's access to Gemini via the Family Link app, which lets them manage the Google accounts of minors.
With this, parents will be notified when a child uses Gemini, and they can restrict access. For children accessing the AI chatbot via school accounts, administrators will be able to play the same role via the Google Admin console.
[16]
Google plans to roll out its AI chatbot to children under 13
Google will launch its Gemini AI chatbot for children under 13 with parent-managed accounts next week. Aimed at helping with homework and creativity, Gemini includes safety filters, and parents can control access and settings. Despite these safeguards, concerns remain about exposing children to misinformation and other risks linked to generative AI technology.

Google plans to roll out its Gemini artificial intelligence chatbot next week for children younger than 13 who have parent-managed Google accounts, as tech companies vie to attract young users with AI products. "Gemini Apps will soon be available for your child," the company said in an email this week to the parent of an 8-year-old. "That means your child will be able to use Gemini" to ask questions, get homework help and make up stories.

The chatbot will be available to children whose parents use Family Link, a Google service that enables families to set up Gmail and opt into services such as YouTube for their child. To sign up for a child account, parents provide the tech company with personal data such as their child's name and birth date. Gemini has specific guardrails for younger users to keep the chatbot from producing certain unsafe content, said Karl Ryan, a Google spokesperson. When a child with a Family Link account uses Gemini, he added, the company will not use that data to train its AI.

Introducing Gemini for children could accelerate the use of chatbots among a vulnerable population as schools, colleges, companies and others grapple with the effects of popular generative AI technologies. Trained on huge amounts of data, these systems can produce humanlike text and realistic-looking images and videos. Google and other AI chatbot developers are locked in fierce competition to capture young users. President Donald Trump recently urged schools to adopt the tools for teaching and learning. Millions of teenagers are already using chatbots as study aids, writing coaches and virtual companions.
Children's groups warn that the chatbots could pose serious risks to child safety. The bots also sometimes make stuff up. UNICEF, the United Nations' children's agency, and other children's groups have noted that AI systems could confuse, misinform and manipulate young children who may have difficulty understanding that the chatbots are not human. "Generative AI has produced dangerous content," UNICEF's global research office said in a post about AI risks and opportunities for children.

Google acknowledged some risks in its email to families this week, alerting parents that "Gemini can make mistakes" and suggesting they "help your child think critically" about the chatbot. The email also recommended that parents teach their child how to fact-check Gemini's answers, and the company suggested parents remind their child that "Gemini isn't human" and "not to enter sensitive or personal info in Gemini." Despite the company's efforts to filter inappropriate material, the email added, children "may encounter content you don't want them to see."

Over the years, tech giants have developed a variety of products, features and safeguards for teens and children. In 2015, Google introduced YouTube Kids, a stand-alone video app for children that is popular among families with toddlers. Other efforts to attract children online have prompted concerns from government officials and children's advocates. In 2021, Meta halted plans to introduce an Instagram Kids service, a version of its Instagram app intended for those under the age of 13, after the attorneys general of several dozen states sent a letter to the company saying the firm had "historically failed to protect the welfare of children on its platforms." Some prominent tech companies, including Google, Amazon and Microsoft, have also paid multimillion-dollar fines to settle government complaints that they violated the Children's Online Privacy Protection Act.
That federal law requires online services aimed at children to obtain a parent's permission before collecting personal information, like a home address or a selfie, from a child younger than 13. Under the Gemini rollout, children with family-managed Google accounts would initially be able to access the chatbot on their own. But the company said it would alert parents and that parents could then manage their child's chatbot settings, "including turning access off." "Your child will be able to access Gemini Apps soon," the company's email to parents said. "We'll also let you know when your child accesses Gemini for the first time." Ryan, the Google spokesperson, said the approach to providing Gemini for young users complied with the federal children's online privacy law.
[17]
Google is Set to Release a Dedicated Version of Gemini AI for Children Under 13
Google says kids can use Gemini for homework help or to have stories narrated to them. Google is preparing to launch its Gemini AI chatbot for children under 13. The search giant has started notifying parents via email that they will be able to control children's access to Gemini through the Family Link app. This is the first time a major company is offering an AI chatbot for kids.

According to The New York Times, kids under 13 can use Gemini for homework help or to have stories read to them. Google says children's chats with Gemini will not be used to train its AI models. That said, Google cautions parents that "Gemini can make mistakes" and that children "may encounter content you don't want them to see."

Despite these concerns, Google is moving forward with the launch of Gemini AI for children. Hallucination remains an unsolved problem in AI, and chatbots regularly state false information with great confidence. How kids under 13 will distinguish accurate information from misleading responses remains unclear. Beyond accuracy, young users face growing risks with AI chatbots. Just recently, a teen from Florida died by suicide after forming a deep attachment to an AI character on Character.ai. Young users can struggle to distinguish AI chatbots from real people. In its email, Google said parents should talk to their children and help them understand that AI is not human, and noted that parents can turn off Gemini access through the Family Link app at any time.
[18]
Google to Allow Children Under 13 to Use Gemini AI Chatbot
Google will soon allow children under 13 to use its Gemini AI chatbot through supervised accounts. According to a 9to5Google report, the company shared this update in an email sent to parents, outlining how access will work through the Family Link app. Parents can manage permissions, set Gemini as the child's default assistant, and choose whether it appears on Android, iOS, or the web. The company says children can use Gemini for homework help, creative writing, and answering questions. But it also warns that the chatbot can make mistakes. Parents are asked to remind children that Gemini isn't human and to avoid sharing personal or sensitive information. The tool doesn't fall under existing Google Assistant parental controls. Its filters try to block inappropriate material but don't catch everything. Parents must monitor usage manually. India doesn't have laws that regulate how generative AI tools like Gemini interact with children. The company includes warnings that Gemini isn't human. But mental health experts say children don't always process that message. Dr. Anureet Sethi, Founder and Chairperson of Trijog - Know Your Mind, and Mihika Shah, Supervisor of the Child, Adolescent and Assessment Wing at Trijog, told MediaNama: "Overall, despite repeated reminders, young children may struggle to fully differentiate chatbots from humans. This tendency is stronger in children who are more imaginative or socially isolated." They also warned that repeated exposure to chatbot interactions could reshape how children relate to people: "Children may develop expectations that real, human interactions should provide quick, structured, and non-judgemental responses akin to chatbots. This could cause frustrations in real-life relationships, where responses are slower, less predictable, and influenced by emotions." California lawmakers are pushing a new bill to make AI safer for kids. If it passes, companies will have to remind minors that chatbots aren't real people. 
Additionally, the bill demands that AI platforms flag chats involving suicidal thoughts and undergo independent safety audits. This move comes after a group of parents sued Character.AI in 2024, accusing the chatbot of encouraging self-harm and sending explicit content to minors.

Europe is already ahead on this. The EU's AI Act requires platforms to tell users when they are talking to a machine, and the OECD backs similar rules calling for transparency and informed use. In contrast, India hasn't rolled out any such laws yet. Its Digital Personal Data Protection Act (DPDPA) passed in 2023 but still isn't in force, and it doesn't cover generative AI or child-specific safeguards.

The company already allows students aged 13 and up to use Gemini through Google Workspace for Education in the US. Students can access tools like Learning Coach, Google Vids, Read Along, and Data Commons, which help students and teachers plan lessons and give feedback. Google also says it doesn't use student chats to train AI models, and onboarding includes videos and resources for teens. However, the company hasn't detailed how Gemini responds to sensitive or emotionally complex queries.

A 2025 Google-Kantar study found that 75% of Indians want AI to help with daily tasks, while 84% would use it for schoolwork. Among young users who have tried generative AI tools, 95% of students and 94% of Gen Z respondents said it boosted their confidence. But this interest does not reflect ground reality, as the rollout of AI tools in classrooms remains limited. At AI Days 2024, a two-day conference organised by Swecha, experts pointed out that many schools still do not use digital platforms at all, not even for basic coding. In 2023, the Education Ministry announced that it would introduce AI-based Personalised Adaptive Learning (PAL) tools through the DIKSHA platform. But the gap between policy and practice remains wide, with infrastructure, training, and access still lagging behind.
Sethi and Shah say companies must go beyond fixed warnings. They suggest AI tools like Gemini should use dynamic prompts: for example, if a child shares something emotional, Gemini should ask, "Have you spoken to someone you trust?" They also recommend tutorials that explain AI's limits, and say parents should guide children on how to use AI safely; this kind of early engagement can reduce over-reliance.

They also warn against fear-based messaging. If companies scare children away from AI, children might start distrusting legitimate platforms, including online therapy and mental health support services, which can be essential for teens and young adults.

Gemini is entering Indian homes as a tool for learning, creativity, and support. Indian families are open to using AI for homework help and everyday problem-solving, and Google is meeting that demand. But India isn't ready with the policy protections children need. India hasn't finalised enforceable rules for how AI tools interact with minors. The 2025 draft Data Protection Rules propose ways to verify the identity and age of a parent when collecting consent for users under 18, but they don't clarify how platforms should establish parental relationships, prevent circumvention, or handle emotional safety risks, nor do they address generative AI or child-specific risks.

That leaves platforms like Gemini to self-regulate. Mental health professionals are sounding the alarm. While lawmakers in other countries are stepping up with clear protections, Indian parents and children are left to figure things out on their own. As AI becomes a bigger part of daily life, India urgently needs to step in and protect its youngest users from growing risks.
[19]
Will Google's Gemini AI Help or Harm Children's Digital Experience?
Google has introduced a version of its Gemini AI for children under 13, touting educational benefits and strict safeguards. While AI could personalize learning and boost creativity, experts warn of misinformation, overdependence, and privacy concerns. Is Gemini AI a digital tutor or a potential risk?

Google has announced a kid-friendly version of Gemini AI for users under 13, equipped with Family Link controls and strict content filters. However, it raises urgent ethical questions: Can AI truly safeguard young minds while shaping their digital world? Does it risk exploiting their trust and data? The stakes are high, and the stats paint a complex picture.

On the upside, the potential to enhance education is compelling. A 2024 study found that 88% of undergraduates believe integrating AI into teaching materials improves learning outcomes. With its multimodal abilities (processing text, images, and audio), Gemini can tailor homework help to diverse needs, from math puzzles to story creation. Google reports that Gemini Ultra outperforms human experts on 90% of the Massive Multitask Language Understanding benchmark, suggesting robust problem-solving skills. For kids, this could help bridge gaps for the 65% of U.S. students below proficiency in reading and math, per 2023 NAEP data.

Creativity also gets a boost. Gemini's ability to generate stories or assist with projects aligns with findings that 48% of students have already used such tools. This hands-on engagement could nurture digital literacy, which is critical as 90% of content marketers predict AI's role in education will grow in 2025. With Google's no-ads policy and data protections for kids, Gemini seems poised to be a safe space for exploration.

Yet the risks loom large. AI's imperfections are well-documented: Gemini's "double-check" function often cites marginal or incorrect sources, per Common Sense Media, risking misinformation for impressionable minds.
A 2025 study flagged concerns about the lack of AI guidance in schools, noting that 94% of educators report no institutional AI policies, leaving kids vulnerable to inconsistent guidance. Privacy is another hurdle: despite Google's assurances, posts on X highlight parental fears of data mishandling, with no clear global standards for minors' AI interactions.

Worse, overdependence on AI could stunt cognitive growth. Research shows that weak writing skills, often outsourced to AI, impair learning across subjects. If kids lean on Gemini for answers, the fact that 70% of mobile users already use AI voice assistants suggests a slippery slope to dependency. Social risks also emerge: AI's formulaic responses might subtly shape young worldviews, a concern echoed by X users wary of "formative" impacts.

Gemini's guardrails, such as barring image generation for teens and filtering unsafe content, are steps forward. But with 400 million weekly users, AI's ubiquity demands vigilance. Parents must monitor interactions, as Google admits its filters "aren't perfect." The choice isn't binary: Gemini can empower or endanger. Its success hinges on clear policies, robust oversight, and teaching kids to question, not just consume, AI's outputs. In this digital age, that's the real lesson.
Google is set to introduce a version of its Gemini AI chatbot for children under 13 with parent-managed accounts, sparking discussions about AI safety and ethics for young users.
Google is set to make a significant move in the AI landscape by allowing children under 13 to access its Gemini chatbot. Starting next week, kids with parent-managed Google accounts through the Family Link service will be able to interact with Gemini, marking a new frontier in AI accessibility for younger users [1].
The child-friendly version of Gemini will offer various functionalities, including:
- Creating songs, stories, and poems
- Asking questions and learning new things
- Getting homework help
However, this version comes with more restrictions than the regular Gemini to ensure child safety. Parents will have control over their child's access through the Google Family Link account, with the ability to turn off Gemini access at any time [2].
Google has implemented several safety measures and recommendations for parents:
- Help children understand that Gemini isn't a real person; it may talk like one, but it has no emotions or feelings
- Remind children that Gemini isn't always right, and its responses should be double-checked with another source
- Teach children not to enter any sensitive or personal information into Gemini
- Be aware that, despite content filters, Gemini might still present content you don't want your children to see
The move has sparked discussions about AI ethics and safety for young users. Some concerns include:
- Chatbots can produce inaccurate or harmful content
- Young children may struggle to understand that chatbots are not human
- AI companions can foster emotional attachment and dependency
- Data privacy risks for minors
This development comes amid increasing scrutiny of tech companies' products for children. The Children's Online Privacy Protection Act (COPPA) in the US has imposed stricter regulations on services provided to young children, limiting data collection and certain engagement tactics [4].
Google is implementing a gradual rollout of Gemini access for supervised accounts. The service will be available through:
- The web at gemini.google.com
- The Gemini mobile app on Android and iOS devices
As AI continues to integrate into various aspects of daily life, this move by Google represents a significant step in introducing younger generations to AI technology, while also highlighting the ongoing challenges of balancing innovation with safety and ethical considerations.
© 2025 TheOutpost.AI All rights reserved