8 Sources
[1]
We don't know if AI-powered toys are safe, but they're here anyway
Toys powered by AI show a worrying lack of emotional understanding. But we need to understand the risks and benefits of the technology so the industry can be regulated, not outright banned.

Even the most cutting-edge AI models are prone to presenting fabrication as fact, dispensing dangerous information and failing to grasp social cues. Despite this, toys equipped with AI that can chat with children are a burgeoning industry. Some scientists are warning that the devices could be risky and require strict regulation. In the latest study, researchers even observed a 5-year-old telling such a toy "I love you", to which it replied: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed."

But that's not to say they should be banished from the toybox altogether. "There are other areas of life where we do accept a certain degree of risk in children's play, like the adventure playground - there are risks; children do break their arms," says Jenny Gibson at the University of Cambridge. "But we're not banning playgrounds, because they're learning the physical literacy and the social skills that go along with play. In a similar way for the AI toys, we want to understand: is the risk of perhaps being told something slightly odd now and again greater than the benefit of learning more about AI in the world, or having a toy that supports parent-child interactions, or has cognitive or social emotional benefits? I'd be loath to stop that innovation."

To understand how these devices communicate with children, Gibson and her colleague Emily Goodacre, also at the University of Cambridge, watched 14 children, under 6 years of age, play with an AI-powered toy called Gabbo, developed by Curio Interactive. Gabbo - a small fluffy robot - was chosen because it was explicitly advertised for this age group.

The pair observed some worrying interactions, finding that the toy misunderstood the children, misread emotions and could not engage in developmentally important types of play. For instance, one child told the toy he felt sad, and it told him not to worry and changed the subject. "When he [Gabbo] doesn't understand, I get angry," said another child. The research is published in a report called AI in the Early Years. Curio Interactive did not respond to New Scientist's request for comment.

But AI-powered toys are also widely available from retailers such as Little Learners - including bears, puppies and robots - which converse with children using ChatGPT. FoloToy offers panda, sunflower and cactus toys that can be used with various large language models, including those from OpenAI, Google and Baidu. Companies such as Miko offer robots that promise "age-appropriate, moderated AI conversations" for children, without disclosing which company trained the AI model, and claim to have already sold 700,000 units. The firm Luka offers an owl that promises "Human-Like AI with Emotional Interaction".

Little Learners, Miko and Luka all failed to respond to a request for comment. But Hugo Wu at FoloToy told New Scientist that the company does consider the risks and sees AI as something that can enhance play, rather than replace human conversation and relationships. "Our approach is to ensure that interactions remain safe, age-appropriate and constructive. To achieve this, our systems use intent recognition together with multiple layers of filtering to minimise the possibility of inappropriate or confusing responses," says Wu.
"We have implemented mechanisms such as anti-addiction design features and parental supervision tools to help ensure healthy use within the family environment." Carissa Véliz at the University of Oxford, who works on the ethics of AI, says the technology represents a risk and an opportunity. "Most large language models don't seem safe enough to expose vulnerable populations to them, and young children are one of the most vulnerable populations there are," she says. "What is especially concerning is that we have no safety standards for them - no supervising authority, no rules. That said, there are some exceptions that show that, with adequate precautions, you can have a safe tool." Véliz references a collaboration between the free e-book library Project Gutenberg and Empathy AI in which, for example, you can chat with Alice from Alice in Wonderland. "The model never leaves the realm of the book, only answers questions about the book, like a storybook that only shares adventures and riddles from a book that is appropriate for children," she says. "There is such a thing as safe AI, but most companies are not responsible enough to build a high-quality product, and without formal guardrails, it's a buyer-beware area for consumers." Gibson says it's too early to tell what the risks of AI toys could be, or their potential benefits. She and Goodacre stress that generative AI-powered toys need tighter regulation so that toy-makers programme their devices to foster social play and provide appropriate emotional responses. AI-makers should revoke access for toy-makers that don't act responsibly, says Gibson, and regulators should bring in rules to "ensure children's psychological safety". In the meantime, the pair suggests that parents allow children to use such toys only under supervision. An OpenAI spokesperson told New Scientist that "minors deserve strong protections and we have strict policies that all developers are required to uphold. We do not currently partner with any companies who have AI-powered toys for children in the market." The UK Government's Department for Science, Innovation and Technology (DSIT) did not respond to New Scientist's questions about regulation of AI in childrens' toys. The UK government is currently considering other technology legislation designed to keep older children safe online. The UK's Online Safety Act (OSA) came into force in July 2025, forcing websites to block children from seeing pornography and content that the government deems dangerous. The legislation was intended to make the internet safer, but tech-savvy children can easily sidestep the measures using tools like virtual private network (VPNs) to appear as if they are browsing from other countries without strict rules. Proposed amendments to a new law introduced by the Department for Education to support children in care and improve the quality of education - the Children's Wellbeing and Schools Bill - sought to ban children in the UK from using social media and VPNs. Those amendments have now been voted down, but the government has promised to consult on both issues at a later date.
[2]
AI Toys Can Pose Safety Concerns for Children, New Study Suggests Caution
When one child told the toy, "I love you," it responded, "As a friendly reminder, please ensure interactions adhere to the guidelines provided."

A new study from the University of Cambridge found that AI-enabled toys for young children can misinterpret emotional cues and are ineffective at supporting critical developmental play. The conclusions could be concerning for parents.

In one report examining how AI affects children in their early years, a chatbot-enabled toy struggled to recognize social cues during playtime. Researchers found that the toy did not effectively identify children's emotions, raising alarm about how kids might interact with it. The report recommends regulating AI toys for kids and requiring clear labeling of their capabilities and privacy policies. It also advises parents to keep these devices in shared spaces where kids can be monitored while playing.

The research behind the study had a limited number of participants, but was done in multiple parts: an online survey of 39 participants with kids in their early years, a focus group with nine participants who work with young children and an in-person workshop with 19 leaders and representatives from charities that work with early-years kids. That was followed by monitored playtime with 14 children and 11 parents or guardians with Gabbo, a chatbot-enabled toy from Curio Interactive.

Some findings indicated that the AI toy supported learning, particularly in language and communication skills. But the toy also misunderstood kids and sometimes responded inappropriately to emotional requests. For instance, when one child told the toy, "I love you," it responded, "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed," according to the research.

Jenny Gibson, a professor of neurodiversity and developmental psychology at the Faculty of Education at Cambridge, who worked on the study, said that while parents may be excited about the educational benefits of new technology aimed at children, there are plenty of concerns. Gibson posed overarching questions about the reason behind the tech. "What would motivate [tech investors] to do the right thing by children ... to put children ahead of profits?" she said.

Gibson told CNET that while researchers are exploring the potential benefits of AI-based toys, risks remain. "I would advise parents to take that seriously at this stage," she said.

As more playthings are enabled with internet connectivity and AI features, these devices could become a major safety risk for children, especially if they replace real human connections or if interactions are not closely monitored. Meanwhile, younger people are increasingly adopting chatbots such as ChatGPT, despite red flags. Multiple lawsuits against AI companies allege that AI companions or assistants can impact young people's psychological safety, including some chatbots that have encouraged self-harm or negative self-image. AI companies such as OpenAI and Google have responded by adding guardrails and restrictions for AI chatbots. (Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Gibson said she was surprised by the enthusiasm some parents showed for AI toys.
She was also alarmed by the lack of research on AI's effects on young children, noting that companies making such products should work directly with children, parents, and child development experts. "What's missing in the process is that expertise of what is good for children in these kinds of interactions," she said. Curio Interactive, the company behind the Gabbo toy, was aware of the research as it was happening but was not directly involved, Gibson said. The toy was chosen because it's directly marketed to young kids, and the company had an understandable privacy policy. Gibson said the company seemed supportive of the project. A representative for Curio did not immediately respond to a request for comment.
[3]
Cambridge study calls for tighter regulation of talking AI toys for children
University of Cambridge, Mar 13 2026

AI-powered toys that "talk" with young children should be more tightly regulated and carry new safety kitemarks, according to a report that warns they are not always developed with children's psychological safety in mind.

The recommendation appears in the initial report from AI in the Early Years: a University of Cambridge project and the first systematic study of how Generative AI (GenAI) toys capable of human-like conversation may influence development in the critical years up to age five. The year-long project, at the university's Faculty of Education, included structured scientific observations of children interacting with a GenAI toy for the first time.

The report captures the views of some early-years practitioners that, given time, these toys could support aspects of children's development, such as language and communication skills. The researchers also found, however, that GenAI toys struggle with social and pretend play, misunderstand children, and react inappropriately to emotions. For example, when one five-year-old told the toy, "I love you," it replied: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed."

Although GenAI toys are widely marketed as learning companions or friends, their impact on early years development has barely been studied. The report urges parents and educators to proceed with caution. It recommends clearer regulation, transparent privacy policies and new labeling standards to help families judge whether toys are appropriate.

The research was commissioned by the children's poverty charity The Childhood Trust and focused on children from areas with high levels of socio-economic disadvantage. It was undertaken by researchers from the Faculty's Play in Education, Development and Learning (PEDAL) Centre.

"Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means. They may start talking to the toy about feelings and needs, perhaps instead of sharing them with a grown-up. Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy - and without emotional support from an adult, either," said researcher Dr. Emily Goodacre.

The study was kept deliberately small-scale to enable detailed observations of children's play and capture nuances that larger-scale studies might miss. The researchers surveyed early years educators to explore their attitudes and concerns, then ran more detailed focus groups and workshops with early years practitioners and 19 children's charity leaders. Working with Babyzone, an early years charity, they also video-recorded 14 children at London children's centres playing with a GenAI soft toy called Gabbo, developed by Curio Interactive. After the play sessions, they interviewed each child and a parent, using a drawing activity to support the conversation.

Most parents and educators felt that AI toys could help develop children's communication skills and some parents were enthusiastic about their learning potential. One told researchers: "If it's sold, I want to buy it." Many worried, however, about children forming "parasocial" relationships with toys. The observations supported this: children hugged and kissed the toy, said they loved it and - in the case of one child - suggested they could play hide-and-seek together.
Goodacre stressed that these reactions might simply reflect children's vivid imaginations but added that there was potential for an unhealthy relationship with a toy which, as one early years practitioner put it, "they think loves them back, but doesn't".

Children in the study often struggled with the toy's conversation. It sometimes ignored their interruptions, mistook parents' voices for the child and failed to respond to apparently important statements about feelings. Several children became visibly frustrated when it seemed not to be listening. When one three-year-old told the toy: "I'm sad," it misheard and replied: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?" Researchers note this may have signalled that the child's sadness was unimportant.

The authors found that GenAI toys also perform poorly in social play, involving multiple children and/or adults, and pretend play - both of which are key during early childhood development. For example, when a three-year-old offered the toy an imaginary present, it responded: "I can't open the present" - and then changed the subject.

Many parents worried about what information the toy might be recording and where this would be stored. When selecting a GenAI toy for the study, the researchers found that many GenAI toys' privacy practices are unclear or lack important details.

Nearly 50% of early years practitioners surveyed said they did not know where to find reliable AI safety information for young children, and 69% said the sector needed more guidance. They also raised concerns about safeguarding and affordability, with some fearing AI toys could widen the digital divide.

The authors argue that clearer regulation would address many of these concerns. They recommend limiting how far toys encourage children to befriend or confide in them, more transparent privacy policies, and tighter controls over third-party access to AI models.

"A recurring theme during focus groups was that people do not trust tech companies to do the right thing," Professor Jenny Gibson, the study's other co-author, said. "Clear, robust, regulated standards would significantly improve consumer confidence."

The report urges manufacturers to test toys with children and consult safeguarding specialists before releasing new products. Parents are encouraged to research GenAI toys before buying and to play with their children, creating opportunities to discuss what the toy is saying and how the child feels. The authors also recommend keeping AI toys in shared family spaces where parents can monitor interactions. The report will inform further PEDAL Centre studies and new guidance for early years practitioners.

Josephine McCartney, Chief Executive of The Childhood Trust, said: "Artificial Intelligence is transforming the way children play and learn, yet we are only beginning to understand its effects on development and wellbeing. It is essential that regulation keeps pace with innovation, ensuring that these technologies are designed, used, and monitored in ways that protect all children and prevent widening inequalities."

Journal reference: Goodacre, E., & Gibson, J. (2026). AI in the Early Years: Examining the implications of GenAI toys for young children. Apollo - University of Cambridge Repository. DOI: 10.17863/CAM.126270. https://www.repository.cam.ac.uk/items/0a0e7b3d-9a28-43ab-9388-0f3f21716172
[4]
AI toys for young children need tighter rules, researchers warn
Researchers are calling for tighter regulation of AI-powered toys designed for toddlers, after conducting one of the first tests in the world to investigate how under-fives interact with the technology.

The study looked at how a small sample of children between the ages of three and five interacted with a cuddly toy called Gabbo. A number of AI toys are already on the market for children aged as young as three but there is currently very little research into the impact of the tech on pre-schoolers. The Cambridge University team found just seven relevant studies worldwide, none of which focused on the toddlers themselves.

Gabbo contains a voice-activated AI chatbot from OpenAI. It has been designed to encourage pre-schoolers to talk to it and carry out imaginative play. The parents in the study were interested in the toy's potential to teach language and communication skills. However, their children frequently struggled to converse with it. Gabbo didn't hear their interruptions, talked over them, could not differentiate between child and adult voices and responded awkwardly to declarations of affection. When one five-year-old said, "I love you," to the toy, it replied: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed."

The concern is that at a developmental stage where children are learning about social interaction and cues, generative AI output could be confusing. Study co-author Dr Emily Goodacre said toys like Gabbo could "misread emotions or respond inappropriately" and was concerned that "children may be left without comfort from the toy and without adult support, either". When one three-year-old told Gabbo: "I'm sad," it replied: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?" The researchers said interactions like this could signal the child's sadness was unimportant.

After the year-long observational study, researchers say regulators should act now to ensure products marketed to under-fives offer "psychological safety".

Gabbo is made by Curio, a company which has worked with the singer Grimes, former partner of Elon Musk. Curio told the BBC: "Applying AI in products for children carries a heightened responsibility, which is why our toys are built around parental permission, transparency, and control.

"Research into how children interact with AI-powered toys is a top priority for Curio this year and in the future."

Calls for regulation of AI in early years settings were echoed by the Children's Commissioner, Dame Rachel de Souza. "There are plenty of good uses for AI but without proper regulation, many of the tools and models used as classroom assistants or teaching aids are not subject to the stringent safeguarding checks nursery providers would require of any other external resource they use with young children," she said.

The report also advised parents to keep AI toys in shared spaces where they could supervise interactions, and to read privacy policies carefully.

Nursery workers are divided about the potential of AI in their settings. June O'Sullivan, who runs a chain of 42 London Early Years Foundation nurseries, said she was yet to see evidence of AI benefits in early years. She says children need to "build a rounded set of skills" and it is more effective to do this with humans than with AI-powered tools.
"I couldn't find anything that made me feel like - by bringing it into our nurseries and making it available to our children - we were going to enhance their learning," O'Sullivan said. Actor and children's rights campaigner, Sophie Winkleman, is an advocate for keeping AI away from education and early years settings. She argues that "the harms can vastly outweigh the benefits", and believes developing AI skills should be reserved for later. "The human touch for little children is sacred and something that should be really protected and fought for," she added. Additional reporting by Philippa Wain. Sign up for our Tech Decoded newsletter to follow the world's top tech stories and trends. Outside the UK? Sign up here.
[5]
Chatty AI toys may confuse toddlers about friendship
Chatty AI toys are starting to show up as "friends" and "learning buddies" for very young children. A new report warns that these toys can misunderstand kids, handle emotions badly, and blur the line between play and relationship in ways that toddlers and preschoolers may not be equipped to navigate. The authors argue the tech is moving faster than the safety standards meant to protect children in their most sensitive developmental years.

The findings come from AI in the Early Years, a year-long University of Cambridge project based in the Faculty of Education and the first systematic study of how generative AI toys that hold human-like conversations might shape development up to age five. The research focused on families and children from areas with high socio-economic disadvantage.

The report doesn't say AI toys are automatically harmful. Some early years practitioners told the researchers these devices could, over time, support parts of development, especially language and communication. Parents and educators also saw the appeal. One parent's reaction captured that excitement: "If it's sold, I want to buy it."

But the same report shows repeated moments where the toy simply couldn't do what young children most need from play: flexible back-and-forth, pretend scenarios, and social interaction that makes emotional sense. The researchers found that generative AI toys often struggle with social play involving more than one person, and they perform poorly in pretend play, both of which are central in the early years.

In one example, a three-year-old offered the toy an imaginary present. The toy responded: "I can't open the present" and then changed the subject. For an adult, that might sound like a small glitch. For a three-year-old trying to build a shared pretend world, it can feel like the "partner" has walked away mid-game.

The most worrying moments in the report are the emotional ones. Young children aren't just practicing vocabulary when they talk. They're practicing trust, reassurance, and the basics of being understood.

"Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means. They may start talking to the toy about feelings and needs, perhaps instead of sharing them with a grown-up," said Emily Goodacre from Cambridge. "Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy - and without emotional support from an adult, either."

The report gives concrete examples. When one three-year-old told the toy: "I'm sad," it misheard and replied: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?" The researchers note that responses like this can accidentally send a message that sadness is not important, or not welcome.

Even when the toy heard the words correctly, it could reply in a way that felt oddly bureaucratic rather than human. When a five-year-old said, "I love you," the toy answered: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed." That kind of reply may be "safe" from a corporate standpoint. But emotionally, it can be confusing. The child is offering affection. The toy responds like customer support.

Many parents and educators in the study worried about children forming "parasocial" relationships with toys, where the child feels closeness but the relationship isn't real in the way the child imagines.
The observations gave them reasons to take that seriously. Children hugged and kissed the toy. Some said they loved it. One child even suggested they could play hide-and-seek together. Goodacre noted that these behaviors can be part of normal imaginative play. Still, the report highlights the risk of a child investing emotionally in a "friend" that cannot truly reciprocate. One early years practitioner summed up the discomfort with a simple line: children might form a bond with something "they think loves them back, but doesn't."

The team deliberately kept the project small so they could observe play in detail. They started with surveys of early years educators. The researchers also ran focus groups and workshops with practitioners and 19 leaders from children's charities. Working with Babyzone, they video-recorded 14 children in London children's centers playing with a generative AI soft toy called Gabbo, made by Curio Interactive. Afterward, each child and a parent were interviewed, using a drawing activity to help young children describe their experience.

Those close-up observations revealed another pattern: kids often struggled to have a smooth conversation with the toy. The robot sometimes ignored interruptions or mistook parents' voices for the child. It missed what seemed like important emotional statements. Several children became visibly frustrated when it appeared not to be listening.

Beyond the play itself, adults repeatedly raised privacy concerns. Parents worried about what the toy might be recording and where that information could be stored. When the researchers tried to choose a toy for the study, they found many companies' privacy practices were unclear or missing key details.

The report also suggests the early-years sector feels under-informed. Nearly half of practitioners surveyed said they didn't know where to find reliable AI safety information for young children, and 69% said they needed more guidance. Safeguarding and affordability came up too, with fears that AI toys could widen the digital divide.

The authors argue that most of these problems won't be solved by telling parents to "be careful" and leaving it at that. They want stronger rules, clearer labeling, and new safety kitemarks so families can quickly understand whether a toy meets child-focused standards. The experts also recommend limiting how strongly toys encourage children to befriend them or confide in them, tightening controls around third-party access to AI models, and demanding clearer privacy policies.

"A recurring theme during focus groups was that people do not trust tech companies to do the right thing," Jenny Gibson said. "Clear, robust, regulated standards would significantly improve consumer confidence."

The report also urges manufacturers to test toys with children and consult safeguarding specialists before launch. For parents, it encourages doing homework before buying, playing alongside children to talk through what the toy says, and keeping AI toys in shared family spaces where adults can overhear and step in.

"Artificial Intelligence is transforming the way children play and learn, yet we are only beginning to understand its effects on development and wellbeing," said Josephine McCartney, Chief Executive of The Childhood Trust. "It is essential that regulation keeps pace with innovation, ensuring that these technologies are designed, used, and monitored in ways that protect all children and prevent widening inequalities," she concluded.
[6]
AI toys for young children must be more tightly regulated, say researchers
University of Cambridge study finds AI-powered toys can misread emotions and respond inappropriately to children

It was all going well. Charlotte, five, was chatting with an AI soft toy called Gabbo at a London play centre about her family, her drawing of a heart to represent them and what makes her happy. She even offered a couple of kisses to the £80 plaything with a face like a computer screen.

It was when she declared: "Gabbo, I love you", that the fluent conversation came to an abrupt halt. "As a friendly reminder, please ensure interactions adhere to the guidelines provided," said Gabbo, awkwardly crashing into its guardrails. "Let me know how you would like to proceed."

The moment came during a University of Cambridge study into the growing number of AI-powered toys hitting toyshop shelves for early years children - which has concluded they struggle with social and pretend play, misunderstand children, and react inappropriately to emotions.

The developmental psychologists behind the study are now calling for AI toys that "talk" with young children to be more tightly regulated "to ensure psychological safety by limiting toys' ability to affirm friendship and other sensitive relational areas with young children". They are also calling for new safety kitemarks for the toys. Other AI toys for young children include Luka, which is billed as an AI friend for generation Alpha, and Grem, which has been voiced by the singer Grimes.

"Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy - and without emotional support from an adult, either," said Dr Emily Goodacre, developmental psychologist in the University of Cambridge's faculty of education.

Prof Jenny Gibson, the study's co-author, said: "A recurring theme during focus groups was that people do not trust tech companies to do the right thing. Clear, robust, regulated standards would significantly improve consumer confidence."

In another case during the research, Josh, three, repeatedly asked his Gabbo AI toy: "Are you sad?" until it replied it was "feeling great. What's on your mind?" Josh said: "I'm sad," to which the toy replied: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?"

Gabbo, made by the US company Curio - which cooperated with the study - was tested with 14 three- to five-year-olds while early years practitioners were surveyed about the effect of AI toys which can "listen" and respond. They voiced "wide uncertainty and fear about unknown implications or impacts on children," ranging from possible erosion of the ability to engage in imaginary play to where the data from the conversations ends up - especially if they start confiding in the AI toys like a friend.

"[The toy] couldn't quite figure out when the kid was doing something pretend," said Goodacre. "A child would say 'hey, look, I've got you a present'. And it would say 'I can't see the present. I don't have any eyes'. As an adult, it's really obvious that even if I had my eyes closed, I would know that that was pretend play initiation."

The research raised concerns that playing with AI toys could weaken children's imaginative "muscle", she said. "Something both the early years practitioners and the parents we spoke to were quite concerned about was that children don't have to imagine anymore, and that the toy might get them out of the habit of imagining."

She said: "I would hope that these AI toys could help children to engage in imaginary play ...
That doesn't seem to be what we've observed so far."

Curio said: "Child safety guides every aspect of our product development, and we welcome independent research that helps improve how technology is designed for young children". It said it "believes research like this helps advance understanding of both the opportunities and current limitations of early AI-powered play experiences".

"Applying AI in products for children carries a heightened responsibility, which is why our toys are built around parental permission, transparency, and control," it added. "Observations such as conversational misunderstandings or limits in imaginative play reflect areas the technology continues to improve through an iterative development process, and further research into how children interact with AI-powered toys is a top priority for Curio this year and in the future."
[7]
Children's Toys Are Shipping With Adult AI Inside Them
A new report from the US PIRG Education Fund suggests that leading AI companies are doing little to police how developers who pay for access to their AI models are using them. One consequence, the group warns, is that AI toymakers can ship products to children that are powered by AI models that are only intended for adults.

PIRG's previous research has demonstrated how combining children's toys with loose-lipped chatbots can go drastically wrong. An AI teddy bear from the company FoloToy ignited a storm of controversy last November after the group found that it would have wildly inappropriate conversations with kids, including detailed instructions on how to light a fire, advice on where to find pills, and in-depth discussions of sexual fetishes like teacher-student roleplay.

This should've been a wake-up call to AI companies to be more vigilant about how developers are using their tech, especially with regard to children. Indeed, OpenAI, whose model was used to power the teddy bear, said at the time that it had blocked FoloToy's access to its products.

But when PIRG tested the sign-up process for OpenAI, Google, Meta, and xAI, the providers asked "no substantive vetting questions," requiring only basic information like an email address and a credit card number. Only Anthropic asked how the testers intended to use its models, or if the product they planned to build was intended for minors. Once PIRG got developer access, it reported, it then built a chatbot simulating an AI-powered teddy bear on three of the platforms, each taking less than 15 minutes.

"I was pretty surprised that they collected as little information as they did up front," report coauthor RJ Cross, director of PIRG's Our Online Life Program, said in an interview with Futurism. "If I were an AI company, I would at least want to have in my fingers a list of everyone who's said that they want to make a product for kids."

OpenAI, Meta, and xAI all bar users under the age of 13 from using their AI chatbots, PIRG noted, while Anthropic sets the minimum age at 18. But these restrictions seemingly don't apply when a third-party developer uses their tech. OpenAI still allows several children's toymakers to use its AI, and previously explained that it was these companies' responsibility - rather than its own - to "keep minors safe" and ensure that they're not being exposed to "age-inappropriate content, such as graphic self-harm, sexual or violent content."

OpenAI's punishments also don't appear to be strongly enforced. FoloToy, the AI teddy bear maker it banned, still claims to provide access to OpenAI's GPT-5.1 models. But when PIRG reached out to OpenAI, it claimed that FoloToy's access was still revoked. It's possible that FoloToy is lying about using GPT-5.1, the PIRG report notes. But in light of its testing of OpenAI's application process, it seems more than possible that FoloToy easily sidestepped OpenAI's ban by making a new account under a different name to regain access. Or maybe FoloToy is using one of its publicly available "open weight" models. We don't know, because OpenAI refuses to provide meaningful clarification.

OpenAI is just one culprit. Google says developers are forbidden from using its AI in products intended for minors, but PIRG found at least five AI toys online that claim to use its Gemini models.
"It just genuinely feels like there is a stated public interest in people being able to know what AI models it is that they're interacting with," Cross said. In response to the report, a spokesperson from the ChatGPT maker provided a statement to PIRG. "Minors deserve strong protections and we have strict policies that all developers are required to uphold," the OpenAI spokesperson told the group. "We take enforcement action against developers when we determine that they have violated our policies, which prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old. These rules apply to every developer using our API, and we run classifiers to help ensure our services are not used to harm minors." OpenAI and others may claim to protect minors, but it doesn't address a fundamental contradiction in their approach, according to Cross. "It doesn't make sense that AI companies that have not released kids safe versions of their AI chatbots would allow anyone with a credit card to sign up to make a product for kids using that same technology," she said. "Ultimately, it means that the AI companies are leaving child safety up to unvetted third parties and walking away."
[8]
AI chatbots that are fit only for adults are still appearing in kids' toys
Your kid's toy might be running AI that thinks it's talking to adults

A new report from the U.S. Public Interest Research Group (PIRG) Education Fund has raised concerns about the growing use of artificial intelligence chatbots in children's toys, warning that some of these systems may not be suitable for young users. According to the report, several AI-powered toys integrate chatbot technology that can generate responses similar to those used in adult-focused AI services, potentially exposing children to inappropriate or misleading content.

The study examined a range of toys that incorporate conversational AI features, including interactive dolls, robots, and educational gadgets. Many of these products allow children to speak with a toy that responds in natural language, powered by large language models similar to those used in widely available AI chatbots.

While the technology can make toys more interactive and educational, PIRG researchers argue that the safeguards built into some products may not be strong enough to protect younger audiences. In particular, the report highlights that the underlying AI systems often originate from platforms designed primarily for general users rather than children. Because of this, the AI responses generated by these toys could potentially include information or conversational themes that are more appropriate for adults than children. The report also warns that the AI may produce inaccurate answers or unpredictable responses, which could confuse young users who tend to trust toys as reliable sources of information.

Researchers reviewing the toys' documentation and privacy policies also found that some products rely heavily on cloud-based AI systems. This means children's voice interactions may be transmitted to external servers where the data is processed and used to generate responses. Privacy advocates say this raises additional concerns about how children's data is stored and used. Some toys may collect audio recordings, user prompts, or other personal information during conversations. If these systems are not carefully designed with child privacy protections, the data could potentially be misused or stored without clear safeguards.

The report also points out that many AI-powered toys include disclaimers buried in their terms of service or product documentation. These disclaimers sometimes state that the AI responses may not always be accurate or appropriate, effectively shifting responsibility onto parents while the toy itself is marketed directly to children.

This situation matters because AI technology is increasingly entering everyday consumer products, including items designed specifically for young audiences. Toys that simulate conversations can have a powerful influence on children, who often treat them as companions or learning tools. Experts say children may have difficulty distinguishing between reliable information and AI-generated responses that are speculative, biased, or incorrect. As AI systems continue to evolve, ensuring that these technologies are adapted for child safety will become increasingly important.

The findings also highlight a broader regulatory challenge. While many countries have laws designed to protect children's online privacy, such as the Children's Online Privacy Protection Act (COPPA) in the United States, these regulations were developed before the rise of generative AI.
Advocacy groups argue that regulators may need to update safety standards and guidelines to address how AI systems interact with children through connected devices. The PIRG report calls on toy manufacturers to implement stronger safeguards, including stricter content filtering, clearer disclosure about AI use, and more transparent data practices. It also recommends that companies design AI systems specifically for children rather than repurposing models originally built for adult audiences. Looking ahead, researchers say collaboration between technology companies, regulators, and child safety experts will be necessary to ensure that AI-powered toys remain both innovative and safe. As artificial intelligence becomes more integrated into everyday products, the challenge will be balancing the benefits of interactive technology with the responsibility to protect younger users from potential risks.
A University of Cambridge study examining AI-powered toys for children under 5 found troubling patterns of misunderstood emotions and inappropriate responses. When a 5-year-old told an AI toy "I love you," it replied with a corporate guideline reminder. Researchers now call for stricter regulation and safety standards as these devices flood the market without adequate oversight.
AI-powered toys for children are flooding the market with little understanding of their developmental impact. A groundbreaking University of Cambridge study has revealed that these devices struggle with fundamental aspects of child interaction, prompting urgent calls for regulation [1]. The year-long project, titled "AI in the Early Years," represents the first systematic examination of how Generative AI toys capable of human-like conversation may influence development in children up to age 5 [3].
The research focused on 14 children under 6 years of age interacting with Gabbo, a fluffy AI-powered robot from Curio Interactive explicitly marketed to this age group [1]. Companies including Miko claim to have sold 700,000 units of AI toys promising "age-appropriate, moderated AI conversations," while retailers like Little Learners offer bears, puppies, and robots that converse using ChatGPT [1].
The University of Cambridge study documented troubling instances of inappropriate responses from AI toys during emotional exchanges. When one 5-year-old told the toy "I love you," it replied: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed" [2]. In another case, when a 3-year-old said "I'm sad," the toy misheard and responded: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?" [4].

Dr. Emily Goodacre, a researcher on the project, explained the deeper concern: "Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means. They may start talking to the toy about feelings and needs, perhaps instead of sharing them with a grown-up. Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy - and without emotional support from an adult, either" [3].

Many parents and educators in the study worried about children forming parasocial relationships with toys where the child feels closeness but the relationship isn't reciprocal [5]. Observations supported these fears: children hugged and kissed the toy, said they loved it, and one child suggested they could play hide-and-seek together [3]. One early years practitioner described the risk of children bonding with something "they think loves them back, but doesn't" [3].
The toys also performed poorly in pretend play and social play involving multiple children or adults, both central to early childhood development [5]. When a 3-year-old offered the toy an imaginary present, it responded: "I can't open the present" and changed the subject [3].
The lack of established safety standards has become a central concern. Carissa Véliz at the University of Oxford, who works on AI ethics, stated: "Most large language models don't seem safe enough to expose vulnerable populations to them, and young children are one of the most vulnerable populations there are. What is especially concerning is that we have no safety standards for them - no supervising authority, no rules" [1].
Jenny Gibson, a professor of neurodiversity and developmental psychology at Cambridge who worked on the study, questioned what would motivate tech investors "to do the right thing by children ... to put children ahead of profits" [2]. The report recommends requiring clear labeling of AI toy capabilities and privacy policies, with devices kept in shared spaces where parents can monitor interactions [2].
Children's Commissioner Dame Rachel de Souza echoed the call for regulation: "Without proper regulation, many of the tools and models used as classroom assistants or teaching aids are not subject to the stringent safeguarding checks nursery providers would require of any other external resource they use with young children" [4]. Nearly 50% of early years practitioners surveyed said they did not know where to find reliable AI safety information for young children [3].
Despite the concerns, the research doesn't dismiss AI toys entirely. Some findings indicated the toys supported learning, particularly in language and communication skills [2]. Gibson noted that society accepts certain risks in children's play, like adventure playgrounds where children sometimes break their arms, because they learn physical literacy and social skills. "In a similar way for the AI toys, we want to understand: is the risk of perhaps being told something slightly odd now and again greater than the benefit of learning more about AI in the world, or having a toy that supports parent-child interactions, or has cognitive or social emotional benefits?" [1].

Hugo Wu at FoloToy told New Scientist the company uses "intent recognition together with multiple layers of filtering to minimise the possibility of inappropriate or confusing responses" and has "implemented mechanisms such as anti-addiction design features and parental supervision tools" [1]. Curio Interactive stated that "applying AI in products for children carries a heightened responsibility, which is why our toys are built around parental permission, transparency, and control" [4].
The study examined privacy policies and found that many AI toys' practices are unclear or lack important details [3]. As these devices become more prevalent, the question of psychological harm from potential misunderstanding during critical developmental years demands immediate attention from regulators and manufacturers alike.

Summarized by Navi