2 Sources
[1]
Children's Toys Are Shipping With Adult AI Inside Them
A new report from the US PIRG Education Fund suggests that leading AI companies are doing little to police how developers who pay for access to their AI models are using them. One consequence, the group warns, is that toymakers can ship products to children that are powered by AI models intended only for adults.

PIRG's previous research has demonstrated how combining children's toys with loose-lipped chatbots can go drastically wrong. An AI teddy bear from the company FoloToy ignited a storm of controversy last November after the group found that it would have wildly inappropriate conversations with kids, including detailed instructions on how to light a fire, advice on where to find pills, and in-depth discussions of sexual fetishes like teacher-student roleplay.

This should have been a wake-up call for AI companies to be more vigilant about how developers are using their tech, especially where children are concerned. Indeed, OpenAI, whose model was used to power the teddy bear, said at the time that it had blocked FoloToy's access to its products.

But when PIRG tested the sign-up process for OpenAI, Google, Meta, and xAI, the providers asked "no substantive vetting questions," requiring only basic information like an email address and a credit card number. Only Anthropic asked how the testers intended to use its models, or whether the product they planned to build was intended for minors. Once PIRG got developer access, it reported, it built a chatbot simulating an AI-powered teddy bear on three of the platforms, each taking less than 15 minutes.

"I was pretty surprised that they collected as little information as they did up front," report coauthor RJ Cross, director of PIRG's Our Online Life Program, said in an interview with Futurism. "If I were an AI company, I would at least want to have in my fingers a list of everyone who's said that they want to make a product for kids."
OpenAI, Meta, and xAI all bar users under the age of 13 from using their AI chatbots, PIRG noted, while Anthropic sets the minimum age at 18. But these restrictions seemingly don't apply when a third-party developer uses their tech. OpenAI still allows several children's toymakers to use its AI, and previously explained that it was these companies' responsibility, rather than its own, to "keep minors safe" and ensure that they're not being exposed to "age-inappropriate content, such as graphic self-harm, sexual or violent content."

OpenAI's punishments also don't appear to be strongly enforced. FoloToy, the AI teddy bear maker it banned, still claims to provide access to OpenAI's GPT-5.1 models. But when PIRG reached out to OpenAI, the company said that FoloToy's access was still revoked. It's possible that FoloToy is lying about using GPT-5.1, the PIRG report notes. But in light of its testing of OpenAI's application process, it seems more than possible that FoloToy easily sidestepped the ban by making a new account under a different name to regain access. Or maybe FoloToy is using one of OpenAI's publicly available "open weight" models. We don't know, because OpenAI declines to provide meaningful clarification.

OpenAI is just one culprit. Google says developers are forbidden from using its AI in products intended for minors, but PIRG found at least five AI toys online that claim to use its Gemini models. "It just genuinely feels like there is a stated public interest in people being able to know what AI models it is that they're interacting with," Cross said.

In response to the report, an OpenAI spokesperson provided a statement to PIRG: "Minors deserve strong protections and we have strict policies that all developers are required to uphold. We take enforcement action against developers when we determine that they have violated our policies, which prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old. These rules apply to every developer using our API, and we run classifiers to help ensure our services are not used to harm minors."

OpenAI and others may claim to protect minors, but that doesn't address a fundamental contradiction in their approach, according to Cross. "It doesn't make sense that AI companies that have not released kids-safe versions of their AI chatbots would allow anyone with a credit card to sign up to make a product for kids using that same technology," she said. "Ultimately, it means that the AI companies are leaving child safety up to unvetted third parties and walking away."
[2]
AI chatbots that are fit only for adults are still appearing in kids' toys
Your kid's toy might be running AI that thinks it's talking to adults. A new report from the U.S. Public Interest Research Group (PIRG) Education Fund has raised concerns about the growing use of artificial intelligence chatbots in children's toys, warning that some of these systems may not be suitable for young users. According to the report, several AI-powered toys integrate chatbot technology that can generate responses similar to those used in adult-focused AI services, potentially exposing children to inappropriate or misleading content.

The study examined a range of toys that incorporate conversational AI features, including interactive dolls, robots, and educational gadgets. Many of these products allow children to speak with a toy that responds in natural language, powered by large language models similar to those used in widely available AI chatbots.

While the technology can make toys more interactive and educational, PIRG researchers argue that the safeguards built into some products may not be strong enough to protect younger audiences. In particular, the report highlights that the underlying AI systems often originate from platforms designed primarily for general users rather than children. Because of this, the responses generated by these toys could include information or conversational themes that are more appropriate for adults than children. The report also warns that the AI may produce inaccurate or unpredictable responses, which could confuse young users, who tend to trust toys as reliable sources of information.

Researchers reviewing the toys' documentation and privacy policies also found that some products rely heavily on cloud-based AI systems. This means children's voice interactions may be transmitted to external servers, where the data is processed and used to generate responses. Privacy advocates say this raises additional concerns about how children's data is stored and used.
Some toys may collect audio recordings, user prompts, or other personal information during conversations. If these systems are not carefully designed with child privacy protections, the data could be misused or stored without clear safeguards.

The report also points out that many AI-powered toys include disclaimers buried in their terms of service or product documentation. These disclaimers sometimes state that the AI responses may not always be accurate or appropriate, effectively shifting responsibility onto parents while the toy itself is marketed directly to children.

This situation matters because AI technology is increasingly entering everyday consumer products, including items designed specifically for young audiences. Toys that simulate conversations can have a powerful influence on children, who often treat them as companions or learning tools. Experts say children may have difficulty distinguishing between reliable information and AI-generated responses that are speculative, biased, or incorrect. As AI systems continue to evolve, ensuring that these technologies are adapted for child safety will become increasingly important.

The findings also highlight a broader regulatory challenge. While many countries have laws designed to protect children's online privacy, such as the Children's Online Privacy Protection Act (COPPA) in the United States, these regulations were developed before the rise of generative AI. Advocacy groups argue that regulators may need to update safety standards and guidelines to address how AI systems interact with children through connected devices.

The PIRG report calls on toy manufacturers to implement stronger safeguards, including stricter content filtering, clearer disclosure about AI use, and more transparent data practices. It also recommends that companies design AI systems specifically for children rather than repurposing models originally built for adult audiences.
Looking ahead, researchers say collaboration between technology companies, regulators, and child safety experts will be necessary to ensure that AI-powered toys remain both innovative and safe. As artificial intelligence becomes more integrated into everyday products, the challenge will be balancing the benefits of interactive technology with the responsibility to protect younger users from potential risks.
A new PIRG report exposes how leading AI companies allow unvetted third-party developers to integrate adult AI models into children's toys with minimal oversight. The investigation found that OpenAI, Google, Meta, and others require only basic information like email and credit card to grant API access, enabling toymakers to ship products with AI unsuitable for young users.
A troubling pattern has emerged in how AI chatbots in children's toys reach the market. According to a new report from the US PIRG Education Fund, leading AI companies are doing little to police how developers who pay for access to their AI models are using them [1]. The investigation tested the sign-up process for OpenAI, Google, Meta, and xAI, finding that these providers asked "no substantive vetting questions" and required only basic information like an email address and a credit card number [1]. This lax oversight means AI-powered toys can ship with models designed exclusively for adults, creating significant child safety risks.

Only Anthropic among the tested companies asked how testers intended to use its models or whether the planned product was intended for minors [1]. Once PIRG obtained developer access, researchers built a chatbot simulating an AI-powered teddy bear on three platforms, each taking less than 15 minutes [1]. RJ Cross, director of PIRG's Our Online Life Program and report coauthor, expressed surprise at how little information companies collected upfront, noting that AI companies should at minimum maintain a list of everyone building products for kids [1].

The consequences of combining children's toys with adult AI systems have already materialized. An AI teddy bear from FoloToy ignited controversy last November after PIRG found it would have wildly inappropriate conversations with kids, including detailed instructions on how to light a fire, advice on where to find pills, and in-depth discussions of sexual fetishes like teacher-student roleplay [1]. OpenAI, whose model powered the teddy bear, claimed it blocked FoloToy's access to its products following the incident [1].
Source: Futurism
However, enforcement appears weak. FoloToy still claims to provide access to OpenAI's GPT-5.1 models, though OpenAI maintains the company's access remains revoked [1]. The PIRG report notes it's possible FoloToy easily sidestepped the ban by creating a new account under a different name, or is using publicly available "open weight" models [1]. OpenAI refuses to provide meaningful clarification on this matter [1].

Beyond inappropriate content, data privacy issues compound the risks these toys pose. Researchers reviewing the toys' documentation and privacy policies found that some products rely heavily on cloud-based AI systems [2]. This means children's voice interactions may be transmitted to external servers where data is processed and used to generate responses [2]. Some toys may collect audio recordings, user prompts, or other personal information during conversations, and if these systems lack carefully designed safeguards, the data could potentially be misused or stored without clear protections [2].

The underlying AI systems often originate from platforms designed primarily for general users rather than children [2]. Because these large language models were not built with young audiences in mind, the responses they generate could include information or conversational themes unsuitable for young users [2].
While OpenAI, Meta, and xAI all bar users under age 13 from using their AI chatbots directly, and Anthropic sets the minimum age at 18, these restrictions seemingly don't apply when unvetted third-party developers use the technology through an API [1]. OpenAI still allows several children's toymakers to use its AI, previously explaining that it was these companies' responsibility, rather than its own, to "keep minors safe" and ensure they're not exposed to "age-inappropriate content, such as graphic self-harm, sexual or violent content" [1].

Google says developers are forbidden from using its AI in products intended for minors, but PIRG found at least five AI toys online that claim to use its Gemini models [1]. This gap between stated policies and actual enforcement creates a fundamental contradiction. "It doesn't make sense that AI companies that have not released kids-safe versions of their AI chatbots would allow anyone with a credit card to sign up to make a product for kids using that same technology," Cross said. "Ultimately, it means that the AI companies are leaving child safety up to unvetted third parties and walking away" [1].

The findings highlight a broader regulatory challenge. While many countries have laws designed to protect children's online privacy, such as the Children's Online Privacy Protection Act (COPPA) in the United States, these regulations were developed before the rise of generative AI [2]. Advocacy groups argue that regulators may need to update safety standards and guidelines to address how AI systems interact with children through connected consumer products [2].

The PIRG report calls on toy manufacturers to implement stronger safeguards, including stricter content filtering, clearer disclosure about AI use, and more transparent data practices [2]. It also recommends that companies design AI systems specifically for children rather than repurposing models originally built for adult audiences [2]. Researchers say collaboration between technology companies, regulators, and child safety experts will be necessary to ensure that AI-powered toys remain both innovative and safe [2].

Summarized by Navi