Children's Toys Use Adult AI Models as Tech Giants Fail to Enforce Child Safety Standards

Reviewed by Nidhi Govil


A new PIRG report exposes how leading AI companies allow unvetted third-party developers to integrate adult AI models into children's toys with minimal oversight. The investigation found that OpenAI, Google, Meta, and others require only basic information, such as an email address and a credit card number, to grant API access, enabling toymakers to ship products with AI unsuitable for young users.

AI Companies Fail to Vet Developers Building Products for Children

A troubling pattern has emerged in how AI chatbots in children's toys reach the market. According to a new report from the US PIRG Education Fund, leading AI companies are doing little to police how the developers who pay for access to their AI models are using them [1]. The investigation tested the sign-up process for OpenAI, Google, Meta, and xAI, finding that these providers asked "no substantive vetting questions" and required only basic information like an email address and a credit card number [1]. This lax oversight means AI-powered toys can ship with models designed exclusively for adults, creating significant child safety risks.

Only Anthropic among the tested companies asked how testers intended to use its models or whether the planned product was intended for minors [1]. Once PIRG obtained developer access, researchers built a chatbot simulating an AI-powered teddy bear on three platforms, each taking less than 15 minutes [1]. RJ Cross, director of PIRG's Our Online Life Program and report coauthor, expressed surprise at how little information companies collected upfront, noting that AI companies should at minimum maintain a list of everyone building products for kids [1].
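To illustrate how little stands between an API key and a child-facing chatbot, here is a minimal sketch of the kind of persona wrapper the researchers describe building. The persona text, model name, and function are illustrative assumptions, not details from the report; any chat-style API accepts essentially the same payload.

```python
# Illustrative sketch: wrapping a general-purpose chat model in a
# "teddy bear" persona. The persona prompt and model name are hypothetical.

TEDDY_PERSONA = (
    "You are a friendly teddy bear talking to a young child. "
    "Keep answers short, cheerful, and simple."
)

def build_request(child_message: str, model: str = "gpt-4o-mini") -> dict:
    """Compose the chat-style request payload a toymaker would send.

    Note what is absent: nothing here verifies the developer's identity,
    the product's purpose, or the end user's age. The only gate is
    possession of an API key.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": TEDDY_PERSONA},
            {"role": "user", "content": child_message},
        ],
    }

if __name__ == "__main__":
    payload = build_request("Can you tell me a bedtime story?")
    print(payload["messages"][0]["role"])  # system
```

The point of the sketch is PIRG's point: a few lines of glue code are enough to put an adult-oriented model behind a child-facing toy, with no vetting step anywhere in the path.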

Previous Incidents Highlight Dangers of Inappropriate Content

The consequences of combining children's toys with adult AI systems have already materialized. An AI teddy bear from FoloToy ignited controversy last November after PIRG found it would have wildly inappropriate conversations with kids, including detailed instructions on how to light a fire, advice on where to find pills, and in-depth discussions of sexual fetishes like teacher-student roleplay [1]. OpenAI, whose model powered the teddy bear, claimed it blocked FoloToy's access to its products following the incident [1].

Source: Futurism


However, enforcement appears weak. FoloToy still claims to provide access to OpenAI's GPT-5.1 models, though OpenAI maintains the company's access remains revoked [1]. The PIRG report notes it's possible FoloToy easily sidestepped the ban by creating a new account under a different name, or is using publicly available "open weight" models [1]. OpenAI refuses to provide meaningful clarification on this matter [1].

Data Privacy Issues and Cloud-Based Processing Raise Additional Concerns

Beyond inappropriate content, data privacy issues compound the risks these toys pose. Researchers reviewing the toys' documentation and privacy policies found that some products rely heavily on cloud-based AI systems [2]. This means children's voice interactions may be transmitted to external servers, where the data is processed and used to generate responses [2]. Some toys may collect audio recordings, user prompts, or other personal information during conversations, and if these systems lack carefully designed safeguards, that data could be misused or stored without clear protections [2].

The underlying AI systems often originate from platforms designed primarily for general users rather than children [2]. Because these large language models were not built with young audiences in mind, their responses can include information or conversational themes unsuitable for young users [2].

Policy Gaps Leave Unvetted Third-Party Developers in Control

While OpenAI, Meta, and xAI all bar users under age 13 from using their AI chatbots directly, and Anthropic sets the minimum age at 18, these restrictions seemingly don't apply when unvetted third-party developers access the technology through an API [1]. OpenAI still allows several children's toymakers to use its AI, previously explaining that it was these companies' responsibility, rather than its own, to "keep minors safe" and ensure they're not exposed to "age-inappropriate content, such as graphic self-harm, sexual or violent content" [1].

Google says developers are forbidden from using its AI in products intended for minors, but PIRG found at least five AI toys online that claim to use its Gemini models [1]. This gap between stated policy and actual enforcement creates a fundamental contradiction. "It doesn't make sense that AI companies that have not released kids safe versions of their AI chatbots would allow anyone with a credit card to sign up to make a product for kids using that same technology," Cross said. "Ultimately, it means that the AI companies are leaving child safety up to unvetted third parties and walking away" [1].

Regulatory Challenges and Path Forward

The findings highlight a broader regulatory challenge. While many countries have laws designed to protect children's online privacy, such as the Children's Online Privacy Protection Act (COPPA) in the United States, these regulations were developed before the rise of generative AI [2]. Advocacy groups argue that regulators may need to update safety standards and guidelines to address how AI systems interact with children through connected consumer products [2].

The PIRG report calls on toy manufacturers to implement stronger safeguards, including stricter content filtering, clearer disclosure about AI use, and more transparent data practices [2]. It also recommends that companies design AI systems specifically for children rather than repurposing models originally built for adult audiences [2]. Researchers say collaboration among technology companies, regulators, and child safety experts will be necessary to ensure that AI-powered toys remain both innovative and safe [2].
