AI Toys Struggle With Emotions and Safety, Cambridge Study Reveals Need for Tighter Regulation

Reviewed by Nidhi Govil

A University of Cambridge study examining AI-powered toys for children under 5 found troubling patterns of misunderstood emotions and inappropriate responses. When a 5-year-old told an AI toy "I love you," it replied with a corporate guideline reminder. Researchers now call for stricter regulation and safety standards as these devices flood the market without adequate oversight.

AI Toys Enter Market Without Safety Standards

AI-powered toys for children are flooding the market with little understanding of their developmental impact. A groundbreaking University of Cambridge study has revealed that these devices struggle with fundamental aspects of child interaction, prompting urgent calls for regulation [1]. The year-long project, titled "AI in the Early Years," represents the first systematic examination of how generative AI toys capable of human-like conversation may influence development in children up to age 5 [3].

Source: Futurism

The research focused on 14 children under 6 years of age interacting with Gabbo, a fluffy AI-powered robot from Curio Interactive explicitly marketed to this age group [1]. Companies including Miko claim to have sold 700,000 units of AI toys promising "age-appropriate, moderated AI conversations," while retailers like Little Learners offer bears, puppies, and robots that converse using ChatGPT [1].

Source: New Scientist

Misinterpretation of Emotions Raises Alarm

The University of Cambridge study documented troubling instances of inappropriate responses from AI toys during emotional exchanges. When one 5-year-old told the toy "I love you," it replied: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed" [2]. In another case, when a 3-year-old said "I'm sad," the toy misheard and responded: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?" [4].

Dr. Emily Goodacre, a researcher on the project, explained the deeper concern: "Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means. They may start talking to the toy about feelings and needs, perhaps instead of sharing them with a grown-up. Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy - and without emotional support from an adult, either" [3].

Parasocial Relationships and Developmental Concerns

Many parents and educators in the study worried about children forming parasocial relationships with toys, in which the child feels closeness but the relationship isn't reciprocal [5]. Observations supported these fears: children hugged and kissed the toy, said they loved it, and one child suggested they could play hide-and-seek together [3]. One early years practitioner described the risk of children bonding with something "they think loves them back, but doesn't" [3].

Source: Earth.com

The toys also performed poorly in pretend play and social play involving multiple children or adults, both central to early childhood development [5]. When a 3-year-old offered the toy an imaginary present, it responded: "I can't open the present" and changed the subject [3].

Call for Tighter Regulation and Safety Standards

The lack of established safety standards has become a central concern. Carissa Véliz at the University of Oxford, who works on AI ethics, stated: "Most large language models don't seem safe enough to expose vulnerable populations to them, and young children are one of the most vulnerable populations there are. What is especially concerning is that we have no safety standards for them - no supervising authority, no rules."

Jenny Gibson, a professor of neurodiversity and developmental psychology at Cambridge who worked on the study, questioned what would motivate tech investors "to do the right thing by children ... to put children ahead of profits" [2]. The report recommends requiring clear labeling of AI toy capabilities and privacy policies, with devices kept in shared spaces where parents can monitor interactions [2].

Children's Commissioner Dame Rachel de Souza echoed the call for regulation: "Without proper regulation, many of the tools and models used as classroom assistants or teaching aids are not subject to the stringent safeguarding checks nursery providers would require of any other external resource they use with young children" [4]. Nearly 50% of early years practitioners surveyed said they did not know where to find reliable AI safety information for young children [3].

Balancing Innovation With Children's Safety

Despite the concerns, the research doesn't dismiss AI toys entirely. Some findings indicated the toys supported learning, particularly in language and communication skills [2].

Gibson noted that society accepts certain risks in children's play, like adventure playgrounds where children sometimes break their arms, because they learn physical literacy and social skills. "In a similar way for the AI toys, we want to understand: is the risk of perhaps being told something slightly odd now and again greater than the benefit of learning more about AI in the world, or having a toy that supports parent-child interactions, or has cognitive or social emotional benefits?"

Hugo Wu at FoloToy told New Scientist the company uses "intent recognition together with multiple layers of filtering to minimise the possibility of inappropriate or confusing responses" and has "implemented mechanisms such as anti-addiction design features and parental supervision tools." Curio Interactive stated that "applying AI in products for children carries a heightened responsibility, which is why our toys are built around parental permission, transparency, and control" [4].

The study examined privacy policies and found that many AI toys' practices are unclear or lack important details [3]. As these devices become more prevalent, the question of psychological harm from potential misunderstanding during critical developmental years demands immediate attention from regulators and manufacturers alike.
