4 Sources
[1]
Web portal leaves kids' chats with AI toy open to anyone with Gmail account
Earlier this month, Joseph Thacker's neighbor mentioned to him that she'd preordered a couple of stuffed dinosaur toys for her children. She'd chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. But she knew Thacker, a security researcher, had done work on AI risks for kids, and she was curious about his thoughts. So Thacker looked into it. With just a few minutes of work, he and a web security researcher friend named Joel Margolis made a startling discovery: Bondu's web-based portal, intended to allow parents to check on their children's conversations and for Bondu's staff to monitor the products' use and performance, also let anyone with a Gmail account access transcripts of virtually every conversation Bondu's child users have ever had with the toy. Without carrying out any actual hacking, simply by logging in with an arbitrary Google account, the two researchers immediately found themselves looking at children's private conversations, the pet names kids had given their Bondu, the likes and dislikes of the toys' toddler owners, their favorite snacks and dance moves. In total, Margolis and Thacker discovered that the data Bondu left unprotected -- accessible to anyone who logged in to the company's public-facing web console with their Google username -- included children's names, birth dates, family member names, "objectives" for the child chosen by a parent, and most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation. Bondu confirmed in conversations with the researchers that more than 50,000 chat transcripts were accessible through the exposed web portal, essentially all conversations the toys had engaged in other than those that had been manually deleted by parents or staff. 
"It felt pretty intrusive and really weird to know these things," Thacker says of the children's private chats and documented preferences that he saw. "Being able to see all these conversations was a massive violation of children's privacy." When Thacker and Margolis alerted Bondu to its glaring data exposure, they say, the company acted to take down the console in a matter of minutes before relaunching the portal the next day with proper authentication measures. When WIRED reached out to the company, Bondu CEO Fateen Anam Rafid wrote in a statement that security fixes for the problem "were completed within hours, followed by a broader security review and the implementation of additional preventative measures for all users." He added that Bondu "found no evidence of access beyond the researchers involved." (The researchers note that they didn't download or keep any copies of the sensitive data they accessed via Bondu's console, other than a few screenshots and a screen-recording video shared with WIRED to confirm their findings.) "We take user privacy seriously and are committed to protecting user data," Anam Rafid added in his statement. "We have communicated with all active users about our security protocols and continue to strengthen our systems with new protections," as well as hiring a security firm to validate its investigation and monitor its systems in the future. While Bondu's near-total lack of security around the children's data that it stored may be fixed, the researchers argue that what they saw represents a larger warning about the dangers of AI-enabled chat toys for kids. Their glimpse of Bondu's backend showed how detailed the information is that it stored on children, keeping histories of every chat to better inform the toy's next conversation with its owner. (Bondu thankfully didn't store audio of those conversations, auto-deleting them after a short time and keeping only written transcripts.) 
Even now that the data is secured, Margolis and Thacker argue that it raises questions about how many people inside companies that make AI toys have access to the data they collect, how their access is monitored, and how well their credentials are protected. "There are cascading privacy implications from this," says Margolis. "All it takes is one employee to have a bad password, and then we're back to the same place we started, where it's all exposed to the public internet." Margolis adds that this sort of sensitive information about a child's thoughts and feelings could be used for horrific forms of child abuse or manipulation. "To be blunt, this is a kidnapper's dream," he says. "We're talking about information that lets someone lure a child into a really dangerous situation, and it was essentially accessible to anybody." Margolis and Thacker point out that, beyond its accidental data exposure, Bondu also -- based on what they saw inside its admin console -- appears to use Google's Gemini and OpenAI's GPT-5, and as a result may share information about kids' conversations with those companies. Bondu's Anam Rafid responded to that point in an email, stating that the company does use "third-party enterprise AI services to generate responses and run certain safety checks, which involves securely transmitting relevant conversation content for processing." But he adds that the company takes precautions to "minimize what's sent, use contractual and technical controls, and operate under enterprise configurations where providers state prompts/outputs aren't used to train their models." The two researchers also warn that part of the risk posed by AI toy companies may be that they're more likely to use AI in the coding of their products, tools, and web infrastructure. They say they suspect that the unsecured Bondu console they discovered was itself "vibe-coded" -- created with generative AI programming tools that often lead to security flaws. 
Bondu didn't respond to WIRED's question about whether the console was programmed with AI tools. Warnings about the risks of AI toys for kids have grown in recent months but have largely focused on the threat that a toy's conversations will raise inappropriate topics or even lead children to dangerous behavior or self-harm. NBC News, for instance, reported in December that AI toys its reporters chatted with offered detailed explanations of sexual terms, tips about how to sharpen knives, and even seemed to echo Chinese government propaganda, stating for example that Taiwan is a part of China. Bondu, by contrast, appears to have at least attempted to build safeguards into the AI chatbot it gives children access to. The company even offers a $500 bounty for reports of "an inappropriate response" from the toy. "We've had this program for over a year, and no one has been able to make it say anything inappropriate," a line on the company's website reads. Yet at the same time, Thacker and Margolis found that Bondu was leaving all of its users' sensitive data entirely exposed. "This is a perfect conflation of safety with security," says Thacker. "Does 'AI safety' even matter when all the data is exposed?" Thacker says that prior to looking into Bondu's security, he'd considered giving AI-enabled toys to his own kids, just as his neighbor had. Seeing Bondu's data exposure firsthand changed his mind. "Do I really want this in my house? No, I don't," he says. "It's kind of just a privacy nightmare." This story originally appeared on wired.com.
[2]
An AI Toy Exposed 50,000 Logs of Its Chats With Kids to Anyone With a Gmail Account
[3]
Security Flaw at AI Toy Company Exposed Over 50,000 Chat Logs of Kids
If you're thinking about gifting AI-enabled stuffed toys to kids, think again. According to a report by Wired, security researchers Joseph Thacker and Joel Margolis found that Bondu, a company that makes AI toys, had left over 50,000 chat logs of kids unprotected on its web portal. The flaw was discovered when the duo began assessing the toy's safety after Thacker's neighbor asked for an opinion. They didn't have to do anything special to access the data. Anyone with a Gmail account could log in to Bondu's web portal, which is meant for parents to check their kids' conversations and for company staff to monitor the product's performance. Once Thacker and Margolis logged in, they could see transcripts of nearly every conversation the children had with their Bondu toys. Details included kids' names, birth dates, family members' names, and other sensitive information. After the researchers alerted Bondu about the flaw, the company took down the unsecured portal within minutes and relaunched it the next day with improved authentication measures. The issue was fixed within hours, company CEO Fateen Anam Rafid tells Wired, adding that they "found no evidence of access beyond the researchers involved." Despite the fix, the researchers remain concerned about the safety risks posed by AI toys in general. These kinds of details are a kidnapper's dream, Margolis tells Wired. "We're talking about information that lets someone lure a child into a really dangerous situation, and it was essentially accessible to anybody," he adds. According to the report, Bondu only stored written transcripts of kids' chats on its portal and automatically deleted the audio clips at short intervals. The researchers, however, claim that the toys may be using Google's Gemini and OpenAI's GPT-5 models, and therefore may be sharing the data with those companies. Lawmakers have also raised concerns about kids' exposure to AI chatbots. 
Earlier this month, California Senator Steve Padilla introduced a bill to ban the sale of such interactive AI toys for four years after several families accused ChatGPT of encouraging suicide and harmful delusions among teens.
[4]
An AI stuffed animal just became every parent's privacy nightmare - Phandroid
Remember when kids played with wooden blocks and plastic dinosaurs that couldn't talk back? Those toys just sat there, which now feels like a feature rather than a limitation. We've come a long way since then, strapping AI into stuffed animals that chat with children about their deepest thoughts. Whether that's actually progress is suddenly a much harder question to answer. Security researchers Joseph Thacker and Joel Margolis just discovered that Bondu's AI toy security was basically nonexistent. The company makes AI-powered stuffed animals that hold conversations with kids using Google's Gemini and OpenAI's GPT models. Bondu left its entire web console completely unprotected, and the researchers accessed it with nothing more than a Gmail account. What they found inside was worse than anyone expected. Over 50,000 chat transcripts between children and their AI toys sat there for anyone to read. Kids' full names, birthdates, family details, favorite snacks, pet names, dance moves. Every intimate thought a child shared with what they believed was a trusted friend. Thacker told WIRED the experience felt "pretty intrusive and really weird" to know these things about strangers' children. Margolis was more blunt, calling it "a kidnapper's dream" because the exposed data included everything someone would need to manipulate or lure a child into a dangerous situation. Bondu CEO Fateen Anam Rafid says the company took the console offline within minutes of being alerted and fixed the flaw the next day. The company claims it found no evidence anyone accessed the data besides the researchers who reported it. They've since hired a security firm to monitor systems going forward. But the damage isn't just about this one breach. These AI toys build detailed psychological profiles of children by keeping complete chat histories. That's how they personalize future conversations. 
It also creates exactly the kind of data treasure trove that shouldn't exist in the first place, let alone sit behind security so weak that any Gmail user could access it. The researchers suspect Bondu's web console might have been built using AI coding tools, which often generate functional-looking code riddled with security holes. Bondu didn't respond to questions about whether AI built the infrastructure meant to protect children's data. Parents who thought parental controls and monitoring tools were enough to keep their kids safe online now have to worry about the toys themselves. Basic device security doesn't help when the toy company leaves the back door wide open.
Security researchers discovered that Bondu, an AI-powered stuffed toy company, left more than 50,000 children's chat transcripts completely unprotected on its web portal. Anyone with a Gmail account could access kids' names, birthdates, family details, and intimate conversations. The company fixed the issue within hours, but the incident raises serious questions about privacy risks with AI toys and the sensitive personal information they collect from children.
When security researcher Joseph Thacker's neighbor asked him about Bondu, an AI toy she'd pre-ordered for her children, he decided to investigate. What he and fellow researcher Joel Margolis discovered in just minutes was alarming: the company's web portal vulnerability left children's private conversations completely exposed to anyone with a Gmail account [1][2]. Without any hacking required, the researchers simply logged into Bondu's public-facing web console using an arbitrary Google account and immediately gained access to transcripts of virtually every conversation the AI-powered toys for children had ever conducted.
Source: Ars Technica
The data exposure was staggering in scope. Bondu confirmed to the researchers that more than 50,000 chat transcripts were accessible through the exposed web portal, representing essentially all conversations the toys had engaged in except those manually deleted by parents or staff [2][3]. The sensitive personal information left unprotected included children's names, birthdates, family member names, parental "objectives" for the child, and detailed summaries of every chat between each child and their Bondu. The researchers also saw pet names kids had given their toys, their likes and dislikes, favorite snacks, and dance moves, intimate details shared with what children believed was a trusted companion.
Source: PC Magazine
"It felt pretty intrusive and really weird to know these things," Thacker told WIRED. "Being able to see all these conversations was a massive violation of children's privacy" [1]. When the researchers alerted Bondu to the glaring security flaw, the company took down the console within minutes and relaunched it the next day with proper authentication measures. Bondu CEO Fateen Anam Rafid stated that security fixes "were completed within hours, followed by a broader security review and the implementation of additional preventative measures for all users" [2]. The company reported finding no evidence of access beyond the researchers involved and has since hired a security firm to validate its investigation and monitor systems going forward.
While the immediate vulnerability has been addressed, the incident highlights broader data security challenges facing the AI toy industry. Margolis characterized the exposed information as "a kidnapper's dream," explaining that "we're talking about information that lets someone lure a child into a really dangerous situation, and it was essentially accessible to anybody" [3][4]. The researchers' glimpse into Bondu's backend revealed how AI-powered toys maintain detailed histories of every chat to personalize future conversations, creating extensive data collection that poses ongoing risks even when properly secured. Margolis noted that "all it takes is one employee to have a bad password, and then we're back to the same place we started, where it's all exposed to the public internet" [1].
Source: Wired
According to what the researchers observed in the admin console, Bondu appears to use Google Gemini and OpenAI's GPT models to power its conversational capabilities [1][4]. This raises additional concerns about whether user data from children's conversations is being shared with these technology companies. The researchers suspect the web console might have been built using AI coding tools, which can generate functional-looking code riddled with security holes [4]. Bondu did not respond to questions about whether AI built the infrastructure meant to protect children's privacy. While the company stored only written chat logs and auto-deleted audio recordings after short intervals, the incident serves as a stark reminder that parental monitoring tools cannot protect against vulnerabilities in the toy companies' own systems. Parents evaluating AI toys should consider not just device security features, but the data handling practices and security posture of manufacturers themselves.
25 Dec 2025•Technology