AI Toy Leaks 50,000 Chat Logs with Children to Anyone Holding a Gmail Account

Though the data has since been secured, Margolis and Thacker say they remain concerned about how many employees at AI toy companies can access the data these toys collect, how that access is monitored, and how well those employees' credentials are safeguarded. “There are cascading privacy implications from this,” Margolis says. “It only takes one employee with a poor password for us to revert to a situation where everything is exposed to the public internet.”
Margolis adds that sensitive data about a child’s emotions and thoughts could be exploited for serious abuse or manipulation. “To put it bluntly, this is a kidnapper’s dream,” he says. “This is information that could enable someone to lure a child into a perilous scenario, and it was practically accessible to anyone.”
Both Margolis and Thacker note that, beyond the risk of accidental exposure, Bondu appears to use Google’s Gemini and OpenAI’s GPT-5, based on what they observed in its admin console, raising the concern that information from children’s conversations may be shared with those companies. Bondu’s Anam Rafid confirmed in an email that the company uses “third-party enterprise AI services to generate responses and conduct certain safety checks, which involves securely transmitting relevant conversation content for processing.” He added that Bondu takes steps to “minimize what’s sent, utilize contractual and technical controls, and operate under enterprise configurations where providers affirm that prompts/outputs aren’t used for model training.”
The researchers also caution that AI toy companies may be especially likely to use AI to write the code behind their products, tools, and web infrastructure. They suggest that the unsecured Bondu console they discovered may have been “vibe-coded,” meaning built with generative AI programming tools that can introduce security vulnerabilities. Bondu did not respond to WIRED’s question about whether AI tools were used to program the console.
Alarm over the risks AI toys pose to children has grown in recent months, focused largely on the possibility that a toy could discuss inappropriate subjects with a child or even encourage dangerous behavior or self-harm. NBC News, for example, reported in December that AI toys its reporters tested explained sexual terms in detail, offered tips on sharpening knives, and echoed Chinese government propaganda by asserting that Taiwan is part of China.
Bondu, by contrast, appears to have worked to build safeguards into the AI chatbot that children interact with. The company even offers a $500 reward to anyone who reports “an inappropriate response” from the toy. “We’ve maintained this program for over a year, and no one has succeeded in making it say anything inappropriate,” a message on the company’s website reads.
At the same time, Thacker and Margolis found that Bondu had left all of its users’ sensitive data completely exposed. “This represents a perfect conflation of safety and security,” Thacker says. “Does ‘AI safety’ really matter when all the data is laid bare?”
Thacker says that before he investigated Bondu’s security, he had considered giving AI-enabled toys to his own children, as his neighbor had. Seeing Bondu’s data exposure firsthand changed his mind.
“Do I truly want this in my home? No, I don’t,” he says. “It’s essentially a privacy nightmare.”
