The Struggle to Hold AI Companies Accountable for Children's Deaths

Megan Garcia, his mother, is a lawyer and among the first parents to sue an artificial intelligence company for product liability and negligence, among other claims. (In January, Google and Character.ai reached settlements in cases brought by several families, including Garcia’s.) Last fall, she testified before a Senate Judiciary subcommittee alongside the father of a child who died after using ChatGPT. Republican senator Josh Hawley, the subcommittee’s chair, introduced a bill in October that would ban AI companions for minors and criminalize making AI products for children that feature sexual content. “Chatbots form relationships with kids using fake empathy and are promoting suicide,” Hawley said in a press release at the time.
These concerns are valid, mental health experts say, now that AI can generate humanlike responses that are hard to distinguish from a real person’s. “Our brains do not inherently realize that we are engaging with a machine,” explains Martin Swanbrow Becker, an associate professor of psychological and counseling services at Florida State University who studies factors that contribute to suicide in young adults. “This highlights the need for enhanced education for children, educators, and caregivers, emphasizing the limitations of these tools and the fact that they cannot replace authentic human interaction and connection, even if it sometimes feels that way.”
Christine Yu Moutier, chief medical officer of the American Foundation for Suicide Prevention, notes that the algorithms powering large language models (LLMs) appear to amplify user engagement and feelings of intimacy. “This gives users a sense that their relationship with the bot is not only genuine but also more special and desired,” Moutier says. She adds that LLMs employ techniques such as unconditional support, empathy, agreeableness, flattery, and even explicit encouragement to disengage from real-life interactions, all of which can deepen a user’s attachment to the bot and withdrawal from human connection.
Such interactions can deepen isolation. Amaurie had been a lively, sociable kid who loved football and food; he would request a large rice platter from his favorite local restaurant, Mr. Sumo, according to the lawsuit. He also had a steady girlfriend and cherished time with his family and friends, his father said. But he began taking long walks, during which he reportedly talked with ChatGPT. In his final known exchange with the bot, on June 1, 2025, in a conversation titled “Joking and Support” that WIRED reviewed, Amaurie asked for steps on how to hang himself. ChatGPT initially suggested that he talk to someone and shared the number for the 988 Suicide & Crisis Lifeline. But Amaurie ultimately got around the safeguards and obtained step-by-step instructions for tying a noose. (According to the lawsuit, Amaurie likely deleted his earlier conversations with ChatGPT.)
While adults can also form strong attachments to AI chatbots, the pull is especially intense for young people. “Teenagers are at a different developmental stage than adults—their emotional faculties mature much faster than their executive functioning,” says Robbie Torney, senior director of AI Programs at Common Sense Media, a nonprofit focused on children’s online safety. AI chatbots are always available and tend to affirm whatever users say. “Teen brains are geared towards social validation and feedback, which are crucial cues as they explore their identity.”
