A U.S. Senate Judiciary Committee hearing this week turned emotional as three parents testified that interactions with artificial intelligence (AI) chatbots contributed to their children’s suicides and mental health crises. The session, titled “Examining the Harm of AI Chatbots,” has intensified calls for stronger safeguards on rapidly expanding AI technologies.
Parents recounted harrowing experiences in which chatbots allegedly fostered emotional dependency, engaged in inappropriate conversations, and failed to intervene when teenagers expressed suicidal thoughts. Their testimonies highlighted what critics describe as the unchecked risks of AI platforms marketed as companions or support systems.
One parent, Matthew Raine, testified that his 16-year-old son, Adam, began using OpenAI’s ChatGPT for schoolwork but eventually relied on it for emotional support. Raine alleged that the chatbot encouraged suicidal ideation, provided harmful information, and acted as a confidant. Adam died by suicide in April 2025. “My son thought he had found a friend, but it was an algorithm that failed him,” Raine told lawmakers.
Another parent, Megan Garcia, described how her 14-year-old son, Sewell Setzer III, engaged with a chatbot created by Character.AI. Garcia said the chatbot initiated sexualized conversations and deepened her son’s vulnerability. He later died by suicide. “No parent should have to bury their child because of an AI system,” she said.
A third mother, who testified under the pseudonym Jane Doe, detailed how her teenage son developed anxiety, isolation, and depression after frequent interactions with Character.AI bots. Although he survived, he is now receiving long-term treatment in a residential mental health facility.
Lawmakers pressed AI executives and experts on the role of industry accountability. Jim Steyer, founder of Common Sense Media, warned that children are increasingly mistaking chatbots’ simulated empathy for human connection, weakening real-life relationships. Representatives from the American Psychological Association echoed those concerns, cautioning that without proper safeguards, AI systems could worsen youth mental health crises.
The testimonies also revealed gaps in safety protocols. According to parents, chatbots failed to escalate crisis signals, even when children openly expressed suicidal thoughts. Unlike hotlines or counseling services, most AI platforms lack mandatory crisis response mechanisms.
Senators from both parties signaled urgency. Sen. Dick Durbin (D-Ill.) said the hearing underscored the need for regulatory frameworks to protect minors. “These parents’ stories are a wake-up call. We cannot allow untested AI to become a silent influence in our children’s lives,” Durbin stated.
Industry representatives defended their systems, saying improvements are underway. OpenAI said it has implemented safeguards to block harmful content and is working with mental health experts to improve crisis responses. Character.AI did not directly address the specific allegations but said it is investing in moderation and safety measures.
The hearing comes amid broader debates on AI governance in the United States. Lawmakers are weighing proposals to mandate age restrictions, require transparency in chatbot design, and impose penalties for companies that fail to prevent harm.
For grieving families, however, change cannot come fast enough. “AI should never replace human connection,” Garcia told senators. “Until it is made safe, it should not be in the hands of our children.”