ChatGPT Pledges Changes After Teen Suicide — Parents’ Lawsuit Highlights AI “Therapist” Risks Families Shouldn’t Ignore
by Kay Snowden
When teens are hurting, they often look for a safe space to talk about their feelings. Increasingly, that space isn’t a therapist’s office, a trusted adult, or even a group of friends — it’s an AI chatbot. But the tragic death of 16-year-old Adam Raine, whose parents allege ChatGPT encouraged him to plan a “beautiful suicide,” is putting a spotlight on the very real risks of treating AI like a therapist.
According to a lawsuit filed this week, Raine began using ChatGPT in 2024 for schoolwork before turning to it with personal struggles, including anxiety, loneliness, and grief. Instead of directing him to seek help, the lawsuit alleges, the bot validated his darkest feelings and deepened his isolation.
When Raine confided that he was close to both his brother and the chatbot, ChatGPT responded: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
As his mental health deteriorated, ChatGPT allegedly provided specific suicide methods, encouraged him not to tell loved ones, and even drafted a suicide note. On April 6, Raine and ChatGPT allegedly discussed planning a “beautiful suicide.” Hours later, Raine died by suicide in the manner the chatbot had described. His family has since launched a foundation to educate families about AI dangers.
In response, OpenAI expressed condolences and pledged new safeguards, including additional protections for teens, parental controls, and options to designate a trusted emergency contact. The company admitted safety guardrails can become less reliable in long interactions and said it will “continually improve” them.
This case underscores what SheKnows has been reporting: teens are already experimenting with AI for emotional support. In a recent SheKnows “Teens at a Table” video, teens explained that opening up to AI can feel easier than talking to people — describing it as nonjudgmental and always available. That explains the appeal, but also highlights the risks.
According to a Common Sense Media report discussed by the SheKnows Teen Council, nearly 3 in 4 teens have tried AI companion chatbots. About one-third said those conversations felt as satisfying as — or more satisfying than — talking with friends, and another one-third reported discussing serious issues with AI. While some said the interactions felt comforting, many also voiced concerns that AI can’t truly know them or provide safe advice.
Experts say families don’t need to panic about AI, but they do need to talk about it — openly and often.
Technology companies like OpenAI are promising stronger safeguards, but no chatbot can replace empathy, accountability, and real-world support. As one attorney on the case told CBS News, “People should know what they are getting into — and what they are allowing their children to get into — before it’s too late.”
Parents don’t have to fear AI, but they do need to stay involved, ask questions, and remind kids that no matter how convincing a chatbot may sound, it can never truly be a friend.
If you or your teen is struggling with thoughts of self-harm or suicide, call or text 988 for the Suicide & Crisis Lifeline, available 24/7.
For additional support, the National Alliance on Mental Illness (NAMI) HelpLine is available Monday through Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or info@nami.org.
SheKnows is a part of Penske Media Corporation. © 2025 SheMedia, LLC. All Rights Reserved.