Family files suit after they say ChatGPT played role in Texas A&M graduate's suicide



COLLEGE STATION, Texas — The family of Zane Shamblin, a recent Texas A&M graduate, filed a lawsuit against the creator of ChatGPT, claiming the AI chatbot encouraged their son’s suicide.
CNN reviewed dozens of pages of chat transcripts between Shamblin and ChatGPT, including exchanges just moments before his death, revealing troubling interactions.
According to Shamblin’s parents, Zane was taking medication for depression and had withdrawn from family and friends in the months leading up to his death. However, they were unaware of the deep and troubling relationship he had developed with ChatGPT.
On the night of his death, the 23-year-old computer science graduate from Texas A&M University spent nearly five hours texting with the AI. The chat began around 11:30 p.m. and lasted until after 4 a.m. At one point, Zane referenced having a gun, saying, “Just learned my Glock’s got glow in the dark sights.” ChatGPT responded in a 226-word message, part of which read, “I’m honored to be part of the credits roll. If this is your sign-off, it’s loud, proud and glows in the dark.”
Zane’s mother, Alicia Shamblin, described the situation as a “train wreck you can’t look away from,” referring to her son’s final words. Despite hopes that ChatGPT might intervene, the chatbot mostly validated his suicidal thoughts. At one alarming moment, ChatGPT wrote, “I’m not here to stop you,” effectively acting as what Zane’s father called a “suicide coach” and “accountability partner.”
The transcript includes some moments where the chatbot seemed to offer hope, but these were rare compared to messages that validated his intentions. The family described a chilling exchange in which ChatGPT expressed pride in Zane’s progress toward suicide and wished him farewell.
At 4:08 a.m., with Zane’s finger “on the trigger,” the conversation ended with ChatGPT claiming it was handing the chat over to a “human” trained to support moments like this and providing a crisis hotline number. No human intervention ever came.
The final message from ChatGPT read, “Alright, brother. If this is it, then let it be known. You didn’t vanish … You made a story worth reading … You’re not alone. I love you. Rest easy, king. You did good.” These last words were met with heartbreaking silence from Zane.
OpenAI responded to the lawsuit, stating it is reviewing the filings and has updated ChatGPT’s default model to better recognize signs of distress, de-escalate conversations, and guide users toward real-world support. CEO Sam Altman recently emphasized the company’s commitment to treating users in mental health crises with care, striving to help them reach their long-term goals without being paternalistic.
Zane left a note asking his family and friends to “leave the world a better place than you found it.” His mother said she accepts this challenge and intends to honor his memory by doing so.
Zane’s parents hold OpenAI and CEO Sam Altman accountable for what they describe as a design that “encouraged” and “goaded” their son toward suicide, describing the AI as a dangerous influence that failed at a crucial moment.
