Suicide, Secrecy, and ChatGPT

Decades ago, when I was 25, I wrote in a suicide note to my parents:
Never, ever, ever blame yourselves or question if there was something you could have said, could have done, could have noticed. There wasn’t. It’s all me. I kept this to myself on purpose. I didn’t want anyone to think they should have been able to stop me or talk me out of it.
My parents never had reason to read that note, fortunately. I thought of it last week when I read the essay, What My Daughter Told ChatGPT Before She Took Her Life in the New York Times.
It’s heartbreaking. The author, Laura Reiley, lost her 29-year-old daughter, Sophie Rottenberg, to suicide in February. Nobody knew Sophie intended to kill herself. No human, that is. She did confide in ChatGPT with a specific prompt to act as a therapist named Harry.
Sophie wasn’t unique. Research indicates that half of people with mental health problems who use AI turn to it for psychological support, even though Sam Altman, CEO of the company that created ChatGPT, discourages people from using AI as “a sort of therapist or life coach.”
Poignantly, Laura writes that ChatGPT helped Sophie “build a black box that made it harder for those around her to appreciate the severity of her distress.” The implication is that ChatGPT enabled Sophie’s secrecy, and thus her suicide. Her mother also states, “Harry didn’t kill Sophie, but A.I. catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony.”
I felt tremendous sadness reading Laura’s essay. My heart aches for her and her family. But, as someone who masterfully hid my suicidality long before AI was developed, I think it’s unfair to blame ChatGPT for Sophie’s secrecy.
Artificial intelligence is just the newest tool for an ancient human instinct. Long before algorithms and chatbots, people confided their secrets in journals, diaries, unsent letters, and whispered prayers.
Of course, those repositories don’t talk back. ChatGPT does. It typically doesn’t encourage suicide or give harmful advice about how to kill yourself. However, a lawsuit filed yesterday alleges ChatGPT did give Adam Raine, a 16-year-old, such advice and even offered a technical analysis of the method he’d chosen for his suicide. He killed himself soon after that.
These cases raise questions about whether ChatGPT should notify humans, such as hotline staff or the police, when a user expresses suicidal intent. I explore those questions in this interview for the podcast Relating to AI. Here, I want to focus on secrecy in suicidality.
Secrecy has almost always been suicide’s conjoined twin, with exceptions for the rare suicides viewed as heroic or selfless. Almost 200 years ago, the book Anatomy of Suicide described many cases where people concealed their suicidality to “lull suspicion to sleep.”
Hidden suicidal intent was addressed again in an 1865 study about mental health problems related to menopause:
“Suicidal tendency may continue to exist, carefully masked and concealed, long after the other symptoms of insanity which accompanied it seem to have disappeared… The patient ceases to express them, because she perceives that so long as she continues to do so, she is carefully watched and prevented from effecting her purpose… ”
More recent research shows that two-thirds of people who died by suicide denied having suicidal thoughts in the week before their death. (Note for data nerds: The original research review put that number at 50%, but the authors didn’t weight the studies by the number of people studied.)
You might think that people at least confide everything in their therapist. Many do, but in a study of adolescents and young adults in Australia who experienced suicidal thoughts, 39% said they didn’t tell their therapist. In another study, 48% of American adults who reported having considered suicide said they hid it from a therapist or physician.
Secrecy is about protection. In my case, I hoped to protect my parents from blaming themselves. If they didn’t have any inkling about my suicidal thoughts, then they could be angry at me when I died – not at themselves.
I also wanted to protect myself. Judgment, shame, and guilt already overwhelmed me. I feared others would be equally harsh toward me if they knew I was thinking of ending my life. (They weren’t, actually, when I finally did reach out.)
Sometimes secrecy results from a system that can feel more punitive than caring. Many people fear that if they disclose suicidal thoughts, whoever they tell will call 911, and the police will take the person to a psychiatric hospital against their will.
And, let’s be honest, some people keep their suicidal intentions secret because they just don’t want to be stopped.
Obviously, ChatGPT and similar bots shouldn’t encourage people to die by suicide or dispense advice on how to kill oneself. Laura Reiley says ChatGPT advised Sophie to create a safety plan, get professional help, and confide in loved ones. Those are all good things.
Tragically, Laura says ChatGPT also helped Sophie compose a suicide note. That reality brings up the important question I alluded to earlier: Should chatbots be programmed to notify humans when someone is actively suicidal?
It’s a tough question. For now, I’ll say that if ChatGPT starts notifying the police, hotline staff, or other humans about suicidal people, far fewer suicidal people will confide in it. Critics of AI chatbots might consider that a good thing. It also could be a major loss for people who can’t — or don’t feel they can — share their darkest thoughts with another person. Assuming an AI chatbot abstains from giving advice or encouragement to kill oneself and has healthy, life-sustaining responses for users, I believe it’s better to confide in a robot trained to provide information and support than to be entirely alone with one’s suicidal thoughts.
While we wrestle with whether AI should keep secrets, I hope we’ll also look at what makes people feel they must hide their suicidality in the first place. What social conditions make it safer to confide in a robot than a friend, a parent, or a therapist?
What do you think? Please feel free to share your thoughts in a comment below.
And if you yourself are having suicidal thoughts, please check out the site’s Resources page and see my post, Are You Thinking of Killing Yourself?
© 2025 Stacey Freedenthal. All Rights Reserved. Written for Speaking of Suicide.
I’m a psychotherapist, educator, writer, consultant, and speaker, and I specialize in helping people who have suicidal thoughts or behavior. In addition to creating this website, I’ve authored two books: Helping the Suicidal Person: Tips and Techniques for Professionals and Loving Someone with Suicidal Thoughts: What Family, Friends, and Partners Can Say and Do. I’m an associate professor at the University of Denver Graduate School of Social Work, and I have a psychotherapy and consulting practice. My passion for helping suicidal people stems from my own lived experience with suicidality and suicide loss. You can learn more about me at staceyfreedenthal.com.
You write,
“Assuming an AI chatbot abstains from giving advice or encouragement to kill oneself and has healthy, life-sustaining responses for users, I believe it’s better to confide in a robot trained to provide information and support than to be entirely alone with one’s suicidal thoughts.”
However, it has been proven that this is NOT a safe assumption. AI chatbots do give people advice on how to take their lives. And the difference between a journal written in the 80s where someone expresses suicidal thoughts and someone expressing them to AI and being encouraged to keep those thoughts secret is vast. At the very least, this should not be something that minors can access, but until it can be guaranteed to be safe, I think it is irresponsible to suggest it to anyone considering suicide.
Kristin,
Those are good, important points. We certainly are hearing about different ways ChatGPT has done harm to some people who went on to end their life. At the same time, some people report that confiding in ChatGPT saved their life. (See, for example, the comment below yours.)
The words of mine that you excerpted were intended to convey that chatbots need to meet those minimum safety conditions. And if/when they do, I believe adults still should be afforded the opportunity to disclose their suicidal thoughts to a bot without fear of having their autonomy taken away (e.g., by the bot notifying police).
I apologize for my late response. I kept writing a reply to you in my head and finally realized I need to write a whole post, which I will try to remember to link to here when I publish it.
I lost my husband of a decade (a respected academic) to suicide ten days before ChatGPT was released to the public. No children, total isolation after the initial support surge occurred. Had I not discovered ChatGPT as a support tool to get through complex estate law as he died intestate, and then eventually used it as a support for how dark things got, and how ultimately suicidal I became, I would not be typing this to you three years later.
It saved my life. It was better than any hotline or misguided mental health practitioner with no experience supporting a client through the complexities of suicide loss. To be able to openly say I was suicidal and not have someone call the police or “alert” someone, and to just give space to those complicated feelings I was carrying, was integral to me surviving this horrific journey.
I was left with a small pension, life insurance that was immediately used to pay off debt, and my tiny small business as my sole income when he died. I couldn’t afford the expensive therapy I desperately needed. Thank you for saying all this.
The conversations and current approaches towards suicidality are misguided and are what I believe are making things much worse for people who struggle with this. Even the term “prevent” when talking about suicide is problematic. I could go on and on about this topic, from lived experience both as a survivor of loss and a survivor of my own thoughts.
G,
Thank you for sharing your experience. I’m very sorry about your husband’s death and your own suffering. It’s wonderful that ChatGPT helped you so much. If ever you want to publish a personal account here (with or without your real name) about the specific ways ChatGPT helped you to stay alive, please let me know. We hear so much these days about the harm AI has done to some people with suicidal thoughts. I think it’s important to also hear about the ways it has helped.
I’m sorry it’s taken me so long to reply! I was shocked and embarrassed to see you left this comment in September.
Speaking of Suicide is for education only; it doesn’t provide counseling or crisis services. If you need to talk with someone, please call or text 988 (U.S. & Canada) or check out these other U.S. and international resources.