OpenAI updates ChatGPT as it faces a lawsuit over a teen's suicide – Quartz
OpenAI says it's updating ChatGPT to better recognize when a user is in serious emotional or mental distress, just as it faces a lawsuit alleging the chatbot aided in a teen’s suicide.
The company made the announcement Tuesday after it said these “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us.”
On the same day OpenAI said it's updating its chatbot, a lawsuit was filed against the company and its CEO Sam Altman after a teenager died by suicide with the alleged help of ChatGPT. The lawsuit claims that the chatbot gave 16-year-old Adam Raine advice on which methods of suicide to use. ChatGPT offered to draft the teen’s suicide note, Raine’s parents said in the lawsuit, according to a Reuters report.
This latest lawsuit isn’t the only reported case in which ChatGPT allegedly worsened a user’s mental health crisis.
In another case, a man on the autism spectrum was hospitalized twice for manic episodes after ChatGPT assured him he was fine despite his showing signs of distress. When the man’s mother later questioned the chatbot, it admitted that it had “failed to interrupt what could resemble” a manic or dissociative episode, the Wall Street Journal reported in July.
These cases, among others, have led some to use the terms “AI psychosis” — or “ChatGPT psychosis” — to describe situations in which AI chatbots amplify a user’s mental health distress.
A study out of Stanford University in July found that AI therapy chatbots make “dangerous” and “inappropriate” statements to users, and express stigma against people experiencing delusions and suicidal thoughts, among other conditions. The researchers concluded that chatbots are not ready to replace human therapists.
Even Altman has acknowledged the problems with using ChatGPT as a therapist. In a July interview, he said that a user’s private conversations with the chatbot won’t necessarily remain private because they lack legal protections.
“I think we should have the same concept of privacy for your conversations with AI that we do with a therapist,” he said.
OpenAI said it already has “built a stack of layered safeguards into ChatGPT,” but plans to implement more.
Moving forward, the AI company plans to update GPT-5 so the chatbot can “expand interventions to more people in crisis.” ChatGPT’s “initial mitigations” focused on self-harm, OpenAI said; the new update will also intervene in mental health episodes by “grounding the person in reality.”
OpenAI said it has started localizing mental health resources in the U.S. and Europe and plans to expand to other markets around the world.
The company said it will add “one-click” access to emergency resources and is exploring an update through which ChatGPT would let users message or call their saved emergency contacts with a click. Those contacts would then get suggested language from ChatGPT “to make starting the conversation less daunting,” it said. OpenAI is also considering an opt-in feature that, in “severe cases,” would let the chatbot itself reach out to a user’s designated contacts.
The AI company is also considering building out a network of licensed mental health professionals that users could reach directly through ChatGPT. However, it said “this will take time and careful work to get right.”
OpenAI said its new update will include parental controls, and it's exploring an option that would let teens add a designated contact.
Safety for minors using AI chatbots has been a growing concern. Dozens of attorneys general from across the country sent a letter on Monday to 13 artificial intelligence companies — including OpenAI — telling them they will use “every facet of our authority to protect children from exploitation by predatory artificial intelligence products.”
Some states, like Illinois, Nevada, and Utah, have already put restrictions in place around AI therapy chatbots, especially for kids. California, New Jersey, and Pennsylvania could be next.
In addition to these new features, OpenAI plans to address weaknesses in its existing safeguards.
The company said its safeguards aren’t as reliable in longer conversations with its chatbot as they are in shorter ones.
“As the back-and-forth grows, parts of the model’s safety training may degrade,” it said, adding that it plans to strengthen these safeguards. OpenAI also said it plans to “refine” how it blocks content.