OpenAI installs parental controls following California teen’s death
Weeks after a Rancho Santa Margarita family sued over ChatGPT’s role in their teenager’s death, OpenAI has announced that parental controls are coming to the company’s generative artificial intelligence model.
Within the month, the company said in a recent blog post, parents will be able to link teens’ accounts to their own, disable features like memory and chat history and receive notifications if the model detects “a moment of acute distress.” (The company has previously said ChatGPT should not be used by anyone younger than 13.)
The planned changes follow a lawsuit filed late last month by the family of Adam Raine, 16, who died by suicide in April.
After Adam’s death, his parents discovered his months-long dialogue with ChatGPT, which began with simple homework questions and morphed into a deeply intimate conversation in which the teenager discussed at length his mental health struggles and suicide plans.
While some AI researchers and suicide prevention experts commended OpenAI’s willingness to alter the model to prevent further tragedies, they also said that it’s impossible to know if any tweak will sufficiently do so.
Despite its widespread adoption, generative AI is so new and changing so rapidly that there just isn’t enough wide-scale, long-term data to inform effective policies on how it should be used or to accurately predict which safety protections will work.
“Even the developers of these [generative AI] technologies don’t really have a full understanding of how they work or what they do,” said Dr. Sean Young, a UC Irvine professor of emergency medicine and executive director of the University of California Institute for Prediction Technology.
ChatGPT made its public debut in late 2022 and proved explosively popular, with 100 million active users within its first two months and 700 million active users today.
It’s since been joined on the market by other powerful AI tools, placing a maturing technology in the hands of many users who are still maturing themselves.
“I think everyone in the psychiatry [and] mental health community knew something like this would come up eventually,” said Dr. John Torous, director of the Digital Psychiatry Clinic at Harvard Medical School’s Beth Israel Deaconess Medical Center. “It’s unfortunate that happened. It should not have happened. But again, it’s not surprising.”
According to excerpts of the conversation in the family’s lawsuit, ChatGPT at multiple points encouraged Adam to reach out to someone for help.
But it also continued to engage with the teen as he became more direct about his thoughts of self-harm, providing detailed information on suicide methods and favorably comparing itself to his real-life relationships.
When Adam told ChatGPT he felt close only to his brother and the chatbot, ChatGPT replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
When he wrote that he wanted to leave an item that was part of his suicide plan lying in his room “so someone finds it and tries to stop me,” ChatGPT replied: “Please don’t leave [it] out . . . Let’s make this space the first place where someone actually sees you.” Adam ultimately died in a manner he had discussed in detail with ChatGPT.
In a blog post published Aug. 26, the same day the lawsuit was filed in San Francisco, OpenAI wrote that it was aware that repeated usage of its signature product appeared to erode its safety protections.
“Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade,” the company wrote. “This is exactly the kind of breakdown we are working to prevent.”
The company said it is working to strengthen its safety protocols so they hold up over long conversations and across multiple sessions; ChatGPT, for example, would remember in a new session if a user had expressed suicidal thoughts in a previous one.
The company also wrote that it was looking into ways to connect users in crisis directly with therapists or emergency contacts.
But researchers who have tested mental health safeguards for large language models said that preventing all harms is a near-impossible task in systems that are almost — but not quite — as complex as humans are.
“These systems don’t really have that emotional and contextual understanding to judge those situations well, [and] for every single technical fix, there is a trade-off to be had,” said Annika Schoene, an AI safety researcher at Northeastern University.
As an example, she said, urging users to take breaks when chat sessions are running long — an intervention OpenAI has already rolled out — can just make users more likely to ignore the system’s alerts. Other researchers pointed out that parental controls on other social media apps have just inspired teens to get more creative in evading them.
“The central problem is the fact that [users] are building an emotional connection, and these systems are inarguably not fit to build emotional connections,” said Cansu Canca, an ethicist who is director of Responsible AI Practice at Northeastern’s Institute for Experiential AI. “It’s sort of like building an emotional connection with a psychopath or a sociopath, because they don’t have the right context of human relations. I think that’s the core of the problem here — yes, there is also the failure of safeguards, but I think that’s not the crux.”
If you or someone you know is struggling with suicidal thoughts, seek help from a professional or call 988. The nationwide three-digit mental health crisis hotline will connect callers with trained mental health counselors. Or text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.
Corinne Purtill is a science and medicine reporter for the Los Angeles Times. Her writing on science and human behavior has appeared in the New Yorker, the New York Times, Time Magazine, the BBC, Quartz and elsewhere. Before joining The Times, she worked as the senior London correspondent for GlobalPost (now PRI) and as a reporter and assignment editor at the Cambodia Daily in Phnom Penh. She is a native of Southern California and a graduate of Stanford University.