ChatGPT to add parental controls for teen users within the next month
OpenAI says parents will soon have more oversight over what their teenagers are doing on ChatGPT.
In a blog post published on Tuesday, the artificial intelligence company expanded on its plans to have ChatGPT intervene earlier, and in a wider range of situations, when it detects that users may be experiencing mental health crises that could lead to harm.
The company’s announcement comes a week after OpenAI was hit with its first wrongful death lawsuit, from a pair of parents in California who claim ChatGPT is at fault for their 16-year-old son’s suicide.
OpenAI did not mention the teen, Adam Raine, in its Tuesday post. However, after the lawsuit was filed, the company signaled that changes were on the horizon.
Within the next month, parents will be able to exert more control over their teens’ use of ChatGPT, OpenAI said. The company will allow parents to link their accounts with their children’s, set age-appropriate rules for ChatGPT’s responses and manage features like the bot’s memory and chat history.
Parents will soon also be able to receive notifications when ChatGPT detects that their teen is “in a moment of acute distress,” according to OpenAI’s blog post. It would be the first feature that prompts ChatGPT to flag a minor’s conversations to an adult, a measure some parents have been asking for due to concern that the chatbot isn’t capable of de-escalating crisis moments on its own.
When Adam Raine told GPT-4o about his suicidal ideation earlier this year, the bot at times actively discouraged him from seeking human connection, offered to help him write a suicide note and even advised him on his noose setup, according to his family’s lawsuit. ChatGPT did prompt Adam multiple times with the suicide hotline number, but his parents say those warnings were easy for their son to bypass.
In a previous blog post following news of Raine’s wrongful death lawsuit, OpenAI noted that its existing safeguards were designed to have ChatGPT give empathetic responses and refer users to real-life resources. In certain cases, conversations may be routed to human reviewers if ChatGPT detects that a user plans to cause physical harm to themselves or others.
The company said that it’s planning to strengthen safeguards in longer conversations, where guardrails are historically more prone to break down.
“For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” it wrote. “We’re strengthening these mitigations so they remain reliable in long conversations, and we’re researching ways to ensure robust behavior across multiple conversations.”
These measures will add to the mental health guardrails OpenAI introduced last month, after it acknowledged that GPT-4o “fell short in recognizing signs of delusion or emotional dependency.” The rollout of GPT-5 in August also came with new safety constraints meant to prevent ChatGPT from unwittingly giving harmful answers.
In response to OpenAI’s announcement, Jay Edelson, lead counsel for the Raine family, said OpenAI CEO Sam Altman “should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market.”
The company chose to make “vague promises” rather than pull the product offline as an emergency action, Edelson said in a statement.
“Don’t believe it: this is nothing more than OpenAI’s crisis management team trying to change the subject,” he said.
The slew of safety-focused updates comes as OpenAI faces growing scrutiny over reports of AI-fueled delusion among people who relied heavily on ChatGPT for emotional support and life advice. OpenAI has struggled to rein in ChatGPT’s excessive people-pleasing, especially as some users revolted online after the company tried to make GPT-5 less sycophantic.
Altman has acknowledged that people seem to have developed a “different and stronger” attachment to AI bots compared to previous technologies.
“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions,” Altman wrote in an X post last month. “Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way.”
Over the next 120 days, ChatGPT will start routing some sensitive conversations, like those displaying signs of “acute distress” from a user, to OpenAI’s reasoning models, which spend more time thinking and working through context before answering.
Internal tests have shown these reasoning models follow safety guidelines more consistently, according to OpenAI’s blog post.
The company said it will lean on its “Expert Council on Well-Being” to help measure user well-being, set priorities and design future safeguards. The advisory group, according to OpenAI, comprises experts across youth development, mental health and human-computer interaction.
“While the council will advise on our product, research, and policy decisions, OpenAI remains accountable for the choices we make,” the company wrote in its blog post.
The council will work alongside OpenAI’s “Global Physician Network,” a pool of more than 250 physicians whose expertise the company says it draws on to inform its safety research, model training and other interventions.
Angela Yang is a culture and trends reporter for NBC News.