OpenAI Promises Parental Controls For ChatGPT Following Teen Suicide
09/03/2025
Following a wrongful death lawsuit filed against the company, OpenAI says it will roll out parental controls for its popular chatbot ChatGPT, allowing parents to link their personal accounts with their teens' accounts, among other features.
With input from an in-house council of experts on "well-being and AI," as well as a network of over 250 physicians across 60 countries, OpenAI says it has spent the past year studying how its AI systems can support health, including the mental health of teens.
Moving forward, OpenAI says that parents will be able to link to their teen’s account via an email invitation, with options to control how ChatGPT responds to their teen, disable memory and chat history features, and receive notifications when the system detects that their teen is “in a moment of acute distress.”
Furthermore, to provide more helpful responses to parents and teens, the company says ChatGPT will soon begin routing “sensitive conversations” through an OpenAI reasoning model, such as “GPT-5-thinking.”
“Trained with a method we call deliberative alignment, our testing shows that reasoning models more consistently follow and apply safety guidelines and are more resistant to adversarial prompts,” OpenAI states, adding that these steps are only the beginning. The company plans to use its expert council to strengthen its approach to future teen safety measures.
These announcements come a week after OpenAI was named in the first known wrongful death lawsuit against an AI company. The suit accuses ChatGPT of helping a teen plan his own suicide.
The victim's parents claim their son spent months chatting with ChatGPT about ending his life, and say they discovered a thread titled "Hanging Safety Concerns" on his profile. While the chatbot at times instructed the teen to seek professional help, it also allegedly "prioritized engagement over safety," giving him tips on hiding neck injuries from failed suicide attempts.