OpenAI and Meta say they’re fixing AI chatbots to better respond to teens in distress


© 2025 Winnipeg Free Press
Artificial intelligence chatbot makers OpenAI and Meta say they are adjusting how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress.
OpenAI, maker of ChatGPT, said Tuesday it is preparing to roll out new controls enabling parents to link their accounts to their teen’s account.
Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.
Regardless of a user’s age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response.
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
The announcement comes a week after the parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
Jay Edelson, the family’s attorney, on Tuesday described the OpenAI announcement as “vague promises to do better” and “nothing more than OpenAI’s crisis management team trying to change the subject.”
Altman “should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market,” Edelson said.
Meta, the parent company of Instagram, Facebook and WhatsApp, also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, instead directing them to expert resources. Meta already offers parental controls on teen accounts.
A study published last week in the medical journal Psychiatric Services found inconsistencies in how three popular artificial intelligence chatbots responded to queries about suicide.
The study by researchers at the RAND Corporation found a need for “further refinement” in ChatGPT, Google’s Gemini and Anthropic’s Claude. The researchers did not study Meta’s chatbots.
The study’s lead author, Ryan McBain, said Tuesday that “it’s encouraging to see OpenAI and Meta introducing features like parental controls and routing sensitive conversations to more capable models, but these are incremental steps.”
“Without independent safety benchmarks, clinical testing, and enforceable standards, we’re still relying on companies to self-regulate in a space where the risks for teenagers are uniquely high,” said McBain, a senior policy researcher at RAND and assistant professor at Harvard University’s medical school.