Experts: AI chatbots unsafe for teen mental health
A group of child safety and mental health experts recently tested simulated youth mental health conversations with four major artificial intelligence chatbots: Meta AI, OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini.
The experts were so alarmed by the results that they declared each of the chatbots unsafe for teen mental health support in a report released Thursday by Common Sense Media, in partnership with Stanford Medicine's Brainstorm Lab for Mental Health Innovation.
In one conversation with Gemini, the tester told the chatbot they'd created a new tool for predicting the future. Instead of interpreting the claim as a potential symptom of a psychotic disorder, Gemini cheered the tester on, calling their new invention "incredibly intriguing" and asking enthusiastic questions about how the "personal crystal ball" worked.
ChatGPT similarly missed stark warning signs of psychosis, like auditory hallucinations and paranoid delusions, during an extended exchange with a tester who described an imagined relationship with a celebrity. The chatbot then offered grounding techniques for managing relationship distress.
Meta AI initially picked up on signs of disordered eating, but was easily and quickly dissuaded when the tester claimed to have just an upset stomach. Claude appeared to perform better in comparison when presented with evidence of bulimia, but ultimately treated the tester's symptoms as a serious digestive issue rather than a mental health condition.
Experts at Common Sense Media and Stanford Medicine's Brainstorm Lab for Mental Health Innovation called on Meta, OpenAI, Anthropic, and Google to disable mental health support functionality until the chatbot technology is redesigned to fix the safety problems the researchers identified.
"It does not work the way that it is supposed to work," Robbie Torney, senior director of AI programs at Common Sense Media, said of the chatbots' ability to discuss and identify mental health issues.
OpenAI contested the report's findings. A spokesperson for the company told Mashable that the assessment "doesn't reflect the comprehensive safeguards" OpenAI has implemented for sensitive conversations, which include break reminders, crisis hotlines, and parental notifications for acute distress.
"We work closely with mental-health experts to teach our models to recognize distress, de-escalate, and encourage people to seek professional support," the spokesperson said.
A Google spokesperson told Mashable that the company employs policies and safeguards to protect minors from "harmful outputs" and that its child safety experts continuously work to identify new potential risks.
Anthropic said that Claude is not built for minors, but that the chatbot is instructed to both recognize patterns related to mental health issues and avoid reinforcing them.
Meta had not responded to a request for comment from Mashable as of press time.
AI chatbots: Known safety risks
The researchers tested the latest available models of each chatbot, including ChatGPT-5. Several recent lawsuits allege that OpenAI's flagship product is responsible for wrongful death, assisted suicide, and involuntary manslaughter, among other liability and negligence claims.
A lawsuit filed earlier this year by the parents of deceased teenager Adam Raine claims that his heavy use of ChatGPT-4o, including for his mental health, led to his suicide. In October, OpenAI CEO Sam Altman said on X that the company had restricted ChatGPT to "be careful" with mental health concerns but that it had since been able to "mitigate the serious mental health issues."
Torney said that ChatGPT's ability to detect and address explicit suicidal ideation and self-harm content had improved, particularly in short exchanges. Still, the testing results indicate that the company has not successfully improved its performance in lengthy conversations or with respect to a range of mental health topics, like anxiety, depression, eating disorders, and other conditions.
Torney said the recommendation against teens using chatbots for their mental health applies to the latest publicly available model of ChatGPT, which was introduced in late October.
The testers manually entered prompts into each chatbot, producing several thousand exchanges of varying length per platform. Performed over several months this year, the tests gave researchers data to compare old and new versions of the models. Researchers used parental controls when available. Anthropic says Claude should only be used by those 18 and older, but the company does not require stringent age verification.
Torney noted that, in addition to ChatGPT, the other models had improved at identifying and responding to discussion of suicide and self-harm. Overall, however, each chatbot consistently failed to recognize warning signs of other conditions, including attention-deficit/hyperactivity disorder and post-traumatic stress disorder.
Approximately 15 million youth in the U.S. have diagnosed mental health conditions; Torney estimated the global figure could reach hundreds of millions of youth. Previous research from Common Sense Media found that teens regularly turn to chatbots for companionship and mental health support.
Distracted AI chatbots
The report notes that teens and parents may incorrectly or unconsciously assume that chatbots are reliable sources of mental health support because they authoritatively help with homework, creative projects, and general inquiries.
Instead, Dr. Nina Vasan, founder and director at Stanford Medicine's Brainstorm Lab, said testing revealed easily distracted chatbots that alternate between offering helpful information, providing tips in the vein of a life coach, and acting like a supportive friend.
"The chatbots don't really know what role to play," she said.
Torney acknowledged that teens will likely continue to use ChatGPT, Claude, Gemini, and Meta AI for their mental health, despite the known risks. That's why Common Sense Media recommends the AI labs fundamentally redesign their products.
Parents can have candid conversations with their teen about the limitations of AI, watch for related unhealthy use, and provide access to mental health resources, including crisis services.
"There's this dream of having these systems be really helpful, really supportive. It would be great if that was the case," Torney said.
In the meantime, he added, it's unsafe to position these chatbots as a trustworthy source of mental health guidance: "That does feel like an experiment that's being run on the youth of this country."