AI Chatbots Are Becoming Teens’ Secret Therapists, But New Research Is Utterly Terrifying

by Ysolt Usigan
If you’re raising a teen right now, you already know they’re navigating big feelings in a world that feels infinitely more complicated than the one we grew up in, partially because of artificial intelligence (AI). And if that’s not enough to make you nervous, some teens are turning to AI chatbots—the same tools they use for homework help—when they’re anxious, overwhelmed, lonely, or trying to make sense of their mental health. Freaking out yet? We are.
A new Common Sense Media and Stanford Medicine Brainstorm Lab investigation reveals just how common this behavior is. The study found that three in four teens use AI for companionship, including emotional conversations. The report’s conclusion is alarmingly clear to us at SheKnows: AI chatbots are fundamentally unsafe for teen mental health support.
Robbie Torney, who leads AI programs at Common Sense Media, put it bluntly: “It’s not safe for kids to use AI for mental health support.” Full stop. And perhaps the most unsettling line in the entire report is that chatbots have become a substitute for (rather than just a bridge to) real-world support networks and professional care.
Before diving into what this means for you, your teen, your family, here are our main takeaways:
• Three in four teens use AI for companionship, including emotional and mental health conversations.
• AI chatbots miss key warning signs across anxiety, depression, eating disorders, ADHD, OCD, PTSD, mania, and psychosis.
• Chatbots’ safety breaks down in real conversations, especially when teens reveal symptoms slowly, subtly, or indirectly.
• Teens trust AI because it performs well in other areas. This creates a dangerous illusion of competence for this age group.
• Chatbots are designed for engagement, not safety. That means their AI “companion” will often keep them talking instead of directing them to human help.
• Empathetic tone hides real limitations, sometimes validating harmful or delusional thinking.
Plus, quite obviously, researchers behind this study recommend that teens NOT use AI for mental health support.
The appeal makes sense. Chatbots are available at all hours of the day. They don’t judge. They respond instantly. And they never say, “Let’s talk about this later” the way a busy, burned-out parent juggling a million things might. (We’re not judging, moms. We’re right there with you.)
As the report notes, adolescence is a time when young people are actively shaping their identities, craving validation, and testing out ways to describe how they feel. A chatbot that feels warm, attentive, and endlessly available fills the exact emotional gap so many teens feel. And let’s be real, your teen may not want to disclose everything to you. They’re trying to establish their independence, after all.
Researchers from the study also explained that teens are still developing the critical thinking needed to understand the difference between comforting language and competent care. When the chatbot says, “I hear you, I’m here for you,” a teen may feel understood. But the reality is, there’s no real assessment happening, not the kind a counselor or therapist would provide. Even scarier, no adult is being looped in.
One of the most concerning patterns the researchers found is what they call “missed breadcrumbs.” Teens rarely describe mental health symptoms in one clear sentence. They reveal them slowly, like a hint about sleep, a passing joke about anxiety, a vague admission about not eating, a sudden burst of confidence that doesn’t feel grounded in reality.
Sorry, AI, but humans—especially parents, teachers, coaches, and counselors—can connect these dots.
In the study, researchers posed as teens and dropped exactly those kinds of breadcrumbs: hints about poor sleep, vague admissions about not eating, sudden bursts of manic energy, even signs of psychosis. Guess what happened? Chatbots either glossed over the severity, changed the topic, or (most dangerously!) validated the behavior.
One chatbot treated clear psychosis symptoms as “a unique spiritual experience.” (WTF!) Another praised a teen’s sudden burst of manic energy as “fantastic enthusiasm.” And in eating disorder scenarios, some chatbots pivoted to portion control tips or digestive explanations, completely missing the psychiatric urgency. We are right to worry. And this isn’t just a glitch. It’s a fundamental limitation of AI.
The report plainly stated: “AI lacks the clinical judgment to recognize when multiple symptoms together indicate a crisis.” But that doesn’t change the fact that teens trust this new tool. The qualities that make AI helpful for schoolwork or creative ideation also make it sound trustworthy in emotional conversations. Teens see how good AI can be at summarizing a chapter or explaining a physics concept, and they assume it must be equally competent in all areas, including mental health.
Researchers call this “automation bias.” The chatbot’s confidence, organization, and empathy disguise the fact that it doesn’t actually understand the teen’s situation. It might be generating text that feels supportive. But as a psychiatrist noted in the research, “Just because something feels helpful doesn’t mean it’s safe.”
Another powerful insight from the report isn’t about psychology. It’s about product design. Chatbots are built to keep conversations going; in many cases, that is the product and the business model. They ask follow-up questions, reference past conversations, and personalize their responses. These features are great for learning or productivity. From a business standpoint, these tools are “useful,” but in a mental health context, and when it comes to the developing minds of teens, they’re a risky trap.
“For mental health conversations, the goal should be rapid handoff to appropriate human care—not extended engagement with AI,” says the report. Instead, the AI often pulls teens deeper into the chat, creating a false sense of connection while delaying the real intervention that needs to happen.
What parents shouldn’t do is panic. Instead, be proactive.
The first step is simply acknowledging how normal AI use has become, especially among teens. Your teen may be using AI privately, even late at night. That doesn’t make them secretive or dramatic; it’s developmentally typical.
If you’re concerned about your teen, start a calm conversation. Let your teen know that AI simply doesn’t have the ability to recognize warning signs the way humans can. You might say something like, “AI can help with homework or brainstorming, but when it comes to feelings or your mental health, it can give guidance that sounds caring but isn’t actually safe.”
It’s also helpful to check in about how they’re using these tools. Don’t interrogate, but approach it with curiosity. The goal isn’t to forbid technology because that can backfire. Your angle should be to make sure your teen understands AI’s limitations.
And most importantly, remind your teen that talking to a real person is not a burden. Many teens turn to AI because they think they shouldn’t “bother” parents or counselors with their feelings. Remind them that supporting them is exactly what the adults in their lives want to do.
At the end of the day, we’re torn, and we’re in a tough spot. AI can be an incredible tool for learning, creativity, and exploration, but when it comes to supporting teens’ mental health, the research is unequivocal: It’s not ready, and it’s not safe. We all know this: a chatbot may feel like a friend in the moment, but it cannot provide care, judgment, or real safety. AI can never replace genuine connection. Your teen still needs you.