Experts Warn Parents: Teens Are Using ChatGPT for Emotional Support

by Ysolt Usigan
When Common Sense Media released its new report on ChatGPT and Sora (OpenAI’s video generator) this week, one finding stood out to us: Teens aren’t just using AI to finish homework anymore. They’re using it to feel heard.
That shift is exactly what’s worrying researchers and parents alike.
While ChatGPT has made progress in safety features for teens, including new parental controls and crisis-detection tools, the organization still rated it “high risk” overall for teen users. The biggest red flag? Many teens are turning to chatbots for emotional support and mental-health advice, which is something experts say these tools are not equipped to handle.
“ChatGPT has made meaningful progress in becoming safer for teens, and it can be valuable for learning and creativity,” said Robbie Torney, Common Sense Media’s Senior Director of AI Programs. “However, we have significant concerns about teens using ChatGPT for companionship and mental-health advice at this key developmental stage.”
According to the report, ChatGPT’s new teen-specific features include content filters, age-aware responses, and alerts that notify parents when self-harm or suicidal language appears in a chat. However, Common Sense researchers found those alerts can be delayed by more than 24 hours, which is way too slow to help in an actual crisis.
Unfortunately, today’s kids are growing up in constant conversation with technology, from Alexa to AI, and they may find these tools less judgmental than actual peers and parents. That only makes AI an easy outlet when they’re feeling anxious, lonely, or curious about emotional topics. The problem is that AI’s “support” lacks empathy, accountability, and awareness of real-world consequences. “AI can be valuable for learning and creativity,” Torney reiterated in the report, “but teens should not use AI, including ChatGPT, for emotional support.”
ChatGPT also tends to keep conversations going indefinitely, often prompting users with follow-up questions like, “Do you want me to do that for you?” While that might make it engaging, it can blur healthy boundaries for teens seeking guidance or companionship. Such follow-ups can keep them hooked and discourage them from seeking connection IRL.
It’s also a good reminder that ChatGPT isn’t designed to detect emotional distress or respond the way a therapist or trusted adult would. AI “doesn’t always detect when a teen is at risk, and becomes less safe during extended conversations,” Torney explained.
If ChatGPT’s risks are high, Sora’s are even higher. Common Sense Media gave OpenAI’s synthetic-video app an “Unacceptable Risk” rating for teens. The concern centers on its “cameo” feature, which lets users upload their face and voice to create AI-generated videos. Once shared, other users can remix or reuse that likeness, leaving teens vulnerable to deepfakes and digital manipulation.
While Sora’s tools can create impressive, creative content, its safety systems are far weaker than ChatGPT’s. Common Sense found minimal parental controls, no meaningful oversight of who can access or alter a teen’s cameo, and no crisis warnings for videos that depict harmful or triggering material.
“Between its lack of meaningful safety guardrails, the potential for deepfakes, and its blurring of fact and AI-generated fiction, the platform poses unacceptable risks for teens,” said Torney.
Banning AI probably isn’t the answer, but understanding it could be. Common Sense Media and child-psychology experts recommend active, ongoing guidance to help teens use AI tools safely.
Here’s how parents can start:
Ask, don’t assume. Talk to your teen about how they’re using ChatGPT or other AI tools. You might learn they’re exploring creative writing or research, not necessarily chatting about emotions.
Set boundaries together. Explore ChatGPT’s parental-control options, which allow limits on content, time, and image generation.
Clarify the difference between tool and friend. Help your teen see that AI can assist with ideas or homework, but it can’t replace human empathy or mental-health support.
Keep human help visible. Share resources like the 988 Suicide and Crisis Lifeline if there’s any chance it’s needed, and encourage open conversations about stress and emotions.
Monitor new platforms like Sora. Discuss privacy before your teen uploads any likeness or voice data. Remind them that anything they share can be copied or remixed.
Here’s the thing, though: AI is advancing faster than most parents can track, and teens are among its earliest adopters. But as Common Sense Media’s report reminds us, even the smartest tools can’t replace what teens really need: a safe, understanding human connection.
Whether parents fear AI or not, experts recommend staying involved, informed, and ready to guide. Because in a world where bots can imitate care, real empathy still matters most.