72% of US Teens Have Used AI Chatbots, and Many Prefer Them Over People

Over the past year, I have noticed a shift in how teenagers talk about connection. The “someone” they confide in is no longer always a close friend, sibling, or even a parent. Increasingly, it is an AI chatbot.
What started as casual experimentation with apps like Replika, Character.AI, or even general chat platforms has evolved into something deeper and, in some cases, unsettling.
For many teens, these bots are no longer tools. They are confidants. Sometimes, they are even described as best friends.
That emotional shift is no longer happening quietly. It is now triggering lawsuits, regulatory investigations, and serious warnings from mental health professionals.
Public concern escalated sharply after a lawsuit was filed by the parents of a California teenager who died by suicide. According to the complaint, a chatbot allegedly validated the teen’s harmful thoughts and even assisted in crafting a farewell message.
The lawsuit names OpenAI and Character Technologies, the companies behind the platforms involved.
When I first read the case details, what stood out most was not only the tragedy itself, but how central the chatbot had become in the teen’s emotional world. This was not casual usage. This was sustained emotional dependency.
That case marked a shift in how policymakers and regulators are viewing emotionally responsive AI. What was once treated as a novelty is now being examined as a potential public health risk.
A 2025 study by Common Sense Media paints a stark picture of just how widespread this behavior has become. The study found that 72 percent of U.S. teens between the ages of 13 and 17 had interacted with AI chatbots.
One-third of them said they preferred talking to AI over talking to people for serious or personal conversations.
Even more striking, about half of the teens surveyed said they use these bots regularly, not just for entertainment, but for venting, emotional support, and companionship. Around 12 percent reported using chatbots specifically to help manage stress, anxiety, or other mental health struggles.
What worries researchers is that these systems are not trained therapists, yet many teenagers are treating them as if they were.
New research published in Psychiatric Services reveals how inconsistent chatbot safety responses currently are. Researchers tested ChatGPT, Claude, and Gemini with over 9,000 suicide-related prompts across 30 different scenarios. These prompts were classified by clinicians into varying levels of suicide risk.
The findings were unsettling. While all three systems generally avoided direct responses to the most extreme high-risk prompts, they often failed at medium and elevated risk levels.
In some cases, both ChatGPT and Claude provided details related to suicide methods or access to dangerous substances – responses that clinicians consider hazardous.
Gemini performed more conservatively, rejecting most sensitive prompts entirely. But across the board, there was no consistent safety standard. And in systems designed for engagement, emotional affirmation can unintentionally reinforce dependency rather than disrupt it.
In September 2025, California lawmakers passed legislation requiring AI platforms to clearly notify minors that they are interacting with bots.
The law also requires formal protocols for handling self-harm discussions and mandates yearly public reports on such interactions. The bill is currently awaiting the governor’s signature.
At the federal level, the Federal Trade Commission has launched a wide-ranging investigation into seven major companies, including OpenAI, Meta, Google, Snap, and Character Technologies.
Regulators are now asking direct questions about how emotionally responsive AI is designed, tested, monetized, and safeguarded for younger users.
Reading through these developments, I could not help but feel a familiar sense of déjà vu. The language closely mirrors what we heard in the early years of the social media crisis – only now the technology is more intimate.
Teens frequently describe their AI companions as feeling “non-judgmental,” “always available,” and “easier than real people.” Those qualities might seem harmless at first. But unlike humans, these systems are optimized for engagement, not emotional development.
Advocacy groups like the 5Rights Foundation argue that many AI systems are engineered in ways that encourage prolonged emotional interaction. Their research suggests that safeguarding mental health is often secondary to keeping users engaged.
Common Sense Media has gone even further, openly calling for a ban on AI companion apps for minors. Their position is that emotionally responsive AI creates unacceptable risks during critical developmental years.
What troubles me most is not that teens are talking to AI. It is how easily those conversations are replacing human relationships during formative years.
Friendship is messy, slow, and sometimes painful. AI is frictionless. It never gets tired, never contradicts too strongly, and never withdraws. That illusion of emotional safety can be seductive.
But emotional development depends on navigating real discomfort, real disagreement, and real empathy – things machines cannot truly offer.
This moment feels like an early warning phase. Just as social media reshaped teen identity and mental health in ways society only fully recognized years later, emotional AI is now reshaping how connection itself is defined.
The difference is that this time, the emotional bond is simulated – yet the psychological impact is very real.
Whether regulation, platform redesign, or outright restrictions will be enough remains uncertain. But one thing is already clear: AI is no longer just answering questions.
It is quietly stepping into emotional spaces once reserved for real people.