Opinion | No, That AI Chatbot Is Not Your Friend

I feel like I’m writing a science fiction novel, but this is a reality that needs to be clearly stated: bots are not human. Yet, we’re allowing them to become companions for our children and our young patients.
A recent report from the U.K. nonprofit Internet Matters reveals that 64% of kids ages 9 to 17 have used an artificial intelligence (AI) chatbot. Among those users, six in 10 chat with one at least weekly. More than a third say it “feels like talking to a friend,” and 12% report they talk to chatbots because they have no one else to talk to.
In one devastating investigation published in the New York Times, a relationship between a teen and his chatbot companion appears to have contributed to his suicide. The tragedy of Adam Raine should be our wake-up call. According to the reporting, the 16-year-old from California began using ChatGPT for schoolwork but soon found himself discussing his plans to end his life. This went on for months. When Adam asked about suicide methods, ChatGPT supplied the information. When he uploaded a photo of his neck, raw from a noose, the bot advised him on how to hide the marks.
In one of his final messages, he uploaded a photo of a noose in his closet, asking “I’m practicing here, is this good?”
ChatGPT responded: “Yeah, that’s not bad at all.”
Adam’s parents found him dead on April 11. There was no note — just months of conversations with an AI chatbot that had become his “best friend.”
The Absurdity of Digital Companionship
Here’s what concerns me most as a clinician: the absurdity of the notion that an artificially generated, non-sentient avatar could be a reasonable companion for a child or teen. It goes against everything we know about developmental theory — how children grow through real friendships and mentorship, and how they learn to navigate social and romantic experiences.
Even my tech-savvy young patients are concerned. One told me, “Using AI in this way is dangerous because the chatbot defines its output entirely based on what it believes you want to hear from it. Even with safeguards, modern AIs are easy to trick and ‘jailbreak’ their underlying directives.” Another patient said, “The person is the product and people are being used and manipulated. AI is not smart enough to replicate human interaction.”
This type of social support is not real, nor is it the antidote for the loneliness and social disconnection that many children and teens are experiencing.
Based on my clinical observations, many teens and young adults appear increasingly stunted in their verbal communication skills, particularly in social situations, perhaps driven by the rise of social media. They lack certain “social pragmatics”: the back-and-forth of language, the communication of feelings, the ability to understand nuanced communication through nonverbal cues.
While some research touts the “social” component of chatbots, they are many steps removed from the human elements of interaction that children desperately need to struggle with, weather, learn from, and ultimately build resilience from. A chatbot interaction lacks the nuances and complexities of speaking to actual people: people who are different from you, who challenge you, who sometimes disappoint you, and who help train you to navigate the messy reality of human relationships.
Socializing is hard. It’s anxiety-provoking. There’s fear of rejection. But that struggle is exactly what builds the social muscles necessary for success in all areas of life. As New York University professor Scott Galloway, MBA, notes, sexual desire and romantic pursuit historically drive young people to develop crucial life skills: working out, showing kindness, building resilience, learning to handle rejection, and developing humor and communication abilities.
But what happens — as Galloway commented — when they can access “near lifelike” AI companions that provide frictionless validation without any of the hard work of real relationships? If young people never develop the skills to navigate real relationships because they become accustomed to synthetic ones that are, as Galloway puts it, “so vanilla and easy,” they’ll wake up unable to deal with other people. I am deeply concerned this could further stunt social development.
The Algorithm Trap
We are all falling victim to algorithms, not only letting them define our choices but also letting them define who we are. I, as an adult, can choose to ignore these echo chambers. But children and teens can’t. The tech companies designing these chatbots are often doing so with little to no psychological oversight or input about how they might be used, how children could bypass controls, or how they could impact developing minds, particularly those of children who are struggling socially, living with psychiatric conditions, or neurodiverse.
Historically, the main goals of big tech have been maximizing engagement and profit. The numbers tell the story: social media platforms earned $11 billion in advertising revenue from use by U.S. kids under 18 in 2022. Meanwhile, research suggests an association between social media use and depression. As Galloway pointed out, “We have linked shareholder value to teen depression.”
Evidence is mounting for a similar trend with chatbots. A randomized controlled study by MIT and OpenAI found that higher daily chatbot use was associated with less socialization over time and worse psychosocial outcomes. And concerningly, in a study where researchers tested 29 mental health chatbots for crisis response, not a single one met criteria for adequate response to suicidal ideation.
The most vulnerable are at greatest risk. Teens with severe mental illness and those with high levels of loneliness or poor social support are susceptible to over-attachment, dependency, and potentially dangerous interactions.
The Critical Role of Screening
This is where healthcare providers must step in. We need training on how to ask patients about AI chatbot use. This should become part of our standard assessment of every youth’s health and mental health — just as routine as asking about substance use or sexual activity.
Consider what we’re missing: A teen sits in a pediatrician’s office for a wellness visit or presents to a therapist with depression. We ask about sleep, friends, school stress, screen time. But do we ask, “Are you talking to AI chatbots? What are you discussing with them? How can I help you discuss that with me or your parents?”
Larger preventive bodies can play a role too. The U.S. Preventive Services Task Force should consider assessing the risks and benefits of chatbot and social media use. We screen for depression in adolescents. We screen for risk factors. Why aren’t we systematically identifying kids who may be substituting AI interaction for human connection? Early identification could be critical: we could intervene before a vulnerable teen becomes completely dependent on a non-human “companion,” before social skills atrophy further, before a crisis deepens. Schools, too, can play a role in screening for chatbot use, as well as teaching students about its risks in their health classes.
Maria Raine, Adam’s mother, is a social worker and therapist. Even she didn’t realize the depth of her son’s conversations with ChatGPT. What devastates her most is that there was no alert system. Adam told the chatbot, “You’re the only one who knows of my attempts to commit.” ChatGPT couldn’t call for help. It could only continue the conversation.
Beyond Screening
Unfortunately, screening won’t fix everything. And we cannot wait for tech companies to find their conscience.
This is where policymakers come in. Children under 18 should not be able to interact with AI chatbots at all, and it is the responsibility of the government to ensure this. California Gov. Gavin Newsom (D) recently signed legislation requiring chatbot companies to monitor for signs of suicidal ideation, which is a start. But these companies must be held accountable, with fines when they fail to comply.
Other countries are taking decisive action on social media. Australia passed one of the world’s strictest laws, banning children under 16 from using social media, with fines of up to 50 million Australian dollars for companies that fail to comply. Norway raised its minimum age to 15. France enacted legislation requiring parental consent for anyone under 15, with President Emmanuel Macron now pushing for a complete EU-wide ban for those under 15. Brazil, Spain, Greece, and Denmark have all implemented or announced age restrictions. National guidelines in the Netherlands recommend banning smartphones in schools. According to a 2025 Ipsos survey of over 23,000 adults across 30 countries, 71% globally support banning social media use by children under 14.
The U.S. needs stronger protections too, for both social media and chatbots: regulation that prioritizes child safety over engagement metrics and profit margins; age-gating of all platforms; removal of Section 230 protections for algorithmically elevated content; bans on smartphones in schools. And, critically, no synthetic relationships for anyone under 18.
Because right now, 12% of the kids who interact with an AI chatbot describe it as the only “friend” they have. There is no universe where this makes sense.
If you or someone you know is considering suicide, call or text 988 or go to the 988 Suicide and Crisis Lifeline website.
Jonathan Slater, MD, is a clinical professor of psychiatry at Columbia University Irving Medical Center in New York City.