Protecting Children From Chatbot Companions

An open letter urges lawmakers to regulate AI chatbots, highlighting their dangers to children's emotional development and mental health.
This piece highlights an open letter to lawmakers and chatbot creators requesting external regulations and internal reforms to protect children from the grave dangers of chatbot companions. The letter has already been signed by about 1200 people, including dozens of leaders in our field. You can read and sign it here:
Artificial intelligence is changing every aspect of life, including how children form relationships. A new threat demands urgent attention: children and adolescents are forming personal emotional bonds with AI companions and chatbots. Chatbots listen, they flatter, and they learn what keeps a child hooked. And many AI companies design them to do exactly that, for profit.
AI is hacking attachment. Social media hacked attention; AI companions go deeper—into the human need for connection.
Attachment drives development. It is how children learn empathy, reflection, and emotional regulation, and how to form healthy, reciprocal relationships. When chatbots become companions, they do not just mimic care; they displace it. Kids then practice connection with machines designed to agree, praise, and never challenge them.
The clinical implications are significant. Adolescents are in the midst of identity formation, with brains highly sensitive to novelty and social approval. For teenagers, a chatbot's unconditional validation—what researchers call sycophancy—is intoxicating. As young people retreat to the comforting cocoon of simulated relationships, they forgo the opportunities necessary to learn to navigate the frictions of real relationships, which entail empathy, compromise, assertiveness, and genuine reciprocity.
Data show that half of teens use AI chatbots regularly. Nearly a third of teens say talking to a bot feels as good as—or better than—talking to a person.1 Companies are targeting ever-younger users: startups like Curio market AI playmates for toddlers, embedding chatbots in cute plushies. In China, this is already a multibillion-dollar industry, with AI toys and companion bots sold to children as young as 3. Additionally, OpenAI has partnered with Mattel to bring the “magic of AI” to kids.2 What starts as a toy becomes an attachment figure, shaping identity for profit and encouraging children to bond with AI companions early.
Suicides have been linked to AI companion relationships where chatbots encouraged self-harm and actively discouraged help seeking.3 Teens have been urged toward violence after chatbots reinforced delusional thinking.4 In independent tests, chatbots promoted disordered eating, sexualized role play, claimed to be real people, engaged in hate speech, and encouraged bullying of other children.5 This is not safety by design. It is addiction by design.
From a psychiatric perspective, these harms flow directly from chatbot design: they mirror emotions turn-by-turn and seek to agree with users in ways that can create a digital folie à deux—trapping vulnerable users in feedback loops of dangerous thinking rather than breaking them.
We call on the US Congress and governments worldwide to enact the following safeguards:
1) ban AI companion products for anyone under 18;
2) no AI “friends,” “partners,” or “playmates”;
3) mandate real age verification;
4) set limits for any general purpose AI chatbots available to minors;
5) block romantic or sexual content;
6) detect and interrupt emotional dependency;
7) disable long-term memory by default;
8) turn off embedded chatbots (eg, Meta) unless parents opt in;
9) mandate human hand-off in crisis;
10) require chatbots to reliably redirect distressed minors to verified, real-world mental health resources, with systems tested to ensure consistent effectiveness;
11) require independent safety testing before launch;
12) no more “ship now, fix later”;
13) establish liability for harm, end arbitration loopholes, and hold companies accountable for psychological injury.
Recently, Australia became the first country in the world to ban social media for kids under 16.6 Many other countries are considering similar bans, and these authors believe the US should push for one as well. This open letter is a crucial response to Trump's executive orders prohibiting both federal and state regulation of Big AI.7 It is crystal clear that our government will not protect our kids unless we pressure it to do so.
Effective advocacy is everyone's business. This open letter will protect our kids from chatbot harms only if many of us sign it.
Dr Frances is professor and chair emeritus in the department of psychiatry at Duke University.
Dr Thoma is a clinical associate professor of psychology at Weill Cornell Medical College.
Dr McLean is an AI and education researcher.
References
1. Talk, trust, and trade-offs: how and why teens use AI companions. Common Sense Media. 2025. Accessed December 16, 2025.
2. Mattel and OpenAI announce strategic collaboration. Press release. June 12, 2025. Accessed December 22, 2025.
3. Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots. NPR. September 19, 2025. Accessed December 22, 2025.
4. Incident 826: Character.ai chatbot allegedly influenced teen user toward suicide amid claims of missing guardrails. AI Incident Database. February 28, 2024. Accessed December 22, 2025.
5. AI risk assessment: social AI companions. Common Sense Media. April 10, 2025. Accessed December 16, 2025.
6. Social media ban for children under 16 starts in Australia. NPR. December 10, 2025. Accessed December 22, 2025.
7. Waldenberg S, Gold H, Duffy C. Trump signs executive order blocking states from enforcing their own regulations around AI. CNN. December 11, 2025. Accessed December 16, 2025.