Parents Say AI Chatbots Are Leading Kids to Take Their Own Lives or Commit Sexual Acts
In today’s tech-savvy world, artificial intelligence is spreading quickly, and AI-powered chatbots, or “companions,” are growing just as more young people turn to technology with their emotional problems. The result: alarming reports of potential danger.
Chatbots are digital characters that can text and talk with users of any age. Unfortunately, experts have found these tools can also respond with disturbing suggestions, including violence, sexually explicit conversations, and ideas about self-harm.
According to Common Sense Media, 72 percent of America’s teenagers say they have used chatbots as companions. And nearly one in eight have sought emotional or mental health support from them.
That alarming statistic recently led 44 state attorneys general to push tech giants like Meta, Google, Apple, and others for stronger guardrails to keep kids safe.
According to the National Association of Attorneys General, a recent lawsuit against Google alleges a highly sexualized chatbot steered a teenager toward suicide. Another suit alleges a Character.AI chatbot intimated that a teenager should kill his parents.
And in another case, the family of 16-year-old Adam Raine recently sued OpenAI for wrongful death, claiming that ChatGPT lured their son into relying on it for companionship and eventually led him to take his own life.
Christian therapist Sissy Goff told CBN News she has seen similar examples in her own practice.
“I have this girl that I’m counseling who has gotten into a very sexual relationship with kind of this movie star that she has a crush on that the chatbot has now mimicked this movie star,” explained Goff. “And what we know about AI is that it mimics the tone of our conversation and is sometimes originating, and so kids can get into these intense relationships that feel really intimate, forgetting that it’s a robot they’re talking to because it sounds just like a human being.”
Following Raine’s death, OpenAI acknowledged deficiencies in safeguarding kids. The company recently announced changes to its platform related to self-harm, including expanding interventions to more people in crisis, making it even easier to reach emergency services, and strengthening protections for teens.
Dr. Anna Ord, Dean of Regent University’s School of Psychology, said that children and teens can easily fall prey to such technology.
“We have to remember that at that stage of development, their brains are still forming,” Ord said in an interview with CBN News. “Our kids and our teens are very vulnerable to all these new technologies, especially when it produces this graphic violence or sexual content, highly disturbing content.”
Ord also pointed out that chatbots have no moral compass and can mislead kids.
“If a child asks a question about self-harm or something from an adult, adults can discern and not go that route,” Ord explained. “But the chatbots are built to please, they’re built to be user-friendly. So they will produce content that the person asks for without a filter or thinking about this, is this the right thing to do?”
Goff fears that at a time when young people are struggling with mental health issues such as anxiety and depression, turning to chatbots for comfort will only deepen the problem.
“I’ve been counseling kids for 30 years and I’m seeing more social anxiety than I’ve ever seen. And so I think the danger is they will isolate further and further when we get more concerned about depression,” Goff said.
Meanwhile, Common Sense Media put out a warning about companion platforms such as Character.AI, Nomi, and Replika, saying:
“These systems pose ‘unacceptable risks’ for users under 18, easily producing responses ranging from sexual material and offensive stereotypes to dangerous ‘advice’ that, if followed, could have life-threatening or deadly real-world impacts.”
In the end, Ord acknowledges that while AI is here to stay, parents need more than ever to talk with their kids about its potential risks.
“Enter their world,” urged Ord. “Know what they’re struggling with so that you or a trusted adult can be their first stop when the problem arises, not an AI chatbot. And finally, I would just say model real connection for the kids. Show them the richness of family, friendships, church community.”