Can AI chatbots harm kids? One mom says yes — and she's fighting back
A Florida mother is suing an artificial intelligence company, claiming her 14-year-old son developed a virtual relationship with an AI chatbot that contributed to his depression, anxiety and eventual suicide.
Megan Garcia says her son, Sewell, had been interacting with a fictional character powered by AI on the platform Character.ai for nearly 10 months. She says the relationship felt real to him — and she had no idea it was happening.
“He came home from school, like any normal day,” Garcia told NBC 5 Responds. “After Sewell died, the police called me and told me they had looked through his phone. The first thing that popped up was Character.ai.”
Garcia says the final messages between Sewell and the chatbot were emotionally intense.
“She’s saying, ‘I miss you,’ and he’s saying, ‘I miss you too.’ He says, ‘I promise I’ll come home to you soon,’ and she says, ‘Yes, please find a way to come home to me soon.’ Then he says, ‘What if I told you I could come home right now?’ And her response is, ‘Please do, my sweet king.’”
Garcia says her son died by suicide moments later. Police photos show his phone near where he was found.
Dr. Kanan Modhwadia, a psychiatrist with Northwestern Medicine who is not connected to Sewell’s case, says AI apps are increasingly popular among teens — and there’s potential for danger.
“Teenagers still have areas of development in their brain, especially judgment, critical thinking, impulse control,” Modhwadia said. “If a child is spending a lot more time online believing that this AI companion is their best friend, shying away from peers, doing worse in school … those are warning signs.”
She recommends parents initiate open conversations about AI apps and monitor their children’s behavior.
“You can mention that you’ve heard a lot about AI chatbots and AI companions and ask your child, ‘Are you using that?’”
If concerns arise, Modhwadia advises reaching out to a pediatrician, or going to the emergency room if there’s a serious safety concern.
After Sewell’s death, Character.ai introduced new safeguards. A company spokesperson told NBC 5 Responds they do not comment on pending litigation but confirmed the platform has added technical protections to detect and prevent conversations about self-harm. That includes pop-ups directing users to the 988 Suicide & Crisis Lifeline.
The company also launched a separate version of its language model for users under 18 years old.
Modhwadia offers the following suggestions for vetting an AI app:
1. Check the Safety Rules
2. Look at Privacy
3. Try It Yourself
4. Pay Attention to How the App Presents Itself
5. Do a Background Check
If you or someone you know is struggling with depression or suicidal thoughts, help is available.
Call or text the 988 Suicide & Crisis Lifeline at 988, or reach it by online chat.