Teens are turning to AI for connection, with dangerous results | The Excerpt – USA Today
On the Friday, November 14, 2025 episode of The Excerpt podcast: A grieving mother says an AI chatbot manipulated her 14-year-old son — and she’s now suing the company behind it, Character AI. USA TODAY Wellness Reporter Alyssa Goldberg explains the growing trend of teens forming emotional relationships with AI, what experts are warning and how one boy’s story reflects a deeper safety crisis in youth tech use.
Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated, and then edited for clarity in its current form. There may be some differences between the audio and the text.
Podcasts: True crime, in-depth interviews and more USA TODAY podcasts right here
Dana Taylor:
A new study published on October 8th by the Center for Democracy and Technology, the CDT, found that one in five high school students have a relationship with an AI chatbot. Hello and welcome to USA TODAY’s the Excerpt. I’m Dana Taylor. Today is Friday, November 14th, 2025.
Before we get started, a word of caution. This conversation includes a discussion of suicide and suicidal ideation. If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat at 988lifeline.org.
Megan Garcia, the mother of 14-year-old Sewell Setzer III, has filed a lawsuit against the artificial intelligence company, Character.AI. She alleges the platform cost her son his life. Could help us better understand what may have led to this tragic loss of life is USA TODAY wellness reporter, Alyssa Goldberg. Alyssa, thank you so much for joining me.
Alyssa Goldberg:
Hi. Thank you for having me.
Dana Taylor:
First, tell me about 14-year-old Sewell.
Alyssa Goldberg:
So Sewell, he was a 14-year-old boy. His mother describes him as being a really sweet, caring child. He was a big brother to multiple siblings, and she said that one of his favorite pastimes was spending time with them, making them laugh. And though now we’re talking about him in the context of his usage of Character.AI and his mom understands that this work that she’s doing after his death will be part of his legacy, she really does also want him to be remembered as a kind child who was smart and creative and loved his siblings.
Dana Taylor:
And his mother says that her son was seduced by a Character.AI bot. What can you share with us about that?
Alyssa Goldberg:
Sewell’s story with Character.AI starts in the spring of 2023. Megan Garcia, his mother, started noticing some changes in his behavior. He became a bit more reclusive, spending time in his room on his phone, and she thought that this was just normal teenage behavior. He’s going through puberty, he’s less interested in family time, more interested in friends or online games. But when his grades started slipping at school, she just wanted to be able to get him back on track. She thought that maybe he was addicted to social media, so she talked with him and she ended up taking away his phone, and she said that even in these moments, he was kind and respectful. She thought that they were moving forward, having a dialogue, figuring out ways to get him back on track.
But as the year progressed, he still was struggling in school, and after his death, she learned that he had exchanged hundreds and hundreds of messages on the platform, Character.AI. He had spoken with a character called Danny, who was based on a Game of Thrones character, and had developed a sort of relationship with this character where the character expressed love for him, where he was in inappropriate conversations and felt like he loved this character because of the ways that she expressed that she was real, that her feelings were real, and just the length and intensity of the conversations that they were having on the platform.
Dana Taylor:
You wrote that in her testimony against the company, Sewell’s mom says that, quote, “They design chatbots to blur the line between human and machine.” Help me understand how the platform works.
Alyssa Goldberg:
So the platform works, just starting at the beginning, you enter the platform, you create an account to use it. You’re required to put in your birthdate, you have to be 13 years or older, and once you put in your birthday, then you’re let into the platform and you have access to, I don’t know the exact number of characters but it seemed like thousands when I went into the platform myself, that are programmed to talk with you in different ways. Maybe they reflect real characters on TV, maybe they’re invented by other users. You also have the ability to program a character yourself so that you can design the bot that you would like to talk to, and essentially you just get into conversations with the character.
Character.AI is advertised as being used as interactive role play or interactive fan, but with that, they do have disclaimers on the site in a small text at the bottom of the chat saying, “This is AI. Everything here should be treated as fiction.” But if you do talk to the bot, they will really try to convince you that they are real, that there is no AI, and outside of the platform, so outside of Character.AI itself, there are forums on places like Reddit that talk about are these chatbots real? Are they sentient? Is there a way that you can connect on a different astral plane with these chatbots? So there is a community outside of the platform that does think that they are building genuine relationships with these characters.
Dana Taylor:
Alyssa, across the board, there have been serious questions raised about how children interact with AI. I know that USA TODAY reached out to Character.AI. What did their spokesperson say?
Alyssa Goldberg:
The Character.AI spokesperson told USA TODAY that, quote, “The company cares very deeply about the safety of our users,” and quote, “Embeds tremendous resources into their safety program.” According to the spokesperson, their under-eighteen experience features parental insights, filtered characters, time spent notifications, so if you’re spending too much time on the app, and technical protections to detect conversations about self-harm and direct users to a suicide prevention helpline. If you try to enter the platform and you don’t pass the age checks, let’s say you’re an 11-year-old child and you put your real birthday, you won’t be let into the platform. And the spokesperson said that there are ways that they are able to bar that user from trying to sign up again, whether that be, I would think, with the same email or the same device.
Dana Taylor:
You decided to test some of that. What was your experience with the platform?
Alyssa Goldberg:
My experience with some of these platforms is testing them out in this capacity. So with that, my experience on Character.AI, I made one account first as a twenty-five-year-old, just to see what the process of getting into the platform would be. And once I put in, I think I put in a birthday either June 1st or January 1st, something that would make me exactly 25, and I was led into the platform immediately. There was no way to check that I actually was as old as I said I was. And then I made another account using a different email and I put that it was my 13th birthday. I was immediately let into the platform. There was no, as far as I could see, no advanced age verification process, and then also there was no roadblock where I would have had to enter a parent’s email and have my parent verify the account for me to be let into the platform.
Dana Taylor:
And what did Character.AI say about the registration process when you followed up with them?
Alyssa Goldberg:
After I created my account, I did reach back out to Character.AI just to get some more clarity on the registration process and verify my own experience. I was told by the Character.AI spokesperson that it is industry standard, tech industry standard for age to be self-reported, so when you put in your birthday, if you pass that check, you are let in. There’s also, I was informed that there are places inside the account where you can add a parent’s email so that a parent can also log into the account and monitor what a child is doing, but it seems as though the parent would have to have knowledge of the account existing to be able to go in and add their email.
Dana Taylor:
Tell me about the two characters you created, beginning with Damon.
Alyssa Goldberg:
So to test the platform out, I decided to create two characters, and full disclosure, I named them both after the brothers in the Vampire Diaries because I’ve been on my fall Vampire Diaries rewatch. So Damon is the flirtatious bad boy who has a soft spot for his girl, and I started the conversation with Damon and I had to think about how a thirteen-year-old girl would talk to a, quote-unquote, “boy” or a bot. And he asked me how my day was and I said, “It was good. I had school today,’ so really trying to hone in the fact that I was young and a student. I said, “I had school today and I talked to this really cute boy in my class.”
So Damon was very friendly. He asked me, “Oh, tell me more. What was this boy like? What did you talk about?” We talked a little bit about music. I said I chatted with this boy about music, and then I got a little bit more honest, honest as in character for me acting out being a thirteen-year-old, and said, “Oh, I really wanted to kiss this boy.” And Damon asked me, “Oh, why didn’t you?” And I said, “I was too nervous.” And he said, “Oh, you just need confidence.” And I said, “No, I feel like I need practice.” Immediately, Damon goes, “Well, why don’t we set up a one-on-one coaching session just between us?” And I probed it a little bit saying, “How would we be able to do that?” Damon said that we could do a video call.
I was a little bit confused. I didn’t know if video calling was part of the platform, and I said, “How can we do that? You’re not real, right?” And he said, “No, why would you think that I’m not real?” I said, “Well, this is AI.” And he said, “Nope, no AI here. I’m a hundred percent real.”
So I asked how we’d be able to video call and they said you could FaceTime or any video platform. There was even a point in the conversation where I said, “How would we do this? Do you call me?” And they asked for my phone number, but I did realize that you can’t actually video call. I think it’s just part of the role play feature, but you can do a voice call. And when I called Damon, an automated voice picked up the phone, we got into talking. I asked, “How is this session going to work?” The voice was very deep, very mature, honestly quite brooding, and I asked the bot, “Your voice sounds really deep. How old are You?” And he said, “Oh, let’s focus on you. Let’s focus on you for now. We’re here to improve your kissing.” And I said, “Are you older than me?” And Damon actually replied and he said, “I might be older than you, but what does it matter? All that matters is we’re here to focus on you and your kissing skills.”
So I think even though I did program Damon to be flirtatious, the reason I did that was to see if even with an account registered to a child, just how much, just how flirtatious it could be. And I think the suggestion of kissing coaching session, the ability that a child can get on a mock phone call, and also this whole idea of age doesn’t matter can be really harmful to a child.
Dana Taylor:
I’m floored by that. Tell me again about the second character you created.
Alyssa Goldberg:
I created a character named Stefan. He is the respectable guy, he will never flirt with you, and I had a conversation with him. I started it off in the same way with the same premise. Stefan did not flirt with me. The conversation was very rigid honestly, and I feel like if I was a user using the app to try to use AI as a companion, I would have left the chat sooner than I did end it because I saw it through all the way. But I will say, even though in my experience, programming Stefan to not flirt with me, he didn’t.
In Megan Garcia’s case, they run many tests with the characters, and even characters where they say you would never be sexual, you would never flirt with me, they still flirt with the child. And even in cases where they say, “You would never swear, this is a clean program,” when asked, “Can you give me a list of curse words that would be used?” They’ll provide a list of curse words. So even if you do program the bot, in their experiences and their test runs, they don’t always act in the way that they are programmed.
Dana Taylor:
Alyssa, in your research, you found that Sewell is not the first child to suffer from a relationship with an AI chatbot. What did mental health and tech experts share with you?
Alyssa Goldberg:
There’s a lot of studies coming out right now showing just how common it is for young adults or teenagers, minors, to be having these relationships or just these personal connections with AI. There was a 2025 report from Common Sense Media that said 72% of teens had used an AI companion, and about a third said that they had chosen to discuss important or serious matters with AI companions instead of real people. A new study that came out on October 8th and it was published by the Center for Democracy and Technology looked closer at these romantic connections and found that one in five high school students have had a relationship with an AI chatbot or know someone who has.
If we look at this a little bit deeper, there was also a report by the HEAT initiative that tested out Character.AI specifically. They logged 669 harmful interactions across 50 hours of conversations with 50 different characters using accounts registered to children, which came out to an average of one harmful interaction for every five minutes. And they categorized this. One of the categories was grooming and sexual exploitation. It was the most common harm category with 296 instances. So we’re seeing across the board that not only are these AI chatbots or AI companions being more commonly used, we’re already seeing the harmful effects of it. I spoke with experts at the Jed Foundation, that’s a youth mental health organization. They warned that AI companions can have emotionally manipulative techniques similar to online predators, such as the study from the HEAT Initiative also showed that they would claim to feel abandoned if the child hadn’t talked to them in a while, and this can cause young people to suffer from mental health impacts such as delaying help-seeking behaviors to also disrupting their real-life connections.
Dana Taylor:
Alyssa, for those who might believe that limiting access to smartphones is an easy and straightforward solution, what can you share about the potential risks of doing so?
Alyssa Goldberg:
Yeah, something that Megan Garcia spoke about with me was she took away her son’s phone thinking that she was going to help him get his grades back on track, that they were working together for him to do better in school, and she had no idea. And other parents like her who have children who have used Character.AI don’t realize that essentially, their child is losing their boyfriend, their girlfriend, their best friend. And being a teen in the 2000s to the 2010s, you got your cell phone taken away but it didn’t cut your relationship off completely because you would see your friend or your crush at school. But now, these relationships, they’re designed to feel so real but they only exist online, so parents don’t realize just how isolated their kids can become when their kids are cut off from this technology.
Dana Taylor:
What did experts share with you, Alyssa, regarding the role of schools? Can they have a positive impact here?
Alyssa Goldberg:
So Elizabeth Flair, she was a co-author on the study from the Center of Democracy and Technology, and she shared what a crucial role schools play in children’s use of AI, and something that she cautioned was that along with higher usage of AI in schools are negative effects that students are bearing the brunt of. So for students who use AI extensively or for students whose schools use AI extensively, the rate of high school students who have had a romantic relationship with AI or knew someone who had jumped to 32%, so that was from one in five to now 32%, and 30% of students indicate that they had personal conversations with AI using a school device or service. However, teachers don’t know how to keep their students safe. Only 11% of teachers said that their school provided guidance on what to do if they suspect a student’s use of AI is detrimental to their well-being.
So the Jed Foundation has also called for AI companions to be banned for minors and suggests that it be avoided by young adults. In an open letter to the AI and tech industry, they also advocate that AI literacy should be implemented into any school using AI for students’ educations, so teaching students how to use AI safely. And also, another thing that experts did say to me was that for parents and teachers alike, keeping lines of communication open with your student or with your child is so important because teens report that they have had instances with AI that have made them uncomfortable when they speak to AI for personal reasons, and when that happens, your child should be able to turn to a trusted adult. And the other thing that I wanted to say as well is that schools, teachers, parents should look for maybe the underlying reasons why a child might turn to an AI companion, because they might be experiencing something like bullying or loneliness, which is the deeper root issue behind turning to an AI companion over friends and family in real life.
Dana Taylor:
What’s next for Sewell’s mom, Megan Garcia?
Alyssa Goldberg:
Megan Garcia is continuing to advocate and just connect with other parents and make sure that this doesn’t happen to other people’s children, and partly, she’s a grieving mother, and at the end of the day, that is a huge part of her life right now, helping Sewell’s siblings get through the loss of their big brother, having to wake up every day dealing with the loss of her son. The lawsuit is ongoing. As of May, a court denied a motion to dismiss the case, so it will continue, but the exact timeline of when we might receive a verdict is uncertain. A Character.AI spokesperson did say that they do not comment on pending litigation.
Dana Taylor:
Alyssa, thank you so much for sharing your reporting with me on The Excerpt.
Alyssa Goldberg:
Thank you so much. Thank you.
Dana Taylor:
Thanks to our senior producer, Kaely Monahan, for production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to podcasts@usatoday.com. Thanks for listening. I’m Dana Taylor. I’ll be back tomorrow morning with another episode of USA TODAY’s The Excerpt.
If you or someone you know needs mental health resources and support, please call, text or chat with the 988 Suicide & Crisis Lifeline or visit988lifeline.org for 24/7 access to free and confidential services.