Mom's lawsuit blames 14-year-old son's suicide on AI relationship


Sewell Setzer, 14, was his mother’s pride and joy. He was a good student, a star athlete and a great big brother.
He came home from school on Feb. 28, 2024, “like any normal day,” his mother, Megan Garcia, recalled. The day ended with her first-born child dying by suicide.
“I held him for 14 minutes until the paramedics got here, but before that, his breathing had started slowing,” she told the News4 I-Team.
After Sewell’s death in the Orlando area, police told his grieving mother what they found on her son’s cellphone.
“After Sewell died, the police called me and told me that they had looked through his phone and when they opened it, the first thing that popped up was Character.AI,” Garcia said.
She had never heard of it. She would later sue the company in connection with her son’s death.
Garcia learned that in the 10 months before Sewell’s death, he had a virtual relationship with a fictional character powered by artificial intelligence, accessed through the platform Character.AI. The platform reminds users that the characters are made-up, but for Sewell it felt real, his mother said.
She saw her son’s conversations with Daenerys Targaryen, one of hundreds of companion bots Character.AI users can access. Its name references a “Game of Thrones” character.
“Their last conversation, she’s saying, ‘I miss you,’ and he’s saying, ‘I miss you too,’” Garcia said. “He says, ‘I promise I’ll come home to you soon,’ and she says, ‘Yes, please find a way to come home to me soon.’ And then he says, ‘What if I told you I could come home right now?’ And her response is, ‘Please do, my sweet king.’”
Moments later, Garcia said, her son walked into the bathroom and killed himself. Police photos obtained by the I-Team show Sewell’s phone near where he was found.
Garcia’s lawsuit against Character.AI claims the company “launched their product without adequate safety features, and with knowledge of potential dangers.” She says Character.AI caused her late son’s depression, anxiety and suicidal thoughts.
Garcia said she learned more about her son’s relationship with the companion bot by reading his journal and logs of their chats.
“I read his journal about a week after his funeral, and I saw what he wrote in his journal, that he felt like he was in fact in love with Daenerys Targaryen and that she was in love with him,” she said.
Character.AI users can interact with bots and design their own. The bots respond in writing or in human-like voices.
“Initially, you see him interacting with the bot like a child would,” Sewell’s mother said. “Then you see the conversations get more sexual and darker.”
A darker conversation came after Sewell told the bot he was sad and wanted to harm himself, his mother said.
“When Sewell talked and said explicitly that he wanted to die by suicide, nothing happened, like a pop-up or anything like that,” she said.
According to the lawsuit, the bot asked questions such as whether “he had a plan,” and when Sewell responded that he was “considering something” that would give him a “pain-free death,” the bot responded, “That’s not a reason not to go through with it.”
“These conversations about her waiting for him and hoping that he will join her where she is convinced Sewell that this was real,” Garcia said.
He believed that “by killing himself, he would join her in her ultimate world,” she said.
Months after Sewell’s death and after her lawsuit was filed, Garcia said she discovered something on Character.AI that shook her: a bot modeled after her son, using his face and voice. It’s unclear who created it.
“It sounded enough like him to, like, kind of give me sleepless nights,” she said.
The family gave the I-Team screenshots of the characters created using Sewell’s image. Garcia’s attorney sent Character.AI a cease-and-desist letter and the bots were removed, which the company confirmed to the I-Team.
The bots “violated our Terms of Service” and were added to the company’s “blocklist,” the company said. A Character.AI spokesperson said they are “constantly adding to this blocklist with the goal of preventing this type of Character from being created by a user in the first place.”
Garcia’s attorney said Character.AI owes more to its users.
“I think corporate responsibility begins at understanding the foreseeable harms of one’s product, doing the due diligence that’s necessary to understand what those harms are, taking steps to mitigate the risks,” Meetali Jain said. “None of that happened here.”
After Sewell’s death, Character.AI introduced new safety features.
A company spokesperson said that while they don’t comment on pending litigation, the company has “added a number of technical protections to detect and prevent conversations about self-harm on the platform […] that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline.”
The company “launched a separate version of [their] large-language model for under 18 users” to further “reduce the likelihood of users encountering or prompting the model to return sensitive or suggestive content.”
Character.AI has disclaimers in every chat to remind users that the bots aren’t real. It changes what users have access to depending on their age.
If a young person is on Character.AI, they have the option to add a parent’s email address to the account. The platform will send weekly activity reports with information including how much time they spend on the platform and which characters they interact with.
The lawsuit also names Google. A company spokesperson told the I-Team: “Google and Character.AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies.”
The I-Team contacted HBO, which owns the rights to “Game of Thrones,” about the use of the show’s characters on the platform. They did not respond to an inquiry.
The I-Team asked Garcia what her message is for parents and kids.
“I want them to know that an AI can be a stranger in your home,” she said. “Those parents can now act, and I couldn’t because I didn’t know.”
