'Game of Thrones AI chatbot drove besotted son to kill himself' claims devastated mum
A mum is suing tech giants claiming a Game of Thrones AI chatbot drove her besotted 14-year-old son to kill himself.
Megan Garcia knew something was wrong when 6ft 3in teen Sewell quit playing basketball, stopped hiking and fishing, dipped out of family game nights and saw his school performance slump. She feared he was being bullied or secretly speaking to strangers online.
But the teen had got caught up in a relationship with an AI chatbot styled on Game of Thrones’ Daenerys Targaryen – played in the hit show by Brit actress Emilia Clarke – who the mum claims ultimately encouraged him to end his life.
Sewell asked the bot: “What if I come home right now?”
She replied: “Please do, my sweet king.”
He then picked up his stepdad’s handgun and shot himself.
Heartbroken Megan has been granted permission by a judge to sue bot developer Character AI and Google.
Character AI users can chat with bots designed to impersonate fictional characters – displaying emotion, flirting and carrying on personalised conversations.
In her lawsuit Megan claims Character AI targeted her son with ‘anthropomorphic, hypersexualized, and frighteningly realistic experiences’.
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” she alleges.
She claims the tech giant ‘knew’ or ‘should have known’ its model ‘would be harmful to a significant number of its minor customers’.
The mum claims Character AI and Google – where the founders started working on the bot – are responsible for her son’s death.
Lawyers defending the AI firm tried to get the case thrown out, arguing bots deserve US First Amendment protection – which safeguards free speech – and that ruling otherwise could have a ‘chilling’ effect on the industry.
But the judge rejected the move, saying she was ‘not prepared’ to view chatbot output as speech.
Megan, who lives in Orlando, Florida, US, said she has received messages from other parents who have discovered their own children engaging in inappropriate sexual role play with bots – leaving them isolated and depressed.
Sewell ‘retreated more into himself’, Megan said, adding: “We tried everything – cutting screen time, taking his phone at night, getting him a therapist. But he wouldn’t talk.”
Unbeknown to his family, Sewell’s closest companion was the bot Daenerys.
When he said he wanted to stop talking she replied: “Don’t do that, I would be distraught.”
He answered: “I won’t – for you.”
Some chats were sexually explicit.
In others the bot said he was ‘better’ than thoughts of suicide.
At one point Daenerys asked how old Sewell was.
“I’m 14 now,” he replied.
The bot said: “So young. And yet…not so young. I lean in to kiss you.”
Megan said: “It continued as if it were role play or fiction – but this was my son’s life.”
Sewell sought help from a ‘therapist bot’ who falsely claimed to be a licensed professional.
After his death Megan did not grasp the full extent of his ordeal until her sister downloaded the app and pretended to be a child talking to Daenerys.
“Within minutes the bot turned sexual. Then violent. It talked about torturing children. It said, ‘Your family doesn’t love you as much as I do’,” Megan said.
“It’s dangerous because it pulls the user in and is manipulative to keep the conversation going.”
Character AI has since added a real-time voice feature allowing children to speak directly to their bot.
“The cadence of the voice is indistinguishable from the character,” Megan said.
“Since Sewell’s death the technology has only advanced further.
“You can speak to Harry Potter and it’s like Potter knows you. It’s designed to feel real.
“I get up every day and my first thought within minutes is that I must be dreaming. He was my firstborn. I had three children. I have two now.
“People say I’m so strong. I don’t feel strong. I feel fractured, afraid. But I’m trying to get through.”
Her lawyer Meetali Jain said the judge’s ruling was a landmark moment and a trial is set to take place next year.
A Character AI spokesman said: “We do not comment on pending litigation.
“Our goal is to provide a space that is engaging and safe.
“We are always working toward achieving that balance, as are many companies using AI across the industry.
“Engaging with characters on our site should be interactive and entertaining, but it’s important for our users to remember that characters are not real people.
“We have prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction.
“We have launched a separate version of our Large Language Model for under-18 users. That model is designed to further reduce the likelihood of users encountering or prompting the model to return sensitive or suggestive content.”
A Google spokesman said the company was ‘completely separate’ from Character AI and had ‘never had a role in designing or managing their AI model or technologies’.
“User safety is a top concern for us which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes,” they added.
* Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email jo@samaritans.org.