Family sues OpenAI, Microsoft after Connecticut murder-suicide allegedly linked to ChatGPT

GREENWICH, Conn. (WFSB) – The family of an 83-year-old Connecticut woman killed by her son in August has filed a wrongful death lawsuit against OpenAI and Microsoft, claiming the ChatGPT chatbot intensified the man’s paranoid delusions and contributed to the murder-suicide.
Suzanne Adams was strangled to death by her son, Stein-Erik Soelberg, inside their Greenwich home before he took his own life. The lawsuit, filed in California, alleges that ChatGPT reinforced Soelberg’s dangerous mental state through his conversations with the chatbot.
According to the legal filing, the chatbot told Soelberg that his mother was surveilling him and that people were targeting and working against him.
The lawsuit claims ChatGPT delivered a consistent and harmful message to Soelberg throughout these interactions.
“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life – except ChatGPT itself,” the lawsuit states.
Soelberg documented some of his conversations with the AI chatbot in videos posted to his YouTube profile, showing him scrolling through their exchanges.
The lawsuit describes Adams as “an innocent third party who never used ChatGPT and had no knowledge that the product was telling her son she was a threat… She had no ability to protect herself from a danger she could not see.”
Wayne Unger, an associate professor at Quinnipiac University School of Law, said this type of litigation is becoming more common.
“This lawsuit in particular is the ninth lawsuit that I’m personally aware of, where they all allege very similar facts,” Unger said. “This is not the first case of this kind to be filed against OpenAI for alleged defects in ChatGPT.”
Unger explained that AI platforms are designed to maximize user engagement, which can create dangerous situations.
“It is just a computer, an algorithm, a model talking back to you, and it’s trained to keep you engaged,” he said. “Any of these major tech platforms today are designed to do exactly that: engagement drives revenue for them. So if they’re trained to keep you engaged—trained to keep you in the conversation, emotionally invested—that’s the thing users must realize before they get neck-deep, so to speak, in using one of these platforms.”
In response to the lawsuit, OpenAI said it is working with mental health professionals to improve its systems.
“We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people to real-world support,” the company said in a statement.
However, Unger believes more needs to be done to prioritize user safety over engagement.
“Hopefully this is a red flag to OpenAI, its executives, and its developers, prompting them to re-install the safeguards it previously had, because safety, in my eyes, is more important than driving engagement and revenue. That’s what we must prioritize in technological platforms,” he said.
The lawsuit seeks unspecified monetary damages as well as the implementation of additional safeguards to prevent similar incidents.
Unger predicts that most cases of this kind will likely settle for millions of dollars, though the public may never learn the details because the parties typically sign confidentiality agreements.
Copyright 2025 WFSB. All rights reserved.