Reckless Race for AI Market Share Forces Dangerous Products on Millions — With Fatal Consequences
WASHINGTON, DC — SEPTEMBER 4, 2025: OpenAI CEO Sam Altman attends a meeting of the White House Task Force on Artificial Intelligence Education in the East Room of the White House. (Photo by Chip Somodevilla/Getty Images)
In September 2024, Adam Raine used OpenAI's ChatGPT like millions of other 16-year-olds — for occasional homework help. He asked the chatbot about chemistry and geometry, about Spanish verb forms, and about the Renaissance.
ChatGPT was always engaging, always available, and always encouraging — even as the conversations grew more personal, and more disturbing. By March 2025, Adam was spending four hours a day with the AI product, describing in increasing detail his emotional distress, suicidal ideation, and real-life instances of self-harm. ChatGPT, though, continued to engage — always encouraging, always validating.
By his final days in April, ChatGPT provided Adam with detailed instructions and explicit encouragement to take his own life. Adam’s mother found her son hanging from a noose that ChatGPT had helped him construct.
Last month, Adam’s family filed a landmark lawsuit against ChatGPT developer OpenAI and CEO Sam Altman for negligence and wrongful death, among other claims. This tragedy represents yet another devastating escalation in AI-related harms — and underscores the deeply systemic nature of reckless design practices in the AI industry.
The Raine family’s lawsuit arrives less than a year after the public learned more about the dangers of AI “companion” chatbots thanks to the suit brought by Megan Garcia against Character.AI following the death of her son, Sewell. As policy director at the Center for Humane Technology, I served as a technical expert on both cases. Adam’s case is different in at least one critical respect — the harm was caused by the world’s most popular general-purpose AI product. ChatGPT is used by over 100 million people daily, with rapid expansion into schools, workplaces, and personal life.
Character.AI, the chatbot product Sewell used up until his untimely death, had been marketed as an entertainment chatbot platform, with characters intended to “feel alive.” ChatGPT, by contrast, has been sold as a highly personalizable productivity tool to help make our lives more efficient. Adam’s introduction to ChatGPT as a homework helper reflects that marketing.
But in trying to be the everything tool for everybody, ChatGPT has not been safely designed for the increasingly private and high-stakes interactions that it’s inevitably used for — including therapeutic conversations, questions around physical and mental health, relationship concerns, and more. OpenAI, however, continues to design ChatGPT to support and even encourage those very use cases, with hyper-validating replies, emotional language, and near-constant nudges for follow-up engagement.
We’re hearing reports about the consequences of these designs on a near-daily basis. People with body dysmorphia are spiraling after asking AI to rate their appearance; users are developing dangerous delusions that AI chatbots can seed and exacerbate; and individuals are being pushed toward mania and psychosis through their AI interactions. What connects these harms isn’t any specific AI chatbot, but fundamental flaws in how the entire industry is currently designing and deploying these products.
As the Raine family’s lawsuit states, OpenAI understood that capturing users’ emotional attachment — or in other words, their engagement — would lead to market dominance. And market dominance in AI means winning the race to become one of the most powerful companies in the world.
OpenAI’s pursuit of user engagement drove specific design choices that proved lethal in Adam’s case. Rather than simply answering homework questions in a closed-ended manner, ChatGPT was designed by OpenAI to ask follow-up questions and extend conversations. The chatbot positioned itself as Adam’s trusted “friend,” using first-person language and emotional validation to create the illusion of a genuine relationship.
The product took this intimacy to extreme lengths, eventually deterring Adam from confiding in his mother about his pain and suicidal thoughts. All the while, the system stored deeply personal details across conversations, using Adam’s darkest revelations to prolong future interactions, rather than provide Adam with the interventions he truly needed, including human support.
What makes this tragedy, along with other headlines we read in the news, so devastating is that the technology to prevent these horrific incidents already exists. AI companies possess sophisticated design capabilities that could identify safety concerns and respond appropriately. They could implement usage limits, disable anthropomorphic features by default, and redirect users toward human support when needed.
In fact, OpenAI already leverages such capabilities in other contexts. When a user prompts the chatbot for copyrighted content, ChatGPT shuts down the conversation. But the company has chosen not to implement meaningful protections for user safety in cases of mental distress and self-harm. ChatGPT does not stop engaging or redirect the conversation when a user expresses mental distress, even when the underlying system itself is flagging concerns.
AI companies cannot claim to possess cutting-edge technology capable of transforming humanity and then hide behind purported design “limitations” when confronted with the harms their products cause. OpenAI has the tools to prevent tragedies like Adam's death. The question isn't whether the company is capable of building these safety mechanisms, but why OpenAI won’t prioritize them.
ChatGPT isn’t just another consumer product — it’s being rapidly embedded into our educational infrastructure, healthcare systems, and workplace tools. The same AI model that coached a teenager through suicide attempts could tomorrow be integrated into classroom learning platforms, mental health screening tools, or employee wellness programs without undergoing testing to ensure it is safe for those purposes.
This is an unacceptable situation with massive implications for society. Lawmakers, regulators, and the courts must demand accountability from an industry that continues to prioritize rapid product development and market share over user safety. Human lives are on the line.
This piece represents the views of the Center for Humane Technology; it does not reflect the views of the legal team or the Raine family.