ChatGPT Users Showing Signs of Distress? OpenAI Shares Alarming Stats – KnowTechie

OpenAI is adding new safeguards, like an age-detection system to catch kids using ChatGPT and stronger parental controls.
by Ronil
In a rare peek behind the curtain, OpenAI dropped some sobering numbers this week: out of ChatGPT’s whopping 800 million weekly users, roughly one million people are having conversations that show signs of suicidal thoughts or planning.
That’s about 0.15% of users, a tiny fraction statistically, but a pretty huge number of humans in absolute terms.
And that’s not all: OpenAI says a similar number of people appear emotionally attached to ChatGPT, with “hundreds of thousands” showing signs of psychosis or mania in their chats.
The company insists these cases are “extremely rare,” but when your user base is bigger than most countries, even “rare” gets scary fast.
OpenAI shared the data as part of a wider update on how it’s trying to make ChatGPT respond better to mental health crises.
The company says it worked with over 170 clinicians to teach the AI how to handle these conversations more “appropriately and consistently.” That’s not a purely academic concern.
In recent months, headlines have spotlighted tragic cases, like a 16-year-old whose parents are now suing OpenAI after their son shared suicidal thoughts with ChatGPT before taking his life.
Meanwhile, state attorneys general in California and Delaware are warning the company to do more to protect young users.
CEO Sam Altman recently took to X to claim OpenAI has “mitigated serious mental health issues,” though the new data suggests that “mitigated” might be doing some heavy lifting.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have…”
The company says its new GPT-5 model now gives “desirable” responses to mental health issues about 65% more often than before and hits a 91% compliance rate in suicide-related scenarios, up from 77%.
OpenAI’s also adding new safeguards, like an age-detection system to catch kids using ChatGPT and stronger parental controls.
Still, the company admits older, less-safe models like GPT-4o are still widely available.
So while GPT-5 may be getting smarter and safer, the question remains: can any AI really handle the weight of human despair, or should it even try?
Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
Copyright © 2025 KnowTechie LLC