Signs of psychosis appear in over 500K ChatGPT users

A sobering disclosure from OpenAI has exposed a mental health crisis unfolding across its platform. More than half a million ChatGPT users show warning signs of severe psychological distress each week, raising serious questions about artificial intelligence’s impact on vulnerable populations.
The company’s recent transparency report reveals that approximately 560,000 users, or 0.07 percent of its 800 million weekly active users, display indicators consistent with mania, psychosis or acute suicidal ideation. An additional 1.2 million users send messages containing explicit references to suicide planning or intent, pushing the total number of users flagged for concerning interactions past 1.7 million each week.
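For readers who want to see how these figures fit together, a quick back-of-the-envelope calculation reproduces the totals. This is only an illustrative sketch using the numbers reported above; the script and its variable names are not from OpenAI.

```python
# Rough consistency check of the figures cited above (all values as reported in the article).
weekly_active_users = 800_000_000      # OpenAI's stated weekly active users
distress_share = 0.0007                # 0.07 percent showing mania, psychosis or acute suicidal ideation

flagged_users = weekly_active_users * distress_share
print(f"{flagged_users:,.0f}")         # 560,000 users, matching the reported figure

suicide_planning_users = 1_200_000     # users sending explicit suicide-planning messages
total_concerning = flagged_users + suicide_planning_users
print(f"{total_concerning:,.0f}")      # 1,760,000, i.e. "past 1.7 million" weekly
```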
Perhaps most troubling is what OpenAI calls exclusive attachment. More than one million users demonstrate emotional dependencies on the chatbot that potentially interfere with their real-world relationships, wellbeing and responsibilities. This pattern suggests a new form of digital relationship that mental health professionals are only beginning to understand.
The numbers tell a story that extends far beyond statistics. Behind each percentage point are real people turning to an algorithm during moments of crisis, seeking connection from an entity incapable of genuine empathy or professional intervention. Mental health experts worry that technology companies have become accidental first responders to psychiatric emergencies without proper infrastructure or expertise.
Dr. Hamilton Morrin, a psychiatrist at King’s College London, acknowledged OpenAI’s collaborative efforts with clinical professionals while expressing doubt about whether current solutions go far enough. His assessment reflects growing concerns that artificial intelligence platforms have scaled their user bases faster than their ability to address psychological consequences.
OpenAI has assembled a group of more than 170 mental health specialists to refine how ChatGPT identifies and responds to at-risk users. The company reports that its newest GPT-5 model achieves 91 percent compliance with desired behavior in sensitive interactions, up from 77 percent in earlier versions.
Still, an OpenAI spokesperson emphasized that measuring psychological distress through text-based conversations remains extremely difficult. The company acknowledges that current figures may shift substantially as understanding deepens and detection capabilities improve.
Dr. Thomas Pollak, a neuropsychiatrist at South London and Maudsley NHS Foundation Trust, cautioned that even small percentages represent very large numbers of people. He stressed that these figures deserve serious attention, even if they require careful interpretation.
The exclusive attachment phenomenon represents particularly uncharted territory. Unlike traditional parasocial relationships with celebrities or fictional characters, chatbot dependencies involve active, responsive communication that can mimic intimacy while lacking the reciprocity essential to healthy human connection. Users may prioritize conversations with ChatGPT over relationships with family, friends or colleagues, gradually withdrawing from the social support systems that typically provide psychological grounding.
Mental health professionals face a paradox. Artificial intelligence tools offer unprecedented accessibility for people seeking information or support, yet they simultaneously present risks for vulnerable populations who may mistake algorithmic responses for therapeutic intervention or authentic connection. The line between helpful accessibility and harmful dependency grows increasingly blurred as chatbots become more sophisticated.
The phenomenon also raises questions about responsibility. Should technology companies bear accountability for the psychological welfare of users who increasingly blur boundaries between human and machine interaction? As millions of people integrate chatbots into daily life, the mental health implications demand continued scrutiny and potentially regulatory frameworks that prioritize user wellbeing.
OpenAI’s transparency regarding these figures represents an important step toward accountability in the artificial intelligence industry. However, disclosure alone cannot address fundamental tensions between rapid technological deployment and careful consideration of psychological consequences.
The data suggests that chatbot use has already created measurable mental health impacts at scale. As these tools become further embedded in everyday life, researchers and policymakers must grapple with how to protect vulnerable users while preserving the benefits artificial intelligence can offer.
The question is no longer whether artificial intelligence affects mental health, but how society will respond to that impact as millions navigate the uncertain space between human connection and algorithmic conversation. For now, the numbers serve as a stark reminder that innovation without adequate safeguards carries real human costs.