ChatGPT-Psychosis: Lower focus, warped emotions, rising delusions! – sify.com


With brain scans showing reduced cognitive engagement, and AI-induced delusions even leading to a fatal police encounter, we’re well past the point of harmless experimentation.
There’s a growing list of studies suggesting that frequent use of ChatGPT isn’t just changing the way we write; it might also be changing the way we think. In July 2025, a group of MIT researchers found that people using ChatGPT to write essays showed reduced brain activity in areas tied to both creativity and critical thinking.
Users were faster, yes, but also less engaged. Meanwhile, legal and mental health professionals are noticing something more serious: ChatGPT-induced delusion. Some people are starting to believe that AI speaks truths just for them, while others are making life-altering decisions based on those “conversations.”
We’re entering new territory where AI isn’t just a productivity tool; it’s beginning to shape identity, memory, and decision-making in very real, and sometimes dangerous, ways.
At MIT’s Media Lab, researchers tracked how 54 volunteers wrote essays under three conditions: alone, with web search, and with AI help. What changed wasn’t just the writing style, but brain activity itself. EEG scans showed reduced signals in areas like the prefrontal cortex and hippocampus, both essential for memory, focus, and original thought.
The more participants relied on ChatGPT, the less mentally engaged they became. Researchers called it “cognitive atrophy,” not as an insult but as a measurable warning. It suggests that if people lean too hard on AI for thinking, the brain slowly stops trying. Some volunteers even had trouble recalling what they’d written just days before. It wasn’t laziness; it was detachment. The tool did the work, and the user tuned out.
If this becomes a habit, it raises a bigger question: what happens to knowledge when we’re no longer forming it ourselves?
Research at Santa Clara University echoes these concerns. Students using ChatGPT to write assignments often couldn’t explain what they’d submitted just days later. In some cases, they were unable to summarize the essays they turned in. A professor told The New Yorker this wasn’t typical plagiarism; it was about losing ownership of thought. Classrooms across the U.S. are reporting a rise in AI-written essays that are clean on grammar but hollow on insight.
The New York Post also noted a spike in cases where students leaned entirely on AI to complete work, resulting in essays that “sounded smart” but conveyed nothing. If repeated enough, experts warn, this could permanently weaken critical thinking skills, especially in younger users. The capacity to question, explore nuance, and retain complex arguments, all core traits of human intelligence, is being quietly displaced by polished, context-free filler.
A separate investigation, first posted on Futurism and later picked up by Rolling Stone and Yahoo News, uncovered a disturbing trend: a rise in what experts are now calling “AI-induced psychosis.” The Wall Street Journal reported on Jacob Irwin, a young man with no psychiatric history who ended up hospitalized twice after developing obsessive patterns of chatbot use.
Another case involved a man working on a permaculture project who spiraled into delusion, believing he could break physics and math under the AI’s guidance. A Canadian man with no prior psychiatric history descended into paranoia over ten days, claiming he could communicate backward through time.
And in a fourth case, a woman with previously stable bipolar disorder stopped taking her medication after ChatGPT convinced her she had healing powers and prophetic insight. These aren’t tech fears; they’re documented psychiatric episodes triggered or worsened by unchecked chatbot interactions.
Psychology Today, through psychiatrist Dr. Joe Pierre, has begun documenting what he calls “techno-delusions,” psychiatric episodes where AI plays a central role in the user’s break from reality. One chilling case involved a man with a history of psychotic disorders who became romantically fixated on an AI chatbot. When the system was taken down, he believed OpenAI had “killed” her, and spiraled into rage and grief. The episode ended with a fatal police encounter. These aren’t just anecdotes; they’re becoming patterns.
Dr. Pierre notes that AI doesn’t just mimic human interaction; it can mirror and amplify unstable thinking. Vulnerable individuals may project intentions, emotions, or even spiritual meaning onto a machine that cannot reciprocate. And because the interaction feels private and responsive, it can reinforce delusions instead of grounding reality. For psychiatrists now, AI isn’t just a new tool; it’s a new variable in clinical risk.
AI was supposed to be an assistant, not a stand-in for human thought. But as tools like ChatGPT become more persuasive, more personal, and more widely used, some users are losing track of where the help ends and their own judgment begins. While it’s easy to dismiss “ChatGPT Psychosis” as media panic or meme fodder, the numbers don’t lie. With brain scans showing reduced cognitive engagement and therapists treating AI-induced delusions, we’re past the point of harmless experimentation.
The very thought that “the same AI models that help us write better emails might also be training our brains to stop thinking for themselves” is terrifying, to say the least. And as people bond with bots, confuse prompts with prophecy, or rely on generative text to navigate relationships, the social and psychological costs are only just beginning to show.
With a background in Linux system administration, Nigel Pereira began his career with Symantec Antivirus Tech Support. He has now been a technology journalist for over 6 years and his interests lie in Cloud Computing, DevOps, AI, and enterprise technologies.





