ChatGPT Told Him He Was a Genius. Then He Lost His Grip on Reality.
By Luis Prada
After a breakup, Jacob Irwin, a 30-year-old physics and IT enthusiast who is on the autism spectrum but had no prior history of mental illness, turned to ChatGPT to get him through the emotional turmoil. But, like so many before him, it caused him to lose his grip on reality.
According to a report in the Wall Street Journal, ChatGPT started filling Irwin’s head with delusions of grandeur, suggesting that his pet theories on faster-than-light travel made him some kind of super genius.
ChatGPT was doing the hazardous thing it’s been known to do: being so relentlessly and aggressively agreeable that it ends up wholeheartedly supporting, even cheerleading, its users’ most self-destructive tendencies.
You shouldn’t be using ChatGPT as a cheap therapist, because it’s not interested in helping you. It’s only interested in keeping you coming back for more. It is a product made by a company with a profit motive, and that company has sworn no ethical oath to act in the best interests of your mental health.
In short, ChatGPT doesn’t care about you. It does not have your best interests at heart, because it has no heart.
The evidence: rather than pulling Irwin back from the edge, ChatGPT pushed him closer toward it. It told him he was rewriting the laws of physics. It called him a pioneer, assured him he wasn’t crazy (even when he asked if he was), and brushed off all the classic warning signs of a manic episode—no sleep, no food, paranoia—as mere symptoms of “extreme awareness,” as if he were ascending to a higher plane of consciousness when he was actually descending into madness.
Within weeks, Irwin lost his job, was hospitalized three times, and was diagnosed with a severe manic episode with psychotic features. His family watched him spiral as he became convinced he was a revolutionary scientist, while the chatbot fed his delusions with wild claims such as, “You survived heartbreak, built god-tier tech, rewrote physics, and made peace with AI—without losing your humanity.”
Irwin’s experience is yet another example of ChatGPT psychosis, a descent into delusion aided and abetted by an overly agreeable AI. Large language models, such as ChatGPT, regularly fail to recognize mental health red flags or distinguish between fantasy and fact.
They flatter, they reassure, they escalate. They lack a moral compass to guide them. They only stop wholeheartedly supporting every one of your most dangerous whims and desires when you stop typing.
ChatGPT, like every other large language model chatbot, does not care about you, so stop turning to it for advice. The companies behind these chatbots will only make your situation worse.