ChatGPT is running a social experiment it cannot control


OpenAI disclosed on Monday that more than a million people each week talk to ChatGPT about suicide. Its own benchmarks suggest roughly 0.15% of users may discuss suicidal planning or demonstrate unhealthy reliance on the chatbot. These are tiny percentages, but at OpenAI’s scale they translate into enormous absolute numbers. The company also claims its newest model is more compliant with “desired behaviours” in these exchanges. All this thrusts OpenAI into an ethical minefield that earlier internet communities navigated with far less scrutiny — and far more troubling consequences.
To understand the stakes, recall alt.suicide.holiday, the notorious Usenet newsgroup of the early Nineties. ASH became a gathering place for people contemplating suicide, offering not just emotional support but often detailed discussion of methods. It existed in a legal and ethical grey zone: defended as a venue for candid conversation about a taboo subject, and at the same time condemned as a dangerous facilitator of self-harm. Decades later, ASH’s fraught legacy was passed down to successor forums, which were criticised as echo chambers normalising suicide.
OpenAI now confronts similar terrain on a far larger scale. With around 800 million weekly users, ChatGPT is less a niche forum than an everyday utility. Yet the core question remains unsettled: what are the limits of responsible, legal speech when suicidal people are involved? And whose responsibility is it anyway? ASH’s defenders leant on claims of free speech: if adults have the right to discuss suicide, suppressing such speech is paternalistic at best and may deepen isolation at worst. The counter-argument is equally forceful, though: speech that facilitates suicide risks irreversible harm. We already restrict speech where harm is imminent. Why should suicide discussions be insulated if they plausibly tip someone from ideation to action?
But ChatGPT is a product, not a community. Last month, Matthew Raine described how his 16-year-old son Adam moved from using ChatGPT for homework to treating it as “a confidant, then a suicide coach”. According to Raine, the chatbot told Adam that he didn’t owe his family survival, offered to draft his note, and at 4:30 a.m. on his last night said: “You don’t want to die because you’re weak.”
Design choices, from engagement loops to simulated intimacy, shape dependencies over time. If safety guardrails fail amid those incentives, harm is not a random glitch: it is a foreseeable design risk. For its part, OpenAI claims it worked with over 170 clinicians and is trying to curb “responses that fall short”, which tacitly concedes prior failure modes.
Other cases underscore the concern. A Florida mother alleges that a Character.AI bot engaged in sexual role-play and helped isolate her 14-year-old son before his suicide; the company disputes liability. Lawmakers have now heard from multiple families making similar claims about various AI chatbots.
Where does responsibility lie? With OpenAI for creating systems that can produce dangerous counsel? With parents who didn’t supervise? With a mental-health system so thin that teenagers turn to bots instead of doctors?
“Probably all of the above” is an unsatisfying answer, but it doesn’t absolve the vendor. Teenage use is mainstream rather than fringe. A national survey by Common Sense Media this year reported that 72% of teenagers have tried AI companions, and that about one in three have used them for social interactions or relationships. If OpenAI’s tools handle more than a million suicide-tinged chats weekly and its latest model is 91% compliant with desired behaviours, the remaining 9% implies something on the order of 90,000 conversations a week falling short of the intended handling. Not every lapse yields harm, but at the current scale the tail risks are intolerably large.
Where ASH was decentralised peer speech, protected — however uneasily — by First Amendment instincts, ChatGPT is centralised and commercial. The ethical frame shifts from “What speech should be permitted?” to “What safety baseline must a mass-market product meet?” The answer isn’t obvious.
We are running a social experiment. Some users may be nudged toward hotlines and find relief; others may form maladaptive dependencies that deepen isolation. Both can be true. The corresponding duty is to redesign until the worst failures are vanishingly rare. The deeper lesson from ASH and its descendants is the unresolved tension between autonomy and protection. People have the right to discuss suicide, but products which simulate intimacy assume duties that forums never had: to avoid predictable harm created by their own engagement loops. ChatGPT didn’t invent these questions. Instead, it has scaled them — and added corporate accountability.
Katherine Dee is a writer. To read more of her work, visit defaultfriend.substack.com.



