ChatGPT makers respond to growing number of people using AI as a therapist
System has been accused of failing to recognise when users are in mental or emotional distress and respond appropriately
OpenAI, the creator of ChatGPT, says it is rushing to fix the system’s difficulties with helping users in mental distress.
In recent months, increasing reports have suggested that people are turning to the system as a kind of therapist, for help with personal problems and mental health issues. But ChatGPT is often overly agreeable with users who consult it, reinforcing people’s delusions and failing to challenge their assumptions.
Now OpenAI says that it is responding to those concerns with a range of updates and research intended to make the system less dangerous when it is used by people who are experiencing mental health crises or similar problems.
The company knows “that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress”, it said.
Those changes include improvements to the model to make it better at recognising “signs of delusion or emotional dependency”, it said. Numerous reports have shown that the system could encourage people’s delusions, or allow them to become emotionally attached to it.
Users will also now be shown alerts during long sessions of sending messages to ChatGPT. The message says that it is “just checking in” and asks whether it is a “good time for a break”.
It will also look to work through questions with people, rather than giving them an answer, especially in “high-stakes personal decisions”. If a user asks whether they should break up with their boyfriend, for instance, it will aim to help them weigh up the decision rather than telling them what to do.
OpenAI also committed to working with experts to improve the system’s response at “critical moments – for example, when someone shows signs of mental or emotional distress”.
That has included working with medical experts and an advisory group on mental health and similar concerns, as well as with researchers to improve the system’s ability to spot concerning behaviour and respond to it.
The announcement comes as OpenAI continues to tease the launch of GPT-5, an update to the model that powers ChatGPT. It will be the first major release since GPT-4 was launched in 2023, and OpenAI boss Sam Altman has looked to hype the new version as potentially transformative.