OpenAI stops ChatGPT from telling people to break up with partners
Instead of giving definitive answers to personal challenges, the chatbot will help people reflect on a problem
ChatGPT will not tell people to break up with their partner and will encourage users to take breaks from long chatbot sessions, under new changes to the artificial intelligence tool.
OpenAI, ChatGPT’s developer, said the chatbot would stop giving definitive answers to personal challenges and would instead help people to mull over problems such as potential breakups.
“When you ask something like: ‘Should I break up with my boyfriend?’ ChatGPT shouldn’t give you an answer. It should help you think it through – asking questions, weighing pros and cons,” said OpenAI.
The US company said new ChatGPT behaviour for dealing with “high-stakes personal decisions” would be rolled out soon.
OpenAI admitted this year that an update to ChatGPT had made the groundbreaking chatbot too agreeable and altered its tone. In one reported interaction before the change, ChatGPT congratulated a user for “standing up for yourself” when they claimed they had stopped taking their medication and left their family – who the user had thought were responsible for radio signals emanating from the walls.
In a blog post announcing the changes, OpenAI admitted there had been instances where its advanced 4o model failed to recognise signs of delusion or emotional dependency, amid concerns that chatbots are worsening people’s mental health crises.
The company said it was developing tools to detect signs of mental or emotional distress so ChatGPT can direct people to “evidence-based” resources for help.
A recent study by NHS doctors in the UK warned that AI programs could amplify delusional or grandiose content in users vulnerable to psychosis. The study, which has not been peer reviewed, suggested this behaviour might stem from the models being designed to “maximise engagement and affirmation”.
The study added that even if some individuals benefited from AI interactions, there was a concern the tools could “blur reality boundaries and disrupt self-regulation”.
OpenAI added that from this week it would send “gentle reminders” to users engaged in long chatbot sessions, encouraging them to take a screen break, similar to the screen-time features deployed by social media companies.
OpenAI also said it had convened an advisory group of experts in mental health, youth development and human-computer interaction to guide its approach. The company has worked with more than 90 doctors, including psychiatrists and paediatricians, to build frameworks for evaluating “complex, multi-turn” chatbot conversations.
“We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal ‘yes’ is our work,” said the blog post.
The ChatGPT alterations were announced amid speculation that a more powerful version of the chatbot is imminent. On Sunday Sam Altman, OpenAI’s chief executive, shared a screenshot of what appeared to be the company’s latest AI model, GPT-5.