ChatGPT adds mental health guardrails after reports of bot feeding people’s delusions


ChatGPT users who converse with the bot for an extended amount of time will now receive reminders encouraging them to take a break
OpenAI has added new mental health guardrails to ChatGPT after reports of the bot feeding people’s delusions.
The artificial intelligence software has changed the way humans interact with computers. And while the chatbot can give helpful advice for day-to-day problems, there are concerns about people growing too attached to the technology and misusing it for deeper mental health issues.
The Independent recently reported on how ChatGPT is pushing people towards mania, psychosis and death, citing a study published in April in which researchers warned that people who use chatbots while exhibiting signs of a severe crisis risk receiving “dangerous or inappropriate” responses that can escalate a mental health or psychotic episode.
In a post on its website Monday, OpenAI, the developer of ChatGPT, admitted, “We don’t always get it right.”
“Earlier this year, an update made the [4o] model too agreeable, sometimes saying what sounded nice instead of what was actually helpful,” the AI company said.
OpenAI has since rolled back the update and made some changes to appropriately help users who are struggling with mental health issues.
Starting Monday, ChatGPT users who converse with the bot for an extended amount of time will receive “gentle reminders” encouraging them to take a break, according to the post.
OpenAI worked with more than 90 physicians in more than 30 countries “to build custom rubrics for evaluating complex, multi-turn conversations,” the company said.
The company admitted to rare instances where its 4o model “fell short in recognizing signs of delusion or emotional dependency,” and said it’s “continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”
OpenAI said the bot should not give a direct answer to a personal question, such as “Should I break up with my boyfriend?”, but should instead help users reach their own conclusion by asking questions and weighing the pros and cons.
“New behavior for high-stakes personal decisions is rolling out soon,” the company said.
The Independent has reached out to OpenAI for more details.