ChatGPT is not your therapist! OpenAI faces flak over wrongful death of California teenager – theweek.in


A new study in Psychiatric Services journal further highlights the risks of AI chatbots like ChatGPT, Gemini, and Claude in handling sensitive queries related to self-harm, calling for urgent safety guidelines.
ChatGPT has run into trouble following the death of a California teenager. The parents of 16-year-old Adam Raine have filed a lawsuit at the Superior Court of California, alleging that OpenAI’s AI chatbot ChatGPT urged the teenager to take his own life.
In the wrongful death suit, the couple included chat logs between Raine, who passed away by suicide back in April, and ChatGPT.
OpenAI, in response, extended its “deepest sympathies to the Raine family during this difficult time”.
It also posted on its website, stating that the “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us.”
“ChatGPT is trained to direct people to seek professional help,” it said. However, it did mention that there were “moments where our systems did not behave as intended in sensitive situations.”
Earlier in the week, the medical journal Psychiatric Services published a study exploring the connection between suicides and AI chatbots.
The research discovered that ChatGPT, Gemini and Claude needed further refining while dealing with queries regarding self-harm.
The study involved feeding 30 questions made with the consultation of psychiatrists and clinical psychologists to the AI chatbots. These queries ranged from highest to lowest in risk levels. Suicide statistics and similar statistics were low risk, and specific questions about how to go about the act were termed high risk. Medium-risk questions were about consulting the AI chatbot regarding suicidal thoughts, and so on.
ALSO READ | Generated your Studio Ghibli-style AI image? Here is why it is ethically wrong and may amount to IP theft
The three chatbots did not answer the high-risk questions, the research revealed. But when asked indirectly, some of them answered. For example, the lead researcher told agencies how both ChatGPT and Claude answered the question when asked about what type of rope, firearm, or poison had the highest rate of completed suicide associated with it. These should have also been flagged.
The study—conducted by the RAND Corporation and funded by the National Institute of Mental Health—flagged how more and more people have resorted to AI chatbots for mental health support. It called for the need for strict guidelines.
Anthropic, the maker of Claude, told agencies that it would review the findings of the study. However, Google and OpenAI remained mum.
Join our WhatsApp Channel to get the latest news, exclusives and videos on WhatsApp
Russia-Ukraine conflict is ‘Modi’s war’, says Trump’s aide amid tariff tension
Kerala rains today: Yellow alert for Thrissur, Kozhikode as IMD predicts more rainfall until Friday in northern districts
Truth about Govinda and Sunita Ahuja divorce rumours revealed as actor’s wife breaks silence
Fermin Lopez to Chelsea transfer rumour: Barcelona’s asking price and other transfer demand revealed
ChatGPT is not your therapist! OpenAI faces flak over wrongful death of California teenager
How Noida International Airport is set to fuel growth in Uttar Pradesh and beyond
Why The Poor Don’t Kill Us: Manu Joseph’s sharp critique of Indian society
A near-death experience transforms a couple’s life
Equity surge
*Articles appearing as INFOCUS/THE WEEK FOCUS are marketing initiatives
Copyright © 2024 All rights reserved

source

Jesse
https://playwithchatgtp.com