ChatGPT as a therapist: Should you trust AI chatbots for your mental health issues?

ChatGPT found itself at the centre of a debate when Lilian Weng, a manager at OpenAI, expressed her emotional satisfaction with the chatbot. “Never tried therapy before but this is probably it?” she posted on X (formerly Twitter). Critics swiftly responded, voicing concerns that the post trivialised mental well-being.
With a wave of startups launching AI apps for mental health support, the sector remains mired in debate. Cher Scarlett, an activist and programmer, responded to Weng’s post, arguing that the depth of a human therapeutic session cannot be replaced.


Research from the Massachusetts Institute of Technology (MIT) and Arizona State University sheds light on these reactions. The study, reported by AFP, had over 300 individuals interact with mental health artificial intelligence (AI) tools after being given different prompts about the chatbot’s nature.
Some were told the AI was compassionate, others that it was deceitful, and the rest were given no specifics. The results indicated that those primed to believe in a compassionate chatbot viewed it as more trustworthy. Pat Pataranutaporn, one of the report’s authors, commented that a person’s perception largely determines their experience of AI.
Users of the Replika app, marketed for mental health benefits, have reported unsettling interactions, further fanning the flames of concern. Koko, a US nonprofit, also shared findings on an experiment using GPT-3, concluding that AI-driven responses lacked therapeutic depth.

AI chatbots in therapy

Chatbots aren’t new to the therapeutic scene. Chatbot technology dates back to the 1960s, when ELIZA was created to mimic a psychotherapist. The MIT and Arizona State study included ELIZA and found that, despite its age, users primed with a positive framing still deemed it trustworthy.


Critics point out that not all chatbots offer genuine interactions, raising concerns about the transparency of AI’s therapeutic claims. David Shaw of Basel University shared similar sentiments, telling AFP that users should take a more critical approach when engaging with these chatbots.
While it is no surprise that an OpenAI manager would endorse ChatGPT, caution is warranted. The MIT and Arizona State researchers argue that society’s expectations of AI need to be calibrated, with a clear line drawn between genuine therapeutic sessions and AI interactions.
(With AFP inputs)
