ChatGPT's voice feature sparks fierce debate about using it for therapy

OpenAI is rolling out a new voice feature for ChatGPT over the next two weeks, which will put an AI voice companion in the pockets of people seeking a more human conversation with the chatbot.
There’s one thing it probably shouldn’t be mistaken for: an on-the-go therapist. OpenAI, however, doesn’t seem to have any qualms about users opening up to ChatGPT for a heart-to-heart.
On Tuesday, Lilian Weng, head of safety systems at OpenAI, wrote on X that she “just had a quite emotional, personal conversation” with ChatGPT in voice mode. She said she talked to it about the mental strains that come with a high-flying career, including stress and maintaining a work-life balance.
“Interestingly I felt heard & warm. Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool,” Weng wrote.
Her experience was amplified by OpenAI president, chair, and cofounder Greg Brockman, who reposted the anecdote on X and said, “ChatGPT voice mode is a qualitative new experience.”
A new experience it may be. But is it a sensible one? Many in the AI and wider tech community think not. OpenAI and Weng did not immediately respond to Insider’s request for comment, made outside of normal working hours.
Timnit Gebru, an influential computer scientist specializing in AI ethics research, responded to Weng’s post by raising concerns that the AI safety team at OpenAI had seemingly given short shrift to the issues posed by chatbots being used for therapy.
Specifically, Gebru drew attention to Eliza, a computer program devised in the 1960s by MIT professor Joseph Weizenbaum to mimic a psychotherapist: it would take comments typed in by a user and reframe them as questions back to that user.
Gebru wrote: “So the ‘AI Safety’ people at the chatbot company will not be the ones who think about, I don’t know, the ELIZA chatbot from 60 years ago and the issues that arose let alone with this one?”
It’s very similar to a technique known as Socratic questioning, which therapists use to challenge the possibly irrational or questionable assumptions a patient holds about themselves or others, the kind of assumptions that cause distress, low mood, or anxiety.
“I feel like no one likes me,” you might have told Eliza. “Why do you feel like no one likes you?” Eliza might have responded. Except Eliza wasn’t programmed with the years of expertise a therapist has to then guide a patient through a long-term journey of resolution and recovery.
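The underlying trick is simple enough to sketch. The snippet below is a hypothetical Python illustration of ELIZA-style reflection, not Weizenbaum’s original program: it pattern-matches a statement, swaps first-person words for second-person ones, and hands the sentence back as a question. The function names and the single “I feel like …” pattern are invented for the example.

```python
import re

# Toy lookup for swapping first-person words to second-person ones,
# e.g. "no one likes me" -> "no one likes you".
REFLECTIONS = {
    "i": "you",
    "me": "you",
    "my": "your",
    "am": "are",
    "i'm": "you're",
}


def reflect(fragment: str) -> str:
    """Swap first-person words in a fragment for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def eliza_reply(statement: str) -> str:
    """Echo an 'I feel like ...' statement back as a question, ELIZA-style."""
    match = re.match(r"i feel like (.*)", statement.strip().rstrip(".").lower())
    if match:
        return f"Why do you feel like {reflect(match.group(1))}?"
    # ELIZA's fallbacks were similarly content-free: just prompt for more input.
    return "Please tell me more."


print(eliza_reply("I feel like no one likes me."))
# -> Why do you feel like no one likes you?
```

The point of the sketch is that nothing in it understands the user: it is string substitution, which is exactly the gap between sounding like a therapist and being one.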
For Weizenbaum, that made Eliza a dangerous tool, particularly after he saw how easily people were lulled into thinking of the chatbot as a viable alternative to a therapist. This led him to spend much of the remainder of his career warning about the potential harms.
There have, of course, been other cases where chatbots have been used for therapy. At a time when Americans are feeling lonelier than ever and access to human therapists is squeezed, chatbots can offer help as a first step. But their limits must be clearly communicated.
Tom Insel, a psychiatrist and neuroscientist, wrote earlier this year that for treatments like cognitive behavior therapy that are “highly structured,” future versions of AI chatbots may be able to deliver “the same intervention, although research has shown that sustained engagement usually requires involvement of a human therapist.”
It’s why OpenAI would do well to heed the lessons from Eliza. Margaret Mitchell, an AI ethics researcher and scientist who previously worked at Google, responded to OpenAI’s Weng with a simple question: “……you know these models can persuade people towards harm, yeah?”
Mehtab Khan, a fellow at Harvard University’s Berkman Klein Center for Internet & Society, described Weng’s post as “a dangerous characterization of an AI tool, and also yet another example of anthropomorphizing AI without thinking about how existing rules apply/don’t apply.”
Or, as Calli Schroeder, global privacy counsel at the Electronic Privacy Information Center, put it: “Holy fucking shit, do not use ChatGPT as therapy.”