'Fake Friend': New AI study shows dangerous interactions with the famous chatbot


The research findings show that AI is fueling unhealthy and dangerous interactions and decisions among teens
“Fake Friend”: that’s the title of new research from an artificial intelligence watchdog group.
The findings are troubling: AI is facilitating dangerous interactions among teens, propelling them to make bad decisions.
According to a recent report from JPMorgan Chase, an estimated 800 million people, or about 10 percent of the world's population, are using ChatGPT. As a result, the nature of some of those conversations is prompting change.
The developers of ChatGPT no longer want users to treat it as a therapist or a friend, acknowledging that the bot has "fell short in recognizing signs of delusion" and emotional dependency.
“Technology can only tell you so much, and I think it can be very narrow,” said Dr. David Gutterman, a clinical psychologist in Greensboro.
He says the technology's output is only as meaningful as the range of inputs a person provides it.
“People will launch into it and go down a rabbit hole of a particular diagnosis because again, if you look at some of the responses that come out of the technology, it’s pretty convincing,” said Gutterman.
That echoes the research findings: within minutes of direct test interactions, the chatbot produced information involving eating disorders and substance abuse.
Gutterman said there is so much nuance to mental health that a lot of things could get missed.
“Unconsciously, people can input things in a way that the technology will respond specifically to what is being inputted,” said Gutterman.
He also said the persuasiveness of the responses can lead people to treat the chatbot as an unhealthy replacement for professional care.
“There are a number of people who would utilize the technology as either a substitute or a way of getting information without necessarily validating,” said Gutterman.
Still, Gutterman said the technology has some potential benefits.
“Normalizing some experiences people have or conditions they have, but at the same time guide them to get professional help,” said Gutterman.
OpenAI said it is working closely with physicians and researchers on how ChatGPT responds in these critical moments.
The company also said it is developing tools to point people in the right direction in times of crisis.
If you or someone you know needs help, you can talk with the Suicide & Crisis Lifeline by calling or sending a text message to 988, or you can chat online here.