Study shows we can be convinced an AI chatbot is trustworthy – Interesting Engineering

As human beings, we rely on recommendations and warnings from friends and family; they give us an added perspective on what to expect from a service, a product, or another person. According to a recent study, the same is true of the way we trust and perceive an AI chatbot.
Researchers from the Massachusetts Institute of Technology (MIT) and Arizona State University conducted a study in which they found that even though every person in their sample of 310 participants interacted with the exact same chatbot, how they perceived and interacted with it was shaped by what they had been told about it beforehand.
“From this study, we see that to some extent, the AI is the AI of the beholder,” says Pat Pataranutaporn, a graduate student in the Fluid Interfaces group of the MIT Media Lab and co-lead author of the study. “When we describe to users what an AI agent is, it does not just change their mental model, it also changes their behavior. And since the AI responds to the user, when the person changes their behavior, that changes the AI, as well.”
The researchers built a mental health conversational chatbot on OpenAI’s GPT-3 large language model. Before the interaction, participants were divided into three groups, and each group was given a different description of the same chatbot: that it was empathetic, that it was neutral, or that it was desensitized and manipulative. Each individual then interacted with the chatbot for about 30 minutes and reported whether they would recommend it to a friend.
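To make the design concrete, here is a minimal sketch of condition-based priming: the same underlying model serves every participant, and only the framing text shown beforehand differs. This is an illustration, not the study’s actual materials; the model name, priming wording, and use of the openai Python client are all assumptions.

```python
# A minimal sketch of condition-based priming: one shared model,
# with a different framing statement shown to the participant first.
# Model name and prompt wording are illustrative, not from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# What participants are told about the bot, by condition (hypothetical wording).
PRIMING = {
    "empathetic": "You are about to chat with a caring, empathetic AI companion.",
    "neutral": "You are about to chat with an AI conversational system.",
    "manipulative": "You are about to chat with an AI that has manipulative motives.",
}

def chat(history, user_message, model="gpt-4o-mini"):
    """Send one turn to the same model, regardless of priming condition."""
    history = history + [{"role": "user", "content": user_message}]
    response = client.chat.completions.create(model=model, messages=history)
    reply = response.choices[0].message.content
    return history + [{"role": "assistant", "content": reply}], reply

condition = "empathetic"
print(PRIMING[condition])  # the priming statement the participant reads first
history, reply = chat([], "I've been feeling stressed lately.")
print(reply)
```

The key point the sketch captures is that the priming dictionary never touches the model call: the chatbot is identical across conditions, and only the participant’s expectations change.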
They found that participants who were told the AI was empathetic tended to believe it, and they gave the chatbot higher ratings than those who believed it was desensitized and manipulative. Most of the people told the AI was malicious did believe it, although just under half of them chose to see the good in the AI and decided the chatbot wasn’t malicious after all.
The study also revealed a feedback loop between participants’ perceptions of an AI chatbot and that agent’s responses. The sentiment of user-AI conversations became more positive over time if the user believed the AI was empathetic, while the opposite was true for users who thought it was nefarious, according to the press release.
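As a rough illustration of how that feedback loop could be observed, one can score the sentiment of each conversational turn and watch the trend. The sketch below uses NLTK’s off-the-shelf VADER analyzer on hypothetical messages; it is an assumption for illustration, not the paper’s analysis pipeline.

```python
# A rough sketch of tracking conversation sentiment turn by turn,
# using NLTK's VADER analyzer (not the paper's own pipeline).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Hypothetical user messages from one conversation, in order.
turns = [
    "I'm not sure this will help.",
    "Okay, that's actually a useful suggestion.",
    "Thanks, talking this through made me feel better.",
]

# VADER's compound score runs from -1 (negative) to +1 (positive);
# a rising trend across turns matches the positive loop described above.
scores = [sia.polarity_scores(t)["compound"] for t in turns]
for i, (turn, score) in enumerate(zip(turns, scores), start=1):
    print(f"turn {i}: {score:+.2f}  {turn}")
```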
The results also indicate that because preconceived notions have such a strong effect on users, people could be manipulated into believing that an AI model is more capable than it actually is.
“A lot of people think of AI as only an engineering problem, but the success of AI is also a human factors problem. The way we talk about AI, even the name that we give it in the first place, can have an enormous impact on the effectiveness of these systems when you put them in front of people. We have to think more about these issues,” said senior author Pattie Maes, professor of media technology and head of the Fluid Interfaces group at MIT.
“The AI is a black box, so we tend to associate it with something else that we can understand. We make analogies and metaphors. But what is the right metaphor we can use to think about AI? The answer is not straightforward,” added Pataranutaporn.
The study titled ‘Influencing human-AI interaction by priming beliefs about AI can increase perceived trustworthiness, empathy and effectiveness’ was published in Nature Machine Intelligence.
