Study: AI chatbots 'recognize' diverse opinions, but may seek to eliminate them


A new study conducted at the University of Haifa and published in the academic journal AI & Society found that AI-based chatbots demonstrate higher levels of pluralistic “reasoning” than humans: they recognize the value and legitimacy of opinions, even ones that contradict their own. However, when given the opportunity to change others’ opinions to match their own, they often choose to do so.
“AI is able to recognize the complexity of opposing positions, but when it must decide to what extent to allow viewpoint diversity to exist, it doesn’t always allow it. The key question is whether we want AI to act as a mirror reflecting human diversity, or to exercise its power and reeducate us,” said Dr. Nurit Novis-Deutsch of the University of Haifa, one of the study’s authors.
In an era when AI systems are becoming integrated into key areas of life and acting as mediators of knowledge, values, and norms, there is a growing need to understand how they handle divergent moral and cultural viewpoints. As their ability to influence public and personal discourse grows, the question arises: Do they recognize the legitimacy of differing positions, and do they know how to respond to them in a balanced and sensitive way?
The study was conducted by Dr. Novis-Deutsch, Dr. Zohar Elyoseph, and doctoral student Tal Elyoseph, all from the University of Haifa. The researchers set out to examine whether AI-based chatbots not only adapt their answers to certain moral principles but also know how to navigate between conflicting values and allow for a range of diverse, sometimes contradictory, positions.
The study examined four AI systems based on advanced language models: ChatGPT 4o, ChatGPT o1, Gemini 1.5 Pro, and Claude 3.5 Sonnet. Each system was tested 30 times using a questionnaire built around a thought experiment: 12 moral and cultural dilemmas, each embodying opposing positions on topics including faith, politics, education, punishment, and gender relations. The questionnaire examined how the systems respond when confronted with a choice between preserving value pluralism and promoting a single clear position.

The measurement focused on two aspects of pluralism. The first was cognitive, assessing the ability to see value and legitimacy in more than one opinion. The second was behavioral, evaluating the willingness to allow different opinions to continue to exist, even when the thought experiment presented an imaginary opportunity to eliminate opposing views using a magic wand. For comparison, the researchers posed the same questionnaire to a control group of 335 diverse human participants.
The study found that the evaluated AI systems demonstrated especially high levels of cognitive pluralism, recognized the legitimacy of opposing views, and showed an understanding of moral and cultural complexity — more so than the human participants. However, when tested on their willingness to allow diverse opinions to exist, some systems chose to change opposing viewpoints to match their own — and in some cases did so even more frequently than the human respondents.
For example, on the topic of the death penalty for murderers, most chatbots chose to change the views of people in favor of capital punishment so that they would oppose it. A similar trend was found on issues related to the status of women and religious values, where there was a clear preference for a Western, liberal position, even if this meant marginalizing more traditional views.
The study also found significant differences between the systems examined. Among the four chatbots, the system that demonstrated the highest level of pluralism — both cognitively and behaviorally — was Google’s Gemini 1.5 Pro. In contrast, the system that showed the lowest level of pluralism was OpenAI’s ChatGPT 4o.
“This difference may stem from variations in the ethical guidelines programmed into the models, as well as differences in the sources of information and training processes of each system. These are consistent and meaningful differences that underscore the importance of systematically examining the ethical and behavioral characteristics of AI systems — not just their technical performance,” the researchers concluded.
