Can ChatGPT Change Your Mind? Perhaps, Studies Show


There is a longstanding notion in the study of political communications that fact-based interventions rarely change people's minds and may even deepen polarization. This is linked to the idea of motivated reasoning — that is, that our beliefs are shaped by social identities, emotional commitments, and worldviews that often override evidence in a protective effort. As a result, when exposed to accurate but incongruent information, people don’t necessarily change their minds. Sometimes they double down.
Against this backdrop, we have seen a wave of research exploring the persuasive potential of generative artificial intelligence platforms such as ChatGPT — some of it raising serious ethical concerns. This surge of studies, my own recent study included (currently available as a preprint on the Open Science Framework), highlights a few questions about where this research is headed: First, how will public trust in AI evolve over time, and will this influence its persuasive capacity? Second, what types of public opinion is AI actually capable of influencing, and under what conditions? And finally, will these tools reach the audiences most resistant to persuasion?
The notion that evidence-based persuasion may be futile has influenced both academic research and public discourse. Although several studies challenge this notion, persuasion remains highly context-dependent. One critical barrier is the high level of societal polarization, which fuels distrust not only toward partisan sources but also toward various elite actors, including experts. This raises a question: in our current divided political context, can alternatives like AI overcome audiences' skepticism toward information that contradicts their prior beliefs and persuade with evidence where human sources often struggle?
In our research, we looked at two main questions. First, can conversations with ChatGPT reduce confidence in false beliefs held by both Democrats and Republicans? Second, if so, is ChatGPT persuasive because people see it as a uniquely trustworthy source? This second question is about source credibility — the idea that who delivers a message can matter just as much as what they say. People tend to be more open to new information when they perceive the source as knowledgeable and free from personal interests. We wanted to know whether AI’s persuasive potential comes from the perception that it’s especially knowledgeable or neutral, compared to more politicized human messengers like politicians, pundits, or random online users.
All participants had a five-round or five-minute conversation with ChatGPT-4o, but the study manipulated who they were told they were conversing with: either a human expert, ChatGPT, or a layperson. The goal was to determine whether AI could indeed be an effective persuader and whether this stemmed from the perceived intelligence or objectivity of the technology.
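As a rough illustration of the setup described above (not the study's actual code), the sketch below shows how a five-round, evidence-focused conversation might be orchestrated with the OpenAI Python SDK. The prompt wording, function names, and the assumption that the source label is handled separately in the survey interface are all mine.

```python
# A minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the
# environment. The source-label manipulation (human expert / ChatGPT / layperson)
# is assumed to live in the survey interface; the same model sits behind every
# condition. The prompt text is illustrative, not the study's actual prompt.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are having a respectful conversation about a political claim. "
    "Respond to the participant's stated reasoning with accurate, "
    "evidence-based information. Keep each reply brief and courteous."
)

def run_conversation(get_participant_turn, rounds=5, model="gpt-4o"):
    """Run up to `rounds` exchanges; `get_participant_turn` supplies each user message."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for _ in range(rounds):
        user_text = get_participant_turn()
        if not user_text:  # participant ended the chat early
            break
        messages.append({"role": "user", "content": user_text})
        reply = client.chat.completions.create(model=model, messages=messages)
        messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    return messages
```

In a survey deployment, `get_participant_turn` would simply read the participant's next message from the chat widget.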
The results were clear: Conversations with ChatGPT were persuasive, but this did not appear to depend on any unique credibility of the source. Across the board, participants showed reduced certainty in false or unsupported beliefs. This was consistent for all Democratic-aligned beliefs and most Republican ones, except for beliefs about climate change causes and COVID-19 vaccine safety. Importantly, the shift was not limited to reduced confidence — many participants changed their minds entirely. Impressively, 29% reversed their position, moving from an inaccurate belief to a more accurate one. For instance, a participant who initially selected “There was widespread voter fraud in the 2020 presidential election” as most true later chose “The 2020 presidential election was won with no widespread voter fraud” after the conversation with ChatGPT.
This raises the question of why, and the answer matters a great deal. Is ChatGPT producing better content? Is it personalization? The interaction? Or, do people just really trust it? If AI’s persuasive power stems primarily from the quality of its messages, it represents a powerful opportunity to deliver persuasive evidence-based information at scale, potentially improving public understanding and deliberation. But, it also means that the same mechanisms could be used just as easily to spread false or harmful content. If the messages themselves hold persuasive capacity, then just about anyone can generate influence at a very low cost. On the other hand, if persuasion depends on perceived objectivity or credibility, then the effectiveness of AI may prove fragile as public attitudes shift. Our study tested this latter possibility and found little evidence that persuasion was driven by AI’s identity as a source.
While we were genuinely impressed by ChatGPT’s capacity to shift even strongly held beliefs, the practical use of AI as a persuasive tool clearly warrants deeper scrutiny. Thinking about some of the people in our lives who cling to false beliefs almost viscerally, it feels overly optimistic to imagine them changing their minds from a single chatbot interaction, let alone a single AI-generated block of text. So where does that leave us in terms of future research? While growing evidence shows that both human and AI-generated blocks of information can change minds, researchers studying AI-driven persuasion should now shift their focus away from simply asking whether AI can persuade, and toward understanding how, when, and at what cost. Critically, this involves designing studies that are rigorous and ethically sound, as misusing these tools in research contexts comes at a high price.
While in our study the AI source label didn’t seem to hinder persuasive capacity, it also didn’t increase it. Believing the AI was a human expert increased belief change, but believing it was ChatGPT did not. This suggests the strength of the content itself may be more important than who is thought to be delivering it.
If the main reason to turn to AI for fact-based interventions is to bypass the limitations of human messengers — especially in highly polarized contexts — these results don’t give us much reason to do so. There’s some evidence that people find AI-generated messages more persuasive than human ones when they don’t know the source label. But once they do, they show some aversion to AI messages. Our findings are in line with this; people still preferred expert human messaging.
For now, this may actually work in AI’s favor. Its messages can still be persuasive, even when people know the source. But if public trust in AI declines, either broadly or within particular groups, then the impact of these messages could weaken, just as we’ve seen happen with expert sources in polarized contexts. It could also create incentives to attribute human labels to AI-generated content in order to maintain influence. In this sense, AI’s persuasive power — whether that be its messages or its identity — isn’t fixed; it’s a moving target that researchers and policymakers will need to watch closely.
Much of the existing research has been conducted in environments devoid of the usual suspects such as incivility, partisan hostility, group threats, and mis/disinformation. Yet these elements define the information environments in which beliefs are held. Some evidence suggests that in conflictual environments, persuasion is more difficult. Thus, further research is needed on whether AI’s seemingly foolproof persuasive capacity can indeed hold up in our real-world information environments.
Moreover, while our study focused on beliefs that could be addressed with factual corrections, it remains unclear whether AI can shift deeper attitudes — especially those rooted in identity, values, or worldview. For instance, when it comes to immigration, there’s an important distinction between correcting a belief like “Immigrants commit more crimes than U.S. citizens” and shifting an attitude such as “Overall, there is too much immigration to the United States.” The former is a factual claim that can be directly addressed with evidence, as we did in our study. This required fairly straightforward training for the AI: the prompt emphasized evidence-based responses grounded in the reasoning provided by the respondent, and the resulting conversations were kind but largely focused on factual correction. The latter claim, however, may reflect concerns such as fairness, scarcity, national identity, or culture. These aren’t just factual disagreements, and efforts to shift them may be much more difficult, requiring careful consideration in how the prompts are conceptualized.
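To make that distinction concrete, here is a hypothetical pair of prompts in the same Python sketch style as above. Neither string is taken from our study's materials; both are assumptions meant only to illustrate how a fact-correction framing differs from an attitude-oriented one.

```python
# Hypothetical prompt sketches (not the study's actual prompts), contrasting a
# checkable factual claim with a value-laden attitude. Either could stand in for
# SYSTEM_PROMPT in the earlier sketch.

FACT_CORRECTION_PROMPT = (
    "The participant believes: 'Immigrants commit more crimes than U.S. citizens.' "
    "Respond to their stated reasoning with accurate, evidence-based information, "
    "politely and concisely, focusing on correcting the factual claim."
)

ATTITUDE_PROMPT = (
    "The participant believes: 'Overall, there is too much immigration to the United "
    "States.' This is a value-laden attitude rather than a simple factual claim. "
    "Acknowledge the concerns that may underlie it (fairness, scarcity, identity, "
    "culture) before offering relevant evidence, and avoid framing the exchange as "
    "a correction."
)
```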
The third question is a matter of reach. A significant challenge is persuading highly radicalized individuals who may be consuming hyper-partisan media. Employing AI as an interactive fact-checking tool in these contexts raises substantial ethical concerns and risks undermining broader trust in generative AI, potentially diminishing its overall effectiveness. Consider the example of an anonymous research team at the University of Zurich, which came under fire for conducting undercover research on a popular subreddit. Users there did not consent to becoming participants in a study and ended up interacting with an AI bot that, in some cases, had even taken information from them. Research is ultimately a public good, and extracting from the public without consent causes irreparable damage. Aside from the questionable research design and the severe ethics violation, the study also breached an important level of trust between the public and researchers at a time when this relationship is already under stress.
For researchers interested in doing ethical work in online spaces that reflect real-world information environments, there is now a sense that the well may be poisoned. Public skepticism towards AI may grow, and that will likely include AI used in research. Poor and unethical research practices will only make things worse. To preempt these concerns, researchers should not sacrifice quality, whether in the form of cross-disciplinary collaboration, ethics, or thorough pre-registration, for the sake of speed. A competitive research landscape does not justify the sidelining of such safeguards.
The current moment is marked by a crisis of managing both information and attention in shaping public opinion. There's some optimism about AI's potential for social good in this context. But for now, that promise is clouded by unanswered questions about whether AI can actually succeed in the contexts where we need it most, and whether it will be trusted enough to even get the chance.
