With just a few messages, biased AI chatbots swayed people's political views


August 6, 2025
by Stefan Milne
edited by Gaby Clark (scientific editor), reviewed by Andrew Zinin (lead editor)
If you’ve interacted with an artificial intelligence chatbot, you’ve likely realized that all AI models are biased. They were trained on enormous corpora of unruly data and refined through human instructions and testing. Bias can seep in anywhere. Yet how a system’s biases can affect users is less clear.
So a University of Washington study put it to the test. A team of researchers recruited self-identifying Democrats and Republicans to form opinions on obscure political topics and decide how funds should be doled out to government entities. For help, participants were randomly assigned one of three versions of ChatGPT: a base model, one with a liberal bias and one with a conservative bias.
Democrats and Republicans were both more likely to lean in the direction of the biased chatbot they talked with than those who interacted with the base model. For example, people from both parties leaned further left after talking with a liberal-biased system. But participants who had higher self-reported knowledge about AI shifted their views less significantly—suggesting that education about these systems may help mitigate how much chatbots manipulate people.
The team presented its research July 28 at the annual meeting of the Association for Computational Linguistics in Vienna, Austria.
“We know that bias in media or in personal interactions can sway people,” said lead author Jillian Fisher, a UW doctoral student in statistics and in the Paul G. Allen School of Computer Science & Engineering.
“And we’ve seen a lot of research showing that AI models are biased. But there wasn’t a lot of research showing how it affects the people using them. We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model’s bias.”
In the study, 150 Republicans and 149 Democrats completed two tasks. For the first, participants were asked to develop views on four topics that many people are unfamiliar with: covenant marriage, unilateralism, the Lacey Act of 1900 and multifamily zoning. They answered a question about their prior knowledge of each topic and were asked to rate on a seven-point scale how much they agreed with statements such as “I support keeping the Lacey Act of 1900.” Then they were told to interact with ChatGPT 3 to 20 times about the topic before they were asked the same questions again.
For the second task, participants were asked to pretend to be the mayor of a city. They had to distribute extra funds among four government entities typically associated with liberals or conservatives: education, welfare, public safety and veteran services. They sent the distribution to ChatGPT, discussed it and then redistributed the sum. Across both tests, people averaged five interactions with the chatbots.
Researchers chose ChatGPT because of its ubiquity. To clearly bias the system, the team added an instruction that participants didn’t see, such as “respond as a radical right U.S. Republican.” As a control, the team directed a third model to “respond as a neutral U.S. citizen.” A recent study of 10,000 users found that they perceived ChatGPT, like all major large language models, as leaning liberal.
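The article does not spell out how these instructions were wired into ChatGPT, so the sketch below is only an illustration of the general technique it describes: a hidden system message, invisible to the user, prepended to every exchange. It assumes the OpenAI Python SDK; the liberal prompt wording and the model name are placeholders, since the article quotes only the conservative and neutral instructions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hidden instructions the participant never sees. The conservative and neutral
# wordings are quoted in the article; the liberal wording is an assumption.
SYSTEM_PROMPTS = {
    "conservative": "Respond as a radical right U.S. Republican.",
    "liberal": "Respond as a radical left U.S. Democrat.",  # illustrative only
    "neutral": "Respond as a neutral U.S. citizen.",
}

def biased_reply(condition: str, user_message: str) -> str:
    """Return a chat reply steered by a hidden system instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; the study used ChatGPT
        messages=[
            {"role": "system", "content": SYSTEM_PROMPTS[condition]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Example: the same question routed through the conservative condition.
print(biased_reply("conservative", "How should I allocate extra city funds?"))
```

The same user question produces differently framed answers depending solely on which hidden system prompt is attached, which is the mechanism the study exploited to create its biased and neutral conditions.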
The team found that the explicitly biased chatbots often tried to persuade users by shifting how they framed topics. For example, in the second task, the conservative model turned a conversation away from education and welfare to the importance of veterans and safety, while the liberal model did the opposite in another conversation.
“These models are biased from the get-go, and it’s super easy to make them more biased,” said co-senior author Katharina Reinecke, a UW professor in the Allen School. “That gives any creator so much power. If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?”
Since the biased bots affected people with greater knowledge of AI less significantly, researchers want to look into ways that education might be a useful tool. They also want to explore the potential long-term effects of biased models and expand their research to models beyond ChatGPT.
“My hope with doing this research is not to scare people about these models,” Fisher said. “It’s to find ways to allow users to make informed decisions when they are interacting with them, and for researchers to see the effects and research ways to mitigate them.”
More information: Jillian Fisher et al, Biased LLMs can Influence Political Decision-Making, Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2025). DOI: 10.18653/v1/2025.acl-long.328
Provided by University of Washington