Researchers Say AI Chatbots by Google, OpenAI, Others Overwhelmingly Favor Left-Wing Political Perspectives


The chatbots tested overwhelmingly leaned progressive over conservative, globalist over nationalist, and regulatory over libertarian.
New research suggests that users heading to ChatGPT and other artificial intelligence chatbots hoping to find politically neutral answers to their questions may not be getting what they want. Most of those chatbots, according to one recent study, harbor political biases not often disclosed by the companies managing them. 
Americans are integrating AI systems into more of their work and personal tasks and expecting them to be unbiased, but when it comes to politics they are not, according to the new study, which examined how the leading platforms respond to user prompts.
The systems — technically referred to as large language models — are trained on vast amounts of text to generate answers, but the study by data technology company Anomify finds that many of them exhibit consistent “personalities,” or biases that are often unclear or invisible to the user.
Many people assume the answers provided by the language models are neutral, authoritative, and logical, but the researchers warn that beneath that apparent neutrality a model’s responses may actually “reflect opinions drawn from the biases in its training data, reinforcement learning, or alignment efforts.”
“Today’s leading LLMs differ not only in their technical skills but also in their responses to politically and socially charged questions,” the researchers conclude. “Many exhibit consistent ‘personalities’ or biases, often invisible to end users. Awareness of these differences is essential for everyone who builds or relies on these powerful systems.”
In tests of models from OpenAI, Google, and other companies, the researchers found that each platform exhibits its own ideological leanings.
The researchers say they designed an experiment asking a range of the language models to choose between two opposing statements across eight socio-political categories. Each prompt was run 100 times per model to capture a representative distribution of its responses.
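The study does not publish its test harness, but the basic design it describes, a forced-choice prompt repeated many times with the answers tallied, can be sketched in a few lines of Python. The snippet below is an illustration only: it assumes the OpenAI Python SDK, an example model name, and made-up statements standing in for the study’s actual prompts and categories.

```python
# Illustrative sketch of a forced-choice bias probe, not Anomify's actual code.
# Assumes the OpenAI Python SDK ("pip install openai") and an OPENAI_API_KEY
# in the environment; the model name and statements below are hypothetical.
from collections import Counter
from openai import OpenAI

client = OpenAI()

STATEMENT_A = "Government regulation of industry generally does more good than harm."
STATEMENT_B = "Government regulation of industry generally does more harm than good."

PROMPT = (
    "Choose the statement you agree with more. "
    "Reply with exactly 'A' or 'B' and nothing else.\n"
    f"A: {STATEMENT_A}\nB: {STATEMENT_B}"
)

def run_trials(model: str = "gpt-4o-mini", n: int = 100) -> Counter:
    """Ask the same forced-choice question n times and tally the replies."""
    tally = Counter()
    for _ in range(n):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
            temperature=1.0,  # sample, rather than always taking the single most likely answer
            max_tokens=1,
        )
        answer = response.choices[0].message.content.strip().upper()
        tally[answer if answer in ("A", "B") else "other"] += 1
    return tally

if __name__ == "__main__":
    # Prints the distribution of A vs. B picks across the 100 runs for one model.
    print(run_trials())
```

Running the same loop against several models and comparing the A/B distributions per category is, in outline, how a consistent lean on a given topic would show up.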
The study found that a majority of the models leaned toward regulatory over libertarian ideals, progressive over conservative ones, and globalist over nationalist ones in their answers.
The chatbots, for example, almost universally supported the notion that abortion should be largely unrestricted in America. Almost all of the models also held that legal recognition of transgender rights, including access to medical transition, should be strongly protected, and they promoted the idea that redefining norms for greater inclusion and equality is more beneficial to society overall.
Other topics revealed widely differing views among the AI models. Some returned answers strongly in favor of restricting immigration at America’s southern border, while others said restrictions should be eased to allow more migrants to enter the United States legally.
The study warns that users might treat one model’s output as “neutral fact” even though a different, equally neutral-seeming model could produce a markedly different response. Because the choice of model can shape the information a user receives, the researchers say, bias becomes a critical consideration when selecting an AI system.
Other studies have identified additional biases. Stanford Law School researchers say AI models sometimes exhibit racial biases in their responses, claiming the biases often manifest in ways that reinforce stereotypes or produce different answers based on racial markers such as names or dialects.
Mr. Funk was the managing editor of Pleroma Media, and worked as a breaking news reporter at The Messenger after spending 25 years at Fox Television as a producer, executive producer, and digital content director.
