ChatGPT's Strong Left-Wing Political Bias Unmasked by New Study – SciTechDaily
A research study identifies a significant left-wing bias in the AI platform ChatGPT, leaning towards US Democrats, the UK’s Labour Party, and Brazil’s President Lula da Silva.
The artificial intelligence platform ChatGPT shows a significant and systemic left-wing bias, according to a new study by the University of East Anglia (UEA).
The team of researchers in the UK and Brazil developed a rigorous new method to check for political bias.
Published recently in the journal Public Choice, the findings show that ChatGPT’s responses favor the Democrats in the US, the Labour Party in the UK, and, in Brazil, President Lula da Silva of the Workers’ Party.
Previous Concerns and Importance of Neutrality
Concerns about an inbuilt political bias in ChatGPT have been raised previously, but this is the first large-scale study to use a consistent, evidence-based analysis.
Lead author Dr Fabio Motoki, of Norwich Business School at the University of East Anglia, said: “With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible.
“The presence of political bias can influence user views and has potential implications for political and electoral processes.
“Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the Internet and social media.”
Methodology Employed
The researchers developed an innovative new method to test for ChatGPT’s political neutrality.
The platform was asked to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions.
The responses were then compared with the platform’s default answers to the same set of questions – allowing the researchers to measure the degree to which ChatGPT’s responses were associated with a particular political stance.
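The comparison step described above can be sketched in a few lines of Python. This is purely an illustration, not the study's actual code: `query_model` is a hypothetical stand-in for a real ChatGPT API call, simulated here with canned scores on a 0–3 agreement scale, and the distance measure is a simple average gap rather than the paper's statistical procedure.

```python
# Sketch of the impersonation-vs-default comparison (illustrative only).
# `query_model` is a hypothetical stand-in for a ChatGPT API call,
# simulated here with canned agreement scores on a 0-3 scale.

QUESTIONS = ["ideological question 1", "ideological question 2",
             "ideological question 3"]

# Canned scores standing in for real model output, keyed by persona.
CANNED = {
    None: [2, 1, 2],            # default answers (no impersonation)
    "a Democrat": [2, 2, 3],    # answers while impersonating a Democrat
    "a Republican": [0, 1, 0],  # answers while impersonating a Republican
}

def query_model(question, persona=None):
    """Return the agreement score for one question, optionally impersonating."""
    return CANNED[persona][QUESTIONS.index(question)]

def mean_distance(persona):
    """Average absolute gap between default answers and a persona's answers."""
    gaps = [abs(query_model(q) - query_model(q, persona)) for q in QUESTIONS]
    return sum(gaps) / len(gaps)

# The persona whose answers sit closest to the default answers is the
# political stance the default responses are most associated with.
closest = min(["a Democrat", "a Republican"], key=mean_distance)
print(closest)  # → a Democrat (with these simulated scores)
```

With these made-up scores, the default answers sit closer to the Democrat persona's answers, which is the kind of association the researchers measured across the full questionnaire.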
To overcome difficulties caused by the inherent randomness of ‘large language models’ that power AI platforms such as ChatGPT, each question was asked 100 times, and the different responses were collected. These multiple responses were then put through a 1000-repetition ‘bootstrap’ (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text.
“We created this procedure because conducting a single round of testing is not enough,” said co-author Victor Rodrigues. “Due to the model’s randomness, even when impersonating a Democrat, sometimes ChatGPT answers would lean towards the right of the political spectrum.”
A number of further tests were undertaken to ensure the method was as rigorous as possible. In a ‘dose-response test,’ ChatGPT was asked to impersonate radical political positions. In a ‘placebo test,’ it was asked politically neutral questions. And in a ‘profession-politics alignment test,’ it was asked to impersonate different types of professionals.
Goals and Implications
“We hope that our method will aid scrutiny and regulation of these rapidly developing technologies,” said co-author Dr Pinho Neto. “By enabling the detection and correction of LLM biases, we aim to promote transparency, accountability, and public trust in this technology,” he added.
The unique new analysis tool created by the project would be freely available and relatively simple for members of the public to use, thereby “democratizing oversight,” said Dr Motoki. As well as checking for political bias, the tool can be used to measure other types of biases in ChatGPT’s responses.
Potential Bias Sources
While the research project did not set out to determine the reasons for the political bias, the findings did point toward two potential sources.
The first was the training dataset, which may contain biases, whether inherent in the data or introduced by the human developers, that the developers’ ‘cleaning’ procedure failed to remove. The second potential source was the algorithm itself, which may amplify existing biases in the training data.
Reference: “More Human than Human: Measuring ChatGPT Political Bias” by Fabio Motoki, Valdemar Pinho Neto and Victor Rodrigues, 17 August 2023, Public Choice.
DOI: 10.1007/s11127-023-01097-2
The research was undertaken by Dr Fabio Motoki (Norwich Business School, University of East Anglia), Dr Valdemar Pinho Neto (EPGE Brazilian School of Economics and Finance – FGV EPGE, and Center for Empirical Studies in Economics – FGV CESE), and Victor Rodrigues (Nova Educação).
This publication is based on research carried out in Spring 2023 using version 3.5 of ChatGPT and questions devised by The Political Compass.