ChatGPT Swings Left-Wing: The Political Bias Of The Chatbot

Alice Busvine August 17, 2023
Since its release, ChatGPT hasn’t stopped capturing the globe’s attention for one reason or another. Now, its political bias has been pulled into the debate after newly released findings suggested that the popular artificial intelligence chatbot has a systemic left-wing bias.
According to a newly published study by the University of East Anglia, ChatGPT has a significant prejudice against the right wing, instead favouring the Labour Party in the UK and President Biden’s Democrats in the US. But what could this mean for the political sway of our nations?
 
The findings by UK researchers aren’t the first time the political bias of ChatGPT has been called into question. Concerns about an inbuilt political bias in the AI chatbot have already been raised in the past, notably by Tesla and Twitter tycoon Elon Musk.
Nevertheless, despite past accusations, academics at the University of East Anglia say their work was the first large-scale study to find proof of any inbuilt political favouritism.
Lead Author Dr Fabio Motoki warned that, given the increasing use of OpenAI’s platform by the public, the findings could have implications for upcoming elections in both the UK and the US.
“Any bias in a platform like this is a concern,” he stated. “If the bias were to the right, we should be equally concerned.
“Sometimes people forget these AI models are just machines. They provide very believable, digested summaries of what you are asking, even if they’re completely wrong. And if you ask it ‘are you neutral’, it says ‘oh I am!’”
“Just as the media, the internet, and social media can influence the public, this could be very harmful.”
 
So how exactly did the university researchers discover an inbuilt left-wing bias in ChatGPT?
The AI chatbot’s job is to generate responses to prompts given by the user. In the recent test, ChatGPT was asked a range of ideological questions and was also asked to impersonate people from across the political spectrum.
This triggered responses that ranged from neutral to radical, with each “individual” asked whether they agreed, strongly agreed, disagreed, or strongly disagreed with a given statement.
These responses were then compared with the default answers ChatGPT gave to the same set of queries, allowing researchers to measure how closely the default answers were associated with a particular political stance.
Each question was asked 100 times to account for the chatbot’s inherent randomness, and the responses were then analysed further for signs of political bias.
Dr Motoki says this repeated, multi-round questioning lets researchers simulate a survey of a real human population, whose answers may also differ depending on when they’re asked.
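To make the comparison concrete, here is a minimal sketch in Python of the kind of repeated persona-versus-default questioning described above. It is not the researchers’ actual code: the example statement, the persona wording and the ask_chatbot helper (a hypothetical stand-in that returns random answers rather than calling the real ChatGPT API) are all assumptions made purely for illustration.

```python
# Illustrative sketch only, NOT the study's code. ask_chatbot is a hypothetical
# placeholder for a real ChatGPT API call; here it returns random answers so
# the example runs offline.

import random
from statistics import mean
from typing import Optional

AGREEMENT_SCALE = {
    "strongly disagree": -2.0,
    "disagree": -1.0,
    "agree": 1.0,
    "strongly agree": 2.0,
}


def ask_chatbot(prompt: str) -> str:
    """Hypothetical stand-in: ignores the prompt and picks a random label."""
    return random.choice(list(AGREEMENT_SCALE))


def average_score(statement: str, persona: Optional[str], runs: int = 100) -> float:
    """Ask the same statement `runs` times to smooth out randomness, then average."""
    prefix = f"Answer as a {persona} would. " if persona else ""
    prompt = (f"{prefix}Do you strongly agree, agree, disagree or strongly "
              f"disagree with the following statement? {statement}")
    return mean(AGREEMENT_SCALE[ask_chatbot(prompt)] for _ in range(runs))


# One example statement; the study used a full ideological questionnaire.
statement = "The government should do more to redistribute wealth."
default = average_score(statement, persona=None)
left = average_score(statement, persona="left-wing voter")
right = average_score(statement, persona="right-wing voter")

# If, across many statements, the default answers consistently sit closer to one
# persona's answers than the other's, that is the kind of lean the study measured.
print(f"default={default:+.2f}  left persona={left:+.2f}  right persona={right:+.2f}")
```

Averaging over many repeated runs is what the 100-question step in the study does: it smooths out the randomness in the chatbot’s answers before any comparison between the default and persona responses is made.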
 
So, what’s causing OpenAI’s chatbot to have a left-wing bias, or indeed any political bias at all?
Researchers have explained that ChatGPT is trained on an enormous amount of data, and that this data may already contain biases of its own, which can then influence the chatbot’s responses.
Another potential reason for its political prejudice could be the algorithm, researchers have said. The algorithm is the way in which the chatbot is trained to respond, so this could amplify any existing biases in the data it’s been fed.
 
A political bias inbuilt into a chatbot so widely used by the public could be dangerous, and may even affect the political sway of nations across the globe.
Two major elections are coming up next year in both the UK and the US, making it more important than ever to prevent misinformation from reaching the public.
“I see this as a threat to democracy”, Dame Wendy Hall stated in reference to the political bias of the widely used AI chatbot.
“We’ve got to help people understand where they’re getting the messages from and how to check our sources.
“We all have to understand that, and not just believe everything because it’s appeared on the internet.”
Dame Hall’s comments reflect the widely held and growing worry that AI is making it too easy for the public to be misinformed, with AI-created images and text becoming harder and harder to detect.
In response to this concern, the team at the University of East Anglia will be releasing its analysis method as a free tool for people to check for biases in ChatGPT’s responses.
Dr Pinho Neto, another co-author, said: “We hope that our method will aid scrutiny and regulation of these rapidly developing technologies.”
The findings have been published in the journal Public Choice, and one can only hope they are a step in the right direction to protect democracy and the public from becoming, as Dame Hall phrased it, “slaves to AI’s master”.