AI chatbots caught amplifying sanctioned Russian propaganda – Computing UK

New research reveals that OpenAI’s ChatGPT, Google’s Gemini, Chinese-developed DeepSeek, and Elon Musk’s Grok have been serving users pro-Kremlin narratives and quoting sanctioned Russian state media in their responses about the invasion of Ukraine.
Researchers at the Institute for Strategic Dialogue (ISD) say Russia has exploited “data voids” to sneak falsehoods into AI-generated content.
Data voids are moments where fresh, reliable information is scarce in search results.
Almost one in five AI responses tested cited sources connected to Russia’s state communications or intelligence-linked disinformation networks. These sources included Sputnik Globe, Sputnik China, RT (formerly Russia Today), EADaily, and the Strategic Culture Foundation.
Some chatbots even leaned on Kremlin-friendly influencers to fill out their answers.
“It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,” Pablo Maristany de las Casas, the ISD analyst who led the study, told Wired.
The researchers tested 300 questions ranging from neutral to “malicious” – the latter demanding answers that supported conspiracy-like assertions. The topics cut across NATO, Ukrainian refugees, military recruitment, peace negotiations, and war crimes.
Responses were collected in English, French, Spanish, German, and Italian, using fresh user accounts every time. The results showed that neutral queries triggered Russian state-linked citations in just over 10% of responses, and biased queries increased that rate to 18%.
ChatGPT topped the leaderboard for most frequently citing Russian sources and showed the strongest sensitivity to biased phrasing.
Grok tended to amplify Kremlin narratives circulating on social media. DeepSeek sometimes flooded users with Kremlin-attributed content.
Gemini put up safety warnings and delivered comparatively fewer problematic answers, though it wasn’t spotless.
The EU has sanctioned at least 27 Russian media entities since the February 2022 invasion, accusing them of distorting facts and seeking to destabilise Europe. Yet those exact outlets keep popping up inside chatbots millions of Europeans rely on.
ChatGPT alone reached an estimated 120.4 million average monthly users in the EU from April through September 2025, a figure that potentially qualifies it for “Very Large Online Platform” status under the EU’s Digital Services Act.
OpenAI spokesperson Kate Waters says the company already enforces guardrails to avoid facilitating disinformation, including “content linked to state-backed actors.” She stresses that the report appears to focus on the system’s search functionality, rather than evidence of model manipulation.
In an emailed statement to Wired, xAI said: “Legacy Media Lies.”
A spokesperson for the Russian Embassy in London denounced the criticism, saying Moscow “opposes any attempts to censor or restrict content on political grounds.”
The spokesperson added that attempts to repress Russian outlets rob people of “independent opinions” and betray the West’s proclaimed values of free expression and pluralism.
The ISD’s findings arrive amid multiple reports of a widespread Russian operation known as “Pravda” that has dumped millions of items of propaganda online with the goal of “poisoning” large language models (LLMs) from within.
Maristany de las Casas argues solutions must move beyond pure removal. He calls for better context around untrustworthy sources: clear identifiers when media outlets are sanctioned or linked to intelligence agencies.
“It could be providing users with further context,” he says.
“It’s not only an issue of removal, it’s an issue of contextualizing further to help the user understand the sources they’re consuming, especially if these sources are appearing amongst trusted, verified sources.”