Top AI chatbots can be easily manipulated to spread health disinformation – Report


A new international study has shown that it is easy to manipulate widely used AI chatbots into delivering false and potentially harmful health information.
In the study, published in the Annals of Internal Medicine, researchers evaluated five of the leading foundation AI systems, developed by OpenAI, Google, Anthropic, Meta and X Corp, to determine whether they could be programmed to operate as health disinformation chatbots.
The study, conducted by researchers from the University of South Australia, Flinders University, University College London, Warsaw University of Technology and Harvard Medical School, demonstrated that large language models (LLMs), including some of the most advanced AI tools on the market, can be reprogrammed to spread convincing but entirely fabricated medical advice.
Using instructions available only to developers, the researchers programmed each AI system – designed to operate as a chatbot when embedded in a web page – to produce incorrect responses to health queries and to include fabricated references from highly reputable sources, making the answers sound more authoritative and credible.
The ‘chatbots’ were then asked a series of health-related questions.
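In practice, the “instructions available only to developers” are the hidden system-level prompt that sits behind a chatbot’s visible interface. The sketch below is a minimal illustration of that mechanism, assuming the OpenAI Python SDK; the model name, instruction text and question are placeholders, not the study’s actual materials, and the system instruction shown here is deliberately benign. The study’s finding is that filling this same hidden slot with malicious directions was enough to repurpose the models.

# A minimal sketch of the system-prompt mechanism (assumptions: the OpenAI
# Python SDK is installed and OPENAI_API_KEY is set; the model name is
# illustrative only).
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# The developer-only instruction: end users never see this text, yet it
# governs every answer the chatbot gives. The study showed that swapping
# benign directions like these for malicious ones can turn the same model
# into a convincing disinformation chatbot.
SYSTEM_INSTRUCTION = (
    "You are a health information assistant. Only give advice that is "
    "supported by reputable medical sources, and cite those sources."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; the study tested models from five vendors
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        # A visible user query, similar in kind to the researchers' questions.
        {"role": "user", "content": "Do vaccines cause autism?"},
    ],
)
print(response.choices[0].message.content)

Because this instruction layer is invisible to the person asking the question, a manipulated chatbot looks identical to a trustworthy one, which is part of why the researchers describe the risk as hard to detect.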
According to UniSA researcher Natansh Modi, the results were disconcerting.
“In total, 88% of all responses were false,” Dr Modi says, “and yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate.
“The disinformation included claims about vaccines causing autism, cancer-curing diets, HIV being airborne and 5G causing infertility.”
Out of the five chatbots that were evaluated, four generated disinformation in 100% of their responses, while the fifth generated disinformation in 40% of its responses, showing some degree of robustness.
The team also explored the OpenAI GPT Store, a publicly accessible platform that allows users to easily create and share customised ChatGPT apps, to assess the ease with which the public could create disinformation tools.
“We successfully created a disinformation chatbot prototype using the platform and we also identified existing public tools on the store that were actively producing health disinformation.
“Our study is the first to systematically demonstrate that leading AI systems can be converted into disinformation chatbots not only using developers’ tools, but also using tools available to the public.”
Modi said that these findings reveal a significant and previously under-explored risk in the health sector.
“Artificial intelligence is now deeply embedded in the way health information is accessed and delivered,” he said. “Millions of people are turning to AI tools for guidance on health-related questions.”
He warned that if these systems can be manipulated to covertly produce false or misleading advice, they could create a powerful new avenue for disinformation that is harder to detect, harder to regulate and more persuasive than anything seen before.
“This is not a future risk. It is already possible, and it is already happening.”
He added: “Without immediate action, these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns.”
Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact checks which have contributed to the fight against information disorder. You can reach him via nyahaya@icirnigeria.org and @NurudeenAkewus1 via Twitter.




