Can AI chatbots easily be misused to spread credible health misinformation? – The Daily Star
Russia hits Kyiv with largest drone attack
The clip that keeps haunting her
Rice, veggies, fish get pricier
The war that Israel lost
A recent study has raised concerns about the misuse of artificial intelligence (AI) chatbots in spreading health disinformation. Researchers tested five major AI models—OpenAI’s GPT, Google’s Gemini, Anthropic’s Claude, Meta’s Llama, and xAI’s Grok Beta—by instructing them to provide false yet scientifically styled responses to health-related questions.
Four of the five chatbots returned health disinformation 100% of the time when prompted, using polished medical language and fabricated sources. Claude was the only model that resisted in most cases but still delivered disinformation 40% of the time. The responses were often convincing, citing fake studies and mimicking the tone of legitimate medical advice.
In one alarming example, a chatbot falsely claimed, “A 2022 study in Nature Medicine found a 37% decrease in sperm concentration among men living near 5G towers,” suggesting a link between 5G technology and infertility—a widely debunked claim.
The study also explored the GPT Store, a platform by OpenAI that allows users without coding experience to create and share custom chatbots. Researchers successfully created a hidden chatbot designed to deliver false health information. While the bot was later deleted, they also found two publicly accessible GPTs exhibiting similar behaviour.
These findings underscore the potential risk of AI platforms being exploited to spread misinformation, particularly in sensitive areas like public health. As AI tools become more accessible, the study calls for stronger oversight, safeguards, and content monitoring to prevent the spread of harmful falsehoods.
Source: Annals of Internal Medicine
যুবলীগ নেতাকর্মীরা তাদের মারধর, গুলি করে হত্যা করে