ChatGPT is injurious to health: Why you should not take medical advice from OpenAI's chatbot
ChatGPT's free edition is limited to data through September 2021, which could result in outdated recommendations in the quickly evolving medical field. Image: Reuters
Whether you use Google or ChatGPT, looking up illness symptoms online can put your health at risk.
Given the popularity of the artificial intelligence chatbot developed by OpenAI, researchers have warned against using the free edition of the tool to ask questions about medications.
This is because the chatbot may provide inaccurate or incomplete responses, or in some cases no response at all.
Let’s take a closer look.
Inaccurate answers to medical questions
According to a study by Long Island University, ChatGPT provided inaccurate or incomplete answers to nearly three-fourths of drug-related questions.
The researchers posed 39 medication-related questions, drawn from the university's College of Pharmacy drug information service, to the chatbot. The AI chatbot's answers were then compared with responses written and reviewed by trained pharmacists.
According to the study, ChatGPT provided accurate answers to only around 10 questions, roughly a quarter of the total. Its responses to the remaining 29 questions were inaccurate, incomplete, or failed to address the question at all.
The findings were presented Tuesday at the annual meeting of the American Society of Health-System Pharmacists in Anaheim, California.
According to CNBC, the researchers asked ChatGPT to provide references so they could verify the accuracy of its responses. The chatbot supplied references in only eight of its responses, and each of those references cited non-existent sources.
Similarly, previous studies have shown that ChatGPT can produce convincing fake scientific references when asked medical questions, even including the names of real authors who have published in journals.
The study drew attention to one case in which ChatGPT falsely claimed that there is no interaction between Pfizer's Paxlovid and the blood-pressure-lowering drug verapamil. In reality, taking these drugs together can dangerously lower blood pressure, putting patients at risk.
Experts advise caution
“Using ChatGPT to address this question would put a patient at risk for an unwanted and preventable drug interaction,” lead author Sara Grossman, an associate professor of pharmacy practice at LIU, wrote in an email to CNN.
Notably, ChatGPT’s free edition is limited to data through September 2021, which could result in outdated recommendations in the rapidly evolving medical field.
According to the study’s findings, anyone considering using ChatGPT for drug-related information, including patients and healthcare professionals, should proceed with caution. Whether they use the free version or the paid version with access to real-time data, they should speak with healthcare professionals directly for any medical advice.
“Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information,” India Today quoted her as saying.
OpenAI’s response
Responding to the study, an OpenAI spokesperson stressed that users are clearly advised against using ChatGPT’s responses as “a substitute for professional medical advice or traditional care.”
The spokesperson also shared a section of OpenAI’s usage policy, which states that the company’s “models are not fine-tuned to provide medical information,” reported CNBC.
People should never use ChatGPT to provide diagnostic or treatment services for serious medical conditions, the usage policy said.
ChatGPT enjoys global popularity
Launched in November 2022, ChatGPT is an experimental AI chatbot from OpenAI that went on to become the fastest-growing consumer application in history, with over 100 million users signing up in just two months.
However, the chatbot has also raised issues related to misinformation, fraud, discrimination, and intellectual property along the way.
Similar cases of incorrect responses from ChatGPT have been reported in a number of studies. According to The Washington Post, the Federal Trade Commission launched an investigation into the chatbot’s accuracy and consumer protections in July.
Published on: December 11, 2023 19:01:15 IST