Risks Associated With Unregulated Use Of AI Chatbots In Healthcare | TheHealthSite.com
Written by Longjam Dineshwori | Updated: July 4, 2023, 8:01 PM IST
Large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Med-PaLM and Meta’s LLaMA have some great use cases. These chatbots also hold great medical potential, but their unregulated use in healthcare can be dangerous. How to regulate LLMs in healthcare is one of the most pressing global issues today. Let’s delve into the potential benefits and risks associated with the use of AI chatbots as medical devices.
Large language models are capable of generating highly convincing human-like responses and engaging in interactive conversations. But they often produce wrong or inappropriate statements, and wrong responses to medical questions can have dangerous consequences. This is the biggest fear experts have regarding the use of AI chatbots as medical devices.
Prof. Stephen Gilbert, Professor for Medical Device Regulatory Science at the Else Kröner Fresenius Center for Digital Health at Technische Universität Dresden (TU Dresden), is not in favour of using current LLM-chatbots in healthcare.
In an article, Prof. Gilbert stated that these chatbots are unsafe tools and stressed the need to develop new frameworks that ensure patient safety.
Do you research your symptoms on the internet before seeking medical advice? You’re not alone. Today, search engines play a key role in people’s decision-making process.
LLM-chatbots are known for their remarkable conversational skills and highly convincing responses, and experts fear that integrating them into search engines may increase users’ confidence in, and dependency on, the information a chatbot provides.
In the article, Prof. Gilbert noted that LLMs can provide extremely dangerous information in response to medical questions.
The article further mentioned that chat-interfaced LLMs had been used unethically in ‘experiments’ on patients without consent, and it highlighted the need for regulatory control over medical LLM use.
According to Prof. Gilbert, LLM-chatbots developed today do not meet key principles for AI in healthcare, such as bias control, explainability, systems of oversight, validation and transparency.
The article also talked about how developers can design LLM-based tools that could be approved as medical devices, and creation of new frameworks that preserve patient safety.
For medical use, the accuracy of chatbots must be improved, and their safety and clinical efficacy must be demonstrated and approved by regulators, Prof. Gilbert added.