Almost 40 Percent of Americans Trust Medical Advice From AI Chatbots
By Miles Klee
As AI technology becomes more integrated into every aspect of daily life, people have grown increasingly comfortable asking large language model chatbots for help with all kinds of questions — including, it seems, in matters of life and death.
In a new survey of 2,000 Americans commissioned from research consultancy Censuswide by the in-home IV therapy and medical services company Drip Hydration, 39 percent of respondents said they trusted tools such as ChatGPT to help them navigate healthcare decisions. That sentiment outpaced both neutral feelings about whether LLMs can be helpful with medical queries (expressed by 31 percent of those surveyed) and outright distrust of chatbots in these matters (30 percent).
This data comes just weeks after researchers published a study showing that AI models from companies including OpenAI, Anthropic, DeepSeek, Google, and xAI have largely stopped including disclaimers cautioning users against relying on their medical advice, which were once standard across the industry. Whereas 26 percent of chatbot answers to health queries back in 2022 contained some kind of warning about the LLM not being a doctor, fewer than one percent of responses in 2025 had such a reminder. In cases where users uploaded medical images for the bots to analyze, just over one percent of responses now include a warning against following the bot's medical advice — down from about 20 percent three years ago.
“There are a lot of headlines claiming AI is better than physicians,” Roxana Daneshjou, an assistant professor of biomedical data science at Stanford University and a co-author of that paper, told the MIT Technology Review about her findings. “Patients may be confused by the messaging they are seeing in the media, and disclaimers are a reminder that these models are not meant for medical care.”
This lack of context in chat logs about everything from health symptoms to potential diagnoses and pharmaceutical side effects may be contributing to trust in chatbots as the virtual equivalent of qualified medical experts. But the Censuswide survey also suggested that dissatisfaction or concerns about the state of healthcare in the U.S. could be driving reliance on AI for input on treatment. The data showed that 31 percent of Americans — and 37 percent of women — are using chatbots to prepare questions for doctor visits, while 23 percent are looking to avoid medical expenses. (Men are somewhat more likely to give this reason, at 27 percent.) Overall, 20 percent of respondents said that they had turned to LLMs for a second opinion, while 17 percent believe they are useful for avoiding bias or unfair treatment in the healthcare system. (Ironically, AI programs themselves have been shown to reproduce ethnic and gender prejudices, and will even replicate existing healthcare inequities by making unfounded assumptions about patients based on their race.)
Censuswide found that men are more likely to have confidence in medical advice from a bot: 48 percent said this was a reliable source of health information, compared to 31 percent of women. Middle-aged users were also more trusting, with 52 percent of adults aged 45 to 54 saying they could count on AI models for help with medical questions, while older and younger demographics were somewhat less sure of the tech’s capacity as a personal health tool.
Obviously, chatbots cannot examine an individual as a doctor would, and they can make dangerous recommendations based on inaccurate conclusions. Nevertheless, as another recent study found, people rate low-accuracy AI-generated responses about their medical situation as highly as those written by actual physicians. Participants in the study also “indicated a high tendency to follow the potentially harmful medical advice and incorrectly seek unnecessary medical attention as a result of the response provided” by chatbots, the authors wrote.
Additionally, LLMs are not compliant with the Health Insurance Portability and Accountability Act (HIPAA), and they aren't designed to handle sensitive health data. Uploading medical images, for example, could put users at risk in the event of a privacy breach.
Yet these concerns apparently haven't dissuaded many Americans from treating ChatGPT and the like as doctors and therapists who are on call 24/7. That convenience, the various obstacles to treatment in the U.S., and the default view of chatbots as authoritative experts on any subject — particularly when they fail to tell you that they aren't — have no doubt contributed to people being misled by bots about their physical and mental wellbeing. Unless the culture normalizes a deeper skepticism toward AI as a source of infinite knowledge, this is a societal malady that may just get worse.