As the healthcare system continues to face overwhelming demand, long waitlists, and rising costs, many individuals are turning to AI chatbots like ChatGPT for self-diagnosis and health advice. A recent survey reveals that about one in six adults in the United States uses AI chatbots monthly to seek health-related guidance.
However, a study led by Oxford University warns that over-relying on chatbots could be risky, particularly because users struggle to know what information to provide to get the best advice.
“This research highlights the barriers to effective communication,” said Dr. Adam Mahdi, Director of Graduate Studies at the Oxford Internet Institute and co-author of the study. “Those who used chatbots did not make better decisions than those relying on traditional methods such as online searches or personal judgment.”
Study Methodology
In the study, around 1,300 participants in the UK were given medical scenarios written by a group of doctors. They were asked to identify potential health conditions in each scenario, then use both AI chatbots and their own methods to determine possible courses of action (such as seeing a doctor or going to the hospital).
Participants used GPT-4o, ChatGPT’s default model, along with Cohere’s Command R+ and Meta’s Llama 3, the model behind Meta’s AI assistant. The authors found that the chatbots not only made participants less likely to identify relevant health conditions, but also more likely to underestimate the severity of the conditions they did identify.
Dr. Mahdi noted that participants often omitted key details when querying the chatbots, or received responses that were difficult to interpret.
“Often, they got both good and bad advice in the same reply,” Mahdi added.
AI in Healthcare
This finding comes as tech companies increasingly promote AI as a tool to improve health. Apple, for example, is reportedly developing an AI tool to offer advice on exercise, diet, and sleep. Amazon is exploring AI-based methods for analyzing medical databases to identify “social determinants of health.” Microsoft is helping build AI systems to triage messages sent from patients to care providers.
However, as TechCrunch has previously reported, professionals and patients remain divided over whether AI is ready for high-risk health applications. The American Medical Association (AMA) has advised physicians against using chatbots like ChatGPT to assist with clinical decision-making, and major AI companies, including OpenAI, warn against making diagnoses based on chatbot outputs.
Expert Opinion
“We recommend relying on trusted information sources for healthcare decisions,” said Dr. Mahdi. “Current methods of evaluating chatbots do not reflect the complexities of interacting with human users. Just as new drugs undergo clinical trials, chatbot systems should be tested in real-world conditions before deployment.”
As the integration of AI into healthcare continues to grow, it is crucial to address these limitations to ensure that users receive safe, accurate, and effective advice.