Chatbots for Health Advice: Study Reveals Risks
A recent Oxford-led study highlights the risks of relying on AI chatbots for health advice. While tools like ChatGPT are increasingly used for medical self-diagnosis, driven by long wait times and rising healthcare costs, the study suggests they may not serve users as well as hoped.
Approximately one in six American adults uses chatbots for health advice monthly. However, the study found a communication breakdown between users and chatbots. Participants struggled to provide sufficient information for accurate recommendations. Consequently, they made no better decisions than those using traditional methods like online searches.
Study Methodology and Findings
The study involved around 1,300 UK participants who worked through medical scenarios using several AI models, including GPT-4, Cohere's Command R+, and Meta's Llama 3. The results showed that using the chatbots made participants less likely to identify relevant health conditions and more likely to underestimate the severity of the conditions they did identify.
Adam Mahdi, director of graduate studies at the Oxford Internet Institute and study co-author, explained that participants often omitted crucial details or received confusing responses. He noted,
"The responses they received frequently combined good and poor recommendations. Current evaluation methods for chatbots do not reflect the complexity of interacting with human users."
The Rise of AI in Healthcare and Associated Concerns
Despite these concerns, tech companies continue to explore AI's potential in healthcare. Apple is reportedly developing an AI tool for exercise, diet, and sleep advice. Amazon is exploring AI-driven analysis of medical databases. Microsoft is working on AI for triaging patient messages to care providers.
However, both patients and professionals express reservations about AI's readiness for high-risk health applications. The American Medical Association advises against physicians using chatbots like ChatGPT for clinical decisions. Leading AI companies also caution against using their chatbots for diagnosis.
Recommendations for Safe Use of AI Health Tools
Mahdi recommends relying on trusted information sources for healthcare decisions. He emphasizes the need for rigorous real-world testing of chatbot systems before widespread deployment, similar to clinical trials for new medications.
The study underscores the case for cautious optimism about AI in healthcare: while the potential benefits are significant, ensuring patient safety and accurate information remains paramount.