AI Health Chatbots: Risks, Inaccuracies & Why Experts Are Worried

In January, OpenAI launched ChatGPT Health, a new version of its chatbot designed to analyze users’ medical records along with data from wellness apps and wearable devices. The release followed reports that hundreds of millions of people were already seeking health advice from general-purpose chatbots like ChatGPT and Claude, according to the Associated Press.

OpenAI says the new platform, built with input from more than 260 physicians across 60 countries and a wide range of specialties, takes a more cautious and limited approach to health-related queries than its standard chatbot. The company reviewed health-related model responses more than 600,000 times during development, aiming to reduce the risk of inaccurate or harmful advice. ChatGPT Health is currently available via a waiting list.

The introduction of specialized health chatbots comes amid growing concerns about the reliability of AI-driven medical advice. Experts caution that while these tools can summarize complex information and help patients prepare for doctor’s visits, they are not substitutes for professional medical care. Anthropic, a competitor to OpenAI, offers similar health-focused features within its Claude chatbot for some users.

According to a report in CNET, more than 5% of all ChatGPT messages globally relate to healthcare, with over 40 million weekly active users posing health-related questions. This surge in demand prompted OpenAI to create a dedicated space within ChatGPT for health-related interactions. The platform is designed to help users understand medical information and prepare for conversations with clinicians, but explicitly avoids offering diagnoses or treatment recommendations.

Recent anecdotes highlight both the potential benefits and risks of relying on AI for medical guidance. Bethany Crystal, a New York consultant, recounted to NPR how ChatGPT prompted her to seek immediate emergency care for a rare autoimmune disorder, potentially saving her life. However, experts warn that AI platforms can “hallucinate” or provide incorrect information, and may not accurately assess the severity of medical emergencies.

As The Washington Times reported, AI health chatbots are booming, but experts warn they are no substitute for real care. The tools are intended to augment, not replace, the expertise of human healthcare professionals; the potential for misdiagnosis or delayed treatment remains a significant concern.

OpenAI’s ChatGPT Health is not a standalone application but a dedicated tab within the existing ChatGPT interface, available on both web and mobile platforms. The company emphasizes that the platform is not intended to diagnose or treat medical conditions, but rather to empower patients with information and facilitate more informed discussions with their doctors.
