Patients Increasingly Turn to AI Chatbots for Medical Information, Despite Accuracy Concerns

Washington, D.C. – A growing number of patients are using artificial intelligence (AI) chatbots such as ChatGPT, Claude, and Gemini to interpret medical test results and records, particularly when facing delays in reaching their physicians, according to emerging trends and recent data. Patients are reportedly uploading sensitive health information – including lab results and imaging reports – to these platforms seeking explanations while awaiting doctor callbacks or appointments.

A KFF Tracking Poll from August 2024 found that approximately 17% of adults use AI chatbots at least monthly to seek health information and advice, a figure that rises to 25% among adults under the age of 30. The poll also highlights significant skepticism about the accuracy of this information: 63% of adults said they were “not too confident” or “not at all confident” that health information from AI chatbots is accurate. Conversely, 36% indicated some level of confidence, with 5% “very” confident and 31% “somewhat” confident.

While AI chatbots can help patients understand their health data and reduce anxiety, medical professionals and researchers caution against relying on these tools because of inherent risks. A key concern is the phenomenon of AI “hallucinations,” in which chatbots generate plausible but factually incorrect information, often presented in the same authoritative tone as accurate data. This makes it difficult for individuals without medical training – and even for medical professionals – to identify errors. A study published in BMC Medical Education in March 2025 found that general practice trainees achieved a mean accuracy of only 55% in detecting AI-generated medical hallucinations.

Recent research suggests that the accuracy of AI-generated responses can be improved through refined prompting strategies. A study in JAMIA Open from April 2025 demonstrated that instructing a chatbot to adopt the persona of a clinician improved accuracy. Further, an August 2025 study in Communications Medicine showed that incorporating safeguards into prompts, such as asking the AI to rely solely on clinically validated information, reduced the occurrence of hallucinations. Researchers suggest that educating users on effective prompting techniques could improve the utility of these tools, but emphasize that these strategies do not eliminate errors entirely. Experts recommend that AI chatbots be used as supplementary tools, rather than primary sources of health information, as highlighted in recent recommendations published in PMC.
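To illustrate the kind of prompting strategies these studies describe, the sketch below combines a clinician persona with a validated-information safeguard in a single system prompt. It is a minimal illustration only: the OpenAI Python SDK and the “gpt-4o” model name are assumptions chosen for demonstration, not tools named by the researchers, and the prompt wording is hypothetical rather than the validated phrasing used in the studies.

```python
# Minimal sketch of persona + safeguard prompting (illustrative only).
# Assumes the OpenAI Python SDK; neither the SDK nor the "gpt-4o"
# model name is specified by the studies cited above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Persona instruction (per the JAMIA Open finding) plus a safeguard
# asking the model to rely only on clinically validated information
# (per the Communications Medicine finding). Wording is hypothetical.
SYSTEM_PROMPT = (
    "You are an experienced primary care clinician. "
    "Explain medical test results in plain language. "
    "Rely only on clinically validated information; if you are "
    "uncertain or the data is ambiguous, say so explicitly and "
    "recommend that the patient confirm with their own physician."
)

def explain_results(report_text: str) -> str:
    """Ask the chatbot to explain a lab or imaging report.

    Note: uploading real patient records to a third-party service
    raises the privacy concerns described earlier in this article.
    """
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Please explain this report:\n{report_text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(explain_results("Hemoglobin A1c: 6.9% (reference range 4.0-5.6%)"))
```

Even with safeguards like these built into the prompt, the studies above report reduced, not eliminated, error rates, so any output should still be verified with a clinician.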
