
AI in Healthcare: Bias, Privacy, and Predicting Health Outcomes

by Dr. Michael Lee – Health Editor

## AI Medical Tools Show Bias, Raising Concerns Over Equitable Healthcare

Artificial intelligence is rapidly being integrated into healthcare, offering potential benefits in diagnosis and prediction, but recent scrutiny reveals a concerning trend: AI medical tools may downplay symptoms in women and ethnic minorities. Several projects utilizing large datasets are facing questions about bias and data privacy. Google acknowledges the seriousness of model bias and is actively developing privacy techniques to sanitize sensitive datasets and implement safeguards against discrimination. Researchers suggest a proactive approach to mitigating bias involves identifying unsuitable datasets *before* training and prioritizing diverse, representative health data.

Several AI models are already in development or use. OpenEvidence, utilized by 400,000 US doctors, summarizes patient histories and retrieves data, relying on medical journals, FDA labels, health guidelines, and expert reviews. Crucially, every AI-generated output is supported by a citation to its source.

In the UK, researchers at University College London and King’s College London partnered with the National Health Service (NHS) to create Foresight, a generative AI model trained on anonymized data from 57 million patients, including records of hospital admissions and COVID-19 vaccinations. Foresight aims to predict potential health outcomes such as hospitalization or heart attacks. “Working with national-scale data allows us to represent the full kind of kaleidoscopic state of England in terms of demographics and diseases,” explained Chris Tomlinson, honorary senior research fellow at UCL and lead researcher for the Foresight team. He believes this approach offers a stronger foundation than more generalized datasets.

European scientists have also developed Delphi-2M, an AI model predicting future disease susceptibility based on anonymized records from 400,000 participants in the UK Biobank.

However, the use of large-scale patient data raises significant privacy concerns. The NHS Foresight project was temporarily paused in June to allow the UK’s Information Commissioner’s Office to investigate a data protection complaint filed by the British Medical Association and the Royal College of General Practitioners, specifically regarding the use of sensitive health data during model training.

Beyond privacy, experts caution that AI systems are prone to “hallucinations” (fabricating answers), a potentially dangerous flaw in a medical context. Despite these challenges, MIT’s Ghassemi emphasizes the positive impact of AI on healthcare. “My hope is that we will start to refocus models in health on addressing crucial health gaps, not adding an extra percent to task performance that the doctors are honestly pretty good at anyway.”

