Mind Launches AI & Mental Health Inquiry After Google’s ‘Dangerous’ Advice

by Rachel Kim – Technology Editor

The UK mental health charity Mind is launching a major inquiry into the impact of artificial intelligence on mental health, following a Guardian investigation that revealed Google’s AI Overviews were providing “very dangerous” medical advice. The year-long commission will assess the risks and necessary safeguards as AI becomes increasingly prevalent in the lives of people experiencing mental health issues.

The inquiry, described as the first of its kind globally, will convene leading doctors, mental health professionals, individuals with lived experience, healthcare providers, policymakers, and technology companies. According to the charity, the aim is to establish a safer digital mental health environment through stronger regulation and standardized safeguards.

The Guardian’s reporting detailed instances where Google’s AI Overviews, which generate summaries appearing above traditional search results and are viewed by an estimated 2 billion people monthly, delivered false and misleading health information. Although Google has since removed AI Overviews for some medical searches, Mind CEO Dr. Sarah Hughes stated that “dangerously incorrect” mental health advice remains accessible to the public, potentially endangering lives.

Hughes emphasized the potential benefits of AI in improving mental healthcare access and strengthening public services, but cautioned that these advantages are contingent upon responsible development and deployment. “We believe AI has enormous potential to improve the lives of people with mental health problems, widen access to support, and strengthen public services. But that potential will only be realised if it’s developed and deployed responsibly, with safeguards proportionate to the risks,” she said. “The issues exposed by the Guardian’s reporting are among the reasons we’re launching Mind’s commission on AI and mental health, to examine the risks, opportunities and safeguards needed as AI becomes more deeply embedded in everyday life.”

The investigation highlighted inaccurate AI-generated advice across a range of health concerns, including cancer, liver disease, women’s health, and mental health conditions. Experts characterized some AI Overviews related to psychosis and eating disorders as “incorrect, harmful or could lead people to avoid seeking help.” Google, in response, has maintained that its AI Overviews are “helpful” and “reliable,” but acknowledged it is working to address inaccuracies.

Rosie Weatherley, information content manager at Mind, noted a shift in the quality of information available through Google searches. While previous searches typically directed users to credible health websites offering detailed information and diverse perspectives, AI Overviews present a “clinical-sounding summary” that creates a false sense of certainty. “They give the user more of one form of clarity (brevity and plain English), while giving them less of another form of clarity (security in the source of the information, and how much to trust it). It’s a very seductive swap, but not a responsible one,” Weatherley said.

Hughes further warned that vulnerable individuals are receiving “dangerously incorrect guidance on mental health,” including advice that could discourage treatment, reinforce stigma, or, in severe cases, threaten lives. “People deserve information that is safe, accurate and grounded in evidence, not untested technology presented with a veneer of confidence,” she stated.

A Google spokesperson said the company “invests significantly in the quality of AI Overviews, particularly for topics like health” and displays crisis hotlines when systems detect potential distress. The spokesperson declined to comment on the accuracy of specific examples referenced by the Guardian without review.

The Mind commission will gather evidence on the intersection of AI and mental health, providing a platform for individuals with lived experience to share their perspectives. The commission’s findings are expected to inform recommendations for a safer digital mental health ecosystem.
