Mind, the leading mental health charity in England and Wales, has launched a year-long inquiry into the impact of artificial intelligence on mental health, prompted by concerns raised following a Guardian investigation into Google’s AI Overviews. The inquiry will examine how the AI-generated summaries, displayed prominently above traditional search results and accessed by an estimated 2 billion users monthly, are providing potentially “very dangerous” mental health advice.
Rosie Weatherley, information content manager at Mind, detailed the risks posed by these AI summaries, noting a significant shift in the quality of information available through Google’s search function. “Over three decades, Google designed and delivered a search engine where credible and accessible health content could rise to the top of the results,” Weatherley stated. “Searching online wasn’t perfect, but it usually worked well. Users had a good chance of clicking through to a credible health website that answered their query.”
Weatherley argued that AI Overviews have replaced this system with “a clinical-sounding summary that gives an illusion of definitiveness,” a change she described as “a very seductive swap, but not a responsible one.” She explained that the summaries often cut a user’s search short, leaving them with incomplete or inaccurate information.
Mind’s internal testing revealed a series of alarming inaccuracies within the AI Overviews. According to Weatherley, a team of mental health experts spent 20 minutes entering search queries common among people experiencing mental health difficulties and encountered responses asserting that starvation is a healthy practice. One tester received the suggestion that mental health problems stem from chemical imbalances in the brain, while another was incorrectly told that a perceived stalker was, in fact, a real threat. A further search returned the claim that 60% of mental health benefit claims are fraudulent.
“In each of these examples we are seeing how AI Overviews are flattening information about highly sensitive and nuanced areas into neat answers,” Weatherley said. “And when you take out important context and nuance and present it in the way AI Overviews do, almost anything can seem plausible. This process is especially harmful for people who are likely to be in some level of distress.”
Weatherley criticized Google’s reactive approach to addressing inaccuracies, characterizing it as a “whack-a-mole” style of problem-solving that is insufficient given the scale of the platform and the potential harm caused. She emphasized that a company of Google’s size and profitability should dedicate more resources to ensuring the accuracy of the information provided through its AI tools.
The charity also noted that while Google has implemented measures to limit access to information about harmful acts, such as suicide methods, the risk of encountering inaccurate or misleading information remains significant for people actively searching for help. Weatherley highlighted that even searches for crisis information can yield “haphazardly collaged” and contradictory advice within the AI Overview.
“Perhaps AI has enormous potential to improve lives, but right now, the risks are really worrying,” Weatherley concluded. “Google will only protect you from the potential faults of AI Overviews when it thinks you’re in acute distress. People need and deserve access to constructive, empathetic, careful and nuanced information at all times.”