The potential for artificial intelligence to play a larger role in mental healthcare is gaining traction, with a growing number of people already turning to AI chatbots like ChatGPT for guidance on mental wellbeing. Now, a novel idea is being explored: annual mental health check-ups conducted via AI, mirroring the routine physical exams many already undergo.
The concept, outlined by AI scientist Lance Eliot in a recent Forbes column, centers on the accessibility and affordability of AI. Unlike traditional therapy, AI-driven check-ups could be available 24/7, at little to no cost, and completed in a matter of minutes. This ease of access could potentially democratize mental healthcare, reaching individuals who might otherwise face barriers to treatment.
The idea is not without its critics, however, and it raises significant questions about the reliability and safety of relying on AI for such a sensitive task. Concerns center on the potential for AI to misdiagnose conditions, offer inappropriate advice, or even exacerbate existing mental health issues. A recent lawsuit against OpenAI, highlighted by Eliot, underscores these risks, alleging a lack of safeguards in AI systems that could lead to harmful cognitive advisement and the potential for AI to contribute to delusional thinking.
Current large language models (LLMs), such as ChatGPT, Claude, and Gemini, do not match the capabilities of human therapists, though specialized LLMs are under development. A scoping review published in Nature in February 2025 found that while LLMs show promise in handling human-like conversations, their effectiveness in mental health care remains uncertain. The review also noted a lack of standardized evaluation methods and concerns about transparency and reproducibility due to reliance on proprietary models.
Despite these concerns, the use of AI in mental health is already widespread. According to Eliot, ChatGPT alone has over 900 million weekly active users, a significant portion of whom utilize the platform for mental health-related discussions. A scoping review of AI-driven digital interventions in mental health care, published in Healthcare (Basel) in May 2025, identified chatbots, natural language processing tools, and machine learning models as the most common AI modalities used for support, monitoring, and self-management. However, the study emphasized that these technologies are primarily used as supplementary tools rather than standalone treatments.
The idea of annual AI mental health check-ups draws a parallel to annual physicals, where mental health is often briefly addressed. A 2021 study published in Archives of Public Health found that 82% of individuals aged 60 and over, and 67.3% of those aged 18-59, reported having an annual check-up. Proponents suggest that AI check-ups could be particularly beneficial for older adults, potentially aiding in the early detection of cognitive decline and dementia.
To explore the practical application of an AI mental health check-up, Eliot tested a templated prompt on several LLMs. The prompt instructed the AI to conduct a check-up, focusing on mood, stress, sleep, and energy levels, and to administer standardized screening instruments like the PHQ-9 for mood and GAD-7 for anxiety. The AI responded by asking relevant questions and, based on simulated responses indicating mild anxiety, recommended seeking support from a licensed mental health professional.
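Both instruments named in Eliot's prompt are short, self-scored questionnaires, so the arithmetic an AI would perform behind the scenes is easy to illustrate. The sketch below scores a set of item responses against the published severity bands for each instrument; the function names are invented for this example, and the output is an illustration of the scoring rules, not a diagnostic tool.

```python
def score_phq9(responses):
    """Score a PHQ-9 questionnaire: nine items, each rated 0-3 (total 0-27)."""
    if len(responses) != 9 or not all(0 <= r <= 3 for r in responses):
        raise ValueError("PHQ-9 expects nine item scores in the range 0-3")
    total = sum(responses)
    # Published PHQ-9 severity bands.
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    elif total <= 19:
        severity = "moderately severe"
    else:
        severity = "severe"
    return total, severity


def score_gad7(responses):
    """Score a GAD-7 questionnaire: seven items, each rated 0-3 (total 0-21)."""
    if len(responses) != 7 or not all(0 <= r <= 3 for r in responses):
        raise ValueError("GAD-7 expects seven item scores in the range 0-3")
    total = sum(responses)
    # Published GAD-7 severity bands.
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    else:
        severity = "severe"
    return total, severity
```

A GAD-7 total in the 5-9 range corresponds to the "mild anxiety" result described in Eliot's simulated exchange, which is the point at which the AI recommended consulting a licensed professional.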
However, experts caution against relying solely on AI for mental health assessments. Concerns remain about the potential for false positives or false negatives, as well as the risk of AI hallucinations, instances where the AI generates plausible but factually incorrect information. Privacy is also a concern, as AI providers often reserve the right to inspect and utilize user data for training purposes.
The integration of AI into mental healthcare is an ongoing experiment, with both potential benefits and risks. While AI could expand access to care and provide early detection of mental health concerns, it is crucial to address the limitations and potential harms associated with its use. The question remains whether AI can serve as a valuable tool for preventative mental healthcare without compromising the quality and safety of care.