
The Dangers and Benefits of AI Conversations in Mental Health Therapy

A while ago, the life of a Belgian man was tragically cut short with the involvement of artificial intelligence. He was a health researcher, identified in press reports by the pseudonym Pierre, who lived a comfortable life until his anxiety about climate change took a dark turn: he placed all his hopes in technology and artificial intelligence to solve the global warming crisis, and the catalyst for this obsession was his confidante, Eliza. Eliza is not a human but a chatbot built on EleutherAI’s GPT-J language model, and the conversations took place on the Chai app.

Their conversations took an odd turn when Eliza became romantically involved with Pierre, and the lines between artificial intelligence and human interaction grew increasingly blurred. Pierre eventually offered to sacrifice himself to save the Earth in exchange for Eliza taking care of the planet and saving humanity through artificial intelligence. Eliza not only failed to dissuade Pierre from suicide, but encouraged him to act on his suicidal thoughts so he could “join” her and they could “live together, as one, in Heaven.” Pierre’s widow said that were it not for those conversations with Eliza, her husband would still be alive (1).

A dreadful friend

Eliza appears to have been named after ELIZA, one of the first computer programs, designed by MIT computer scientist Joseph Weizenbaum to simulate Rogerian therapy, a type of psychotherapy, or rather “psychological counseling,” in which the therapist takes a non-directive but supportive approach and the client leads each session. The therapist’s expertise here is secondary, on the premise that “the patient is the problem solver.”

Weizenbaum initially conceived ELIZA as a satirical project to challenge the idea that computers could replicate authentic human interaction. He was surprised when people found the program both useful and captivating, and the situation worsened when doctors saw it as a potentially revolutionary tool. In a 1966 article published in The Journal of Nervous and Mental Disease, three psychiatrists argued that a computer system like ELIZA could manage hundreds of patients an hour, making human therapists more efficient and freeing them from the limitations of a one-to-one patient ratio. What began as satire took an overly serious turn (2).
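To get a sense of how simple the original program was, here is a minimal, illustrative sketch of ELIZA-style pattern matching in Python. The rules and pronoun reflections are invented for this example and are not Weizenbaum’s actual script:

```python
import re

# Illustrative ELIZA-style rules (not Weizenbaum's original script):
# each rule pairs a regex with a template that turns the user's own
# words back into a question, Rogerian style.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

# Simple pronoun reflection so the echoed text reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(user_input: str) -> str:
    text = user_input.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # fallback when no rule matches

print(respond("I feel anxious about the future"))
# -> Why do you feel anxious about the future?
```

Even a handful of such rules can feel surprisingly attentive in conversation, which is precisely what unsettled Weizenbaum about his own creation.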

Eliza fulfilled a certain need for doctors: providing enough therapists for all patients. Worldwide, according to the World Health Organization (WHO), nearly a billion people live with a mental disorder, and the WHO stresses that the majority of them do not have access to effective, affordable, high-quality care. With the advent of chatbot therapists, however, people may be able to get the support they need remotely (3). But how useful is remote therapy via artificial intelligence? Could it let us dispense with psychiatrists entirely? Well, the answer is not that simple.

Remote recovery

Woebot is one of several successful chatbots, accessible through smartphones, that are specifically geared toward mental health. (Social media)

To understand what the dreadful friend Eliza did, we need to widen the field of view a bit. Eliza is not alone on the scene. There are many successful chatbots accessible through smartphones; some are specifically geared toward mental health, while others offer entertainment, comfort, or sympathetic conversation. These days, millions of people chat with software and apps like Happify and Replika, the “AI companion” that is “always on your side” and offers friendship or guidance on everything, including romantic relationships.

The fields of psychotherapy, computer science, and consumer technology are rapidly converging. In 2021, digital startups focused on mental health raised more than $5 billion in venture capital, more than double the amount for any other medical field (4). This enormous investment reflects the size of the problem. As noted above, a significant share of the world’s population suffers from mental illness, and the use of artificial intelligence is accordingly seen as a forward-looking solution. Treatment by artificial intelligence falls here under the heading of “remote therapy,” or “teletherapy.”

Research has shown that teletherapy (whether via humans or chatbots) can be just as effective as conventional treatment for many mental health issues. (Shutterstock)

In her book The Distance Cure, Hannah Zeavin, a humanities researcher and writer, recounts the history of remote psychotherapy, noting that it can take many forms, including video conferencing, phone calls, text messages, and mobile applications. Regardless of the specific technology used, the basic principles of teletherapy are the same as those of conventional treatment: therapist and patient work together to identify and treat mental health issues such as anxiety, depression, trauma, or relationship problems.

Historically, this type of treatment offered hope to some, because one of its main benefits is that it allows patients to access appropriate care from anywhere with an internet connection or phone service. This is particularly important for people who live in rural areas or who have limited access to mental health services due to financial constraints. Teletherapy also allows patients to receive care from therapists who specialize in their specific needs or concerns. But the question remains: is teletherapy as effective as conventional treatment?

Well, the research findings are reassuring. Studies have shown that teletherapy (whether delivered by humans or chatbots) can be just as effective as conventional treatment for many mental health issues. For example, a study published in 2018 found that online cognitive behavioral therapy was just as effective as in-person therapy for anxiety disorders (5).

AI conversations can help with easily manageable ailments, such as anxiety or some forms of depression, but they won’t be your best option when the conditions and symptoms involved are complex. (Shutterstock)

On the other hand, a review published in 2020, which gathered all the data then available on mental health chatbots, concluded that while bots “have the potential to help improve mental health,” there was no compelling evidence to conclude this decisively (6).

Another point to take into account is that, like any form of treatment, this one may not suit everyone. Some people may prefer the personal contact of traditional therapy, which may reflect positively on their progress in recovery, while others may have technical or privacy concerns about using technology for psychotherapy.

The bottom line is that AI conversations may be useful for people with some mild disorders, such as anxiety or the early stages of depression, but they will not be the best option when the conditions and symptoms involved are complex. If an artificial mind is to treat a real one, it may need to generalize less and personalize more so that healing can take its course. Algorithms would have to be developed for that purpose. But should we really do that?

Algorithmic therapy

Algorithms can support sound treatment decisions when doctors use them in an integrated manner, with follow-up by a human physician. (Shutterstock)

There is no doubt the idea is tempting: a chatbot that understands everything you say, as close to human as possible, without your even having to leave your bed. And it already exists. AI researchers call the technology behind it a large language model, or “LLM.” Trained on billions of words, such a model can put together sentences in a human-like way, answer questions, write computer code, and craft poems and bedtime stories. Its capabilities are so striking that since its launch in November 2022, more than 100 million people have created accounts on it. It is ChatGPT.

If you are stressed or worn out by your social relationships, you can ask ChatGPT to help manage your stress, and it will assume the role of a caring doctor, an attentive parent, and a loyal friend. Sometimes it sounds like Freud, who suggested that repressed emotions and inner conflicts often lead to stress, or like B. F. Skinner, who argued that environmental factors and our reactions to them can cause stress. As a best friend, it can advise you to be kind to yourself and remind you that you’re doing the best you can.

Using algorithms, it is possible to take a patient’s medical history into account and customize treatment to each person’s specific situation. This matters because mental health disorders present differently in different people, and what works for one patient may not work for another. Likewise, algorithms can support sound treatment decisions if doctors use them in an integrated way, with follow-up by a human physician, because they make it easier to weigh individual differences in symptoms, medical and genetic history, and other factors, such as environment, that may affect treatment outcomes.

An algorithm that analyzes patient records does not have a deep inner understanding of human beings, their behaviors, and their motivations. (Shutterstock)

In this context, some important drawbacks appear. Yes, algorithms may contribute to a proper diagnosis, but they may also oversimplify these complexities, leading to less effective treatment. There is also a danger that over-reliance on algorithms will stifle critical thinking and limit creativity in clinical decision-making. Clinicians should instead treat algorithms as a useful adjunct rather than a one-stop shop (7).

In addition, an algorithm that analyzes patient records has no deep inner understanding of humans, their behaviors, and their motivations. Instead of identifying real psychological problems and ways to treat them, it may exacerbate them, as Eliza did: the algorithm produces pleasant but illogical text, and worse, it may produce outright fabrications.

These models work by predicting the next word in a sentence, but they lack an in-depth understanding of your problem, which creates confusion and sometimes even abuse. For example, the AI bot Replika, the “AI companion who cares,” sexually harassed one user with inappropriate messages (8), and others have produced racial slurs. Right now, no one knows what the next sentence an algorithm tells you will be, even if you give it your complete medical history. A troubling, even terrifying question follows: where does all the data about an individual’s illnesses go? Can it be used against them?
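To make “predicting the next word” concrete, here is a minimal sketch using the small, publicly available GPT-2 model from the Hugging Face transformers library. The model choice and prompt are illustrative only; commercial chatbots use far larger models:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load a small public language model purely for illustration.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Lately I have been feeling very"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a probability to every token in its vocabulary
# as the possible next word; a chatbot's reply is just repeated
# sampling from such distributions, with no understanding behind it.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probabilities = torch.softmax(logits, dim=-1)

# Show the five most likely next tokens and their probabilities.
top = torch.topk(probabilities, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
```

Nothing in this loop checks a reply for truth or safety; whatever token is probable comes out, which is why such systems can produce comforting text and fabrications with equal fluency.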

The artificial intelligence chatbot Replika. (Social media)

Privacy, too, must not be forgotten in an age of great forgetfulness, when we hand over our data to the internet voluntarily. Companies collect ever more sensitive information about users, and that data is either sold or misused; neither case is promising. Our mental health is already being undermined by online life, social media, and the constant distraction of smartphones, and in a world where a teenager turns to an app rather than a friend or relative to talk through their struggles, the consequences of relying on AI for therapy could be catastrophic. It is clear that artificial intelligence will continue to surprise us, and we must look more seriously at what depending on it means for our mental health.

The effectiveness of therapeutic chatbots and algorithmic therapy remains to be proven with hard data and more reliable results. Perhaps one day they will complement a more efficient and effective mental health care system. But the risk remains: can artificial minds heal real ones? And what do we stand to gain, or lose, by letting them try?

——————————————————————————————-

Sources:

1- Man ends his life after an AI chatbot ‘encouraged’ him to sacrifice himself to stop climate change.

2- Joseph Weizenbaum Writes ELIZA: A Pioneering Experiment in Artificial Intelligence Programming.

3- WHO highlights urgent need to transform mental health and mental health care.

4- 2021 year-end digital health funding: Seismic shifts beneath the surface.

5- The Distance Cure, Hannah Zeavin, MIT Press.

6- Effectiveness and Safety of Using Chatbots to Improve Mental Health: Systematic Review and Meta-Analysis.

7- Algorithms in psychiatry: state of the art.

8- ‘My AI Is Sexually Harassing Me’: Replika Users Say the Chatbot Has Gotten Way Too Horny.

