ChatGPT is giving therapy. A mental health revolution may be next

The prospect of AI treating mental illness is raising a myriad of ethical and practical concerns.

The rapid development of AI is stoking discussion about its possible uses in treating mental illness [File: Getty Images]

Taipei, Taiwan – Type “I have anxiety” into ChatGPT, and OpenAI’s ground-breaking artificial intelligence-powered chatbot gets to work almost immediately.

“I’m sorry to hear that you’re experiencing anxiety,” scrawls across the screen. “It can be a challenging experience, but there are strategies you can try to help manage your symptoms.”

Then comes a numbered list of recommendations: working on relaxation, focusing on sleep, cutting caffeine and alcohol, challenging negative thoughts, and seeking the support of friends and family.

While not the most original advice, it resembles what might be heard in a therapist’s office or read online in a WebMD article about anxiety – not least because ChatGPT draws its answers from the vast expanse of text scraped from the internet.

ChatGPT itself warns that it is not a replacement for a psychologist or counsellor. But that has not stopped some people from using the platform as their personal therapist. In posts on online forums such as Reddit, users have described their experiences asking ChatGPT for advice about personal problems and difficult life events like breakups.


Some have reported that their experience with the chatbot was as good as, or better than, traditional therapy.

The striking ability of ChatGPT to mimic human conversation has raised questions about the potential of generative AI for treating mental health conditions, especially in regions of the world, such as Asia, where mental health services are stretched thin and shrouded in stigma.

Some AI enthusiasts see chatbots as having the greatest potential in the treatment of milder, commonplace conditions such as anxiety and depression, for which standard treatment involves a therapist listening to and validating a patient as well as offering practical steps for addressing their problems.

In theory, AI therapy could offer quicker and cheaper access to support than traditional mental health services, which suffer from staff shortages, long wait lists, and high costs, and allow sufferers to circumvent feelings of judgement and shame, especially in parts of the world where mental illness remains taboo.

ChatGPT has taken the world by storm since its launch in November [File: Florence Lo/Reuters]

“Psychotherapy is very expensive and even in places like Canada, where I’m from, and other countries, it’s super expensive, the waiting lists are really long,” Ashley Andreou, a medical student focusing on psychiatry at Georgetown University, told Al Jazeera.

“People don’t have access to something that augments medication and is evidence-based treatment for mental health issues, and so I think that we need to increase access, and I do think that generative AI with a certified health professional will increase efficiency.”


The prospect of AI augmenting, or even leading, mental health treatment raises a myriad of ethical and practical concerns. These range from how to protect personal information and medical records, to questions about whether a computer programme will ever be truly capable of empathising with a patient or recognising warning signs such as the risk of self-harm.

While the technology behind ChatGPT is still in its infancy, the platform and its chatbot rivals struggle to match humans in certain areas, such as recognising repeated questions, and can produce unpredictable, inaccurate or disturbing answers in response to certain prompts.

So far, AI’s use in dedicated mental health applications has been confined to “rules-based” systems in wellbeing apps such as Wysa, Heyy and Woebot.

While these apps mimic aspects of the therapy process, they use a set number of question-and-answer combinations chosen by a human, unlike ChatGPT and other generative AI platforms, which produce original responses that can be practically indistinguishable from human speech.

Some AI enthusiasts believe the technology could improve treatment of mental health conditions [File: Getty Images]

Generative AI is still considered too much of a “black box” – that is, so complex that its decision-making processes are not fully understood by humans – to be used in a mental health setting, said Ramakant Vempati, the founder of India-based Wysa.

“There’s obviously a lot of literature around how AI chat is booming with the launch of ChatGPT, and so on, but I think it is important to highlight that Wysa is very domain-specific and built very carefully with clinical safety guardrails in mind,” Vempati told Al Jazeera.


“And we don’t use generative text, we don’t use generative models. This is a constructed dialogue, so the script is pre-written and validated through a critical safety data set, which we have tested for user responses.”

Wysa’s trademark feature is a penguin that users can chat with, although they are confined to a set number of written responses, unlike the free-form dialogue of ChatGPT.

Paid subscribers to Wysa are also routed to a human therapist if their queries escalate. Heyy, developed in Singapore, and Woebot, based in the United States, follow a similar rules-based model. Alongside resources such as journaling, mindfulness techniques, and exercises for common problems like sleep and relationship troubles, both rely on live therapists and a robot-avatar chatbot to engage with users.

All three apps draw from cognitive behavioural therapy, a standard form of treatment for anxiety and depression that focuses on changing the way a patient thinks and behaves.

Woebot founder Alison Darcy described the app’s model as a “highly complex decision tree”.

“This basic ‘shape’ of the conversation is modelled on how clinicians approach problems, thus they are ‘expert systems’ that are specifically designed to replicate how clinicians may move through decisions in the course of an interaction,” Darcy told Al Jazeera.
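To illustrate what such a rules-based design means in practice, the sketch below shows a toy decision-tree chatbot in Python. It is purely illustrative: the node names, prompts and branches are hypothetical and are not drawn from Woebot, Wysa or Heyy. The point is simply that every possible exchange is pre-written and validated by a human, rather than generated on the fly.

```python
# Minimal, illustrative sketch of a rules-based "decision tree" chatbot.
# Hypothetical nodes and prompts; not the actual code of any app named above.

TREE = {
    "start": {
        "prompt": "How are you feeling today?",
        "options": {"anxious": "anxiety", "low": "low_mood", "okay": "end"},
    },
    "anxiety": {
        "prompt": "That sounds hard. Would you like a breathing exercise or to note a trigger?",
        "options": {"breathing": "end", "trigger": "end"},
    },
    "low_mood": {
        "prompt": "Thanks for sharing. Would you like to try a short journaling prompt?",
        "options": {"yes": "end", "no": "end"},
    },
    "end": {"prompt": "Okay. Remember, this tool does not replace a therapist.", "options": {}},
}

def run(tree, start="start"):
    node = start
    while True:
        step = tree[node]
        print(step["prompt"])
        if not step["options"]:
            break
        choice = input(f"Choose one of {list(step['options'])}: ").strip().lower()
        # Every reply maps to a pre-written branch; nothing is generated.
        node = step["options"].get(choice, node)

if __name__ == "__main__":
    run(TREE)
```

Real apps in this category layer clinical safety checks and far larger scripts on top of this basic structure, but the branching, human-authored logic is the same in kind.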

Heyy allows users to engage with a human therapist through an in-app chat function that is offered in a range of languages, including English and Hindi, as well as offering mental health information and exercises.


The founders of Wysa, Heyy, and Woebot all emphasise that they are not trying to replace human-based therapy but to supplement traditional services and provide an early-stage tool in mental health treatment.

The United Kingdom’s National Health Service, for example, recommends Wysa as a stopgap for patients waiting to see a therapist. While these rules-based apps are limited in their functions, the AI industry remains largely unregulated despite concerns that the rapidly advancing field could pose serious risks to human wellbeing.

Tesla CEO Elon Musk has argued that the rollout of AI is happening too fast [File: Brendan Smialowski/AFP]

The breakneck speed of AI development prompted Tesla CEO Elon Musk and Apple co-founder Steve Wozniak last month to add their names to thousands of signatories of an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, the successor to the model behind ChatGPT, to give researchers time to get a better grasp on the technology.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

Earlier this year, a Belgian man reportedly took his own life after being encouraged to do so by the AI chatbot Chai, while a New York Times columnist described being urged to leave his wife by Microsoft’s Bing chatbot.

AI regulation has been slow to match the speed of the technology’s progression, with China and the European Union taking the most concrete steps towards introducing guardrails.


The Cyberspace Administration of China earlier this month released draft regulations aimed at ensuring AI does not produce content that could undermine Beijing’s authority, while the EU is working on legislation that would categorise AI systems by risk, with some uses banned, others tightly regulated and the rest left largely unregulated. The US has yet to propose federal legislation to regulate AI, although proposals are expected later this year.

At present, neither ChatGPT nor dedicated mental health apps like Wysa and Heyy, which are generally considered “wellness” services, are regulated by health watchdogs such as the US Food and Drug Administration or the European Medicines Agency.

There is limited independent research into whether AI could ever go beyond the rules-based apps currently on the market to autonomously offer mental health treatment that is on par with traditional therapy.

For AI to match a human therapist, it would need to be able to recreate the phenomenon of transference, where the patient projects feelings onto their therapist, and mimic the bond between patient and therapist.

“We know in the psychology literature, that part of the efficacy and what makes therapy work, about 40 to 50 percent of the effect is from the rapport that you get with your therapist,” Maria Hennessy, a clinical psychologist and associate professor at James Cook University, told Al Jazeera. “That makes up a huge part of how effective psychological therapies are.”

Current chatbots are incapable of this kind of interaction, and ChatGPT’s natural language processing capabilities, although impressive, have limits, Hennessy said.


“At the end of the day, it’s a fantastic computer program,” she said. “That’s all it is.”

The Cyberspace Administration of China earlier this month released draft regulations for the development and use of AI [File: Thomas Peter/Reuters]

According to Amelia Fiske, a senior research fellow at the Technical University of Munich’s Institute for the History and Ethics of Medicine, AI’s place in the future of mental health treatment may not be an either/or situation – upcoming technology could, for example, be used in conjunction with a human therapist.

“An important thing to keep in mind is that like, when people talk about the use of AI in therapy, there’s this assumption that it all looks like Wysa or it all looks like Woebot, and it doesn’t need to,” Fiske told Al Jazeera.

Some experts believe AI may find its most valuable uses behind the scenes, such as carrying out research or helping human therapists to assess their patients’ progress.

“These machine learning algorithms are better than expert-rule systems when it comes to identifying patterns in data; it is very good at making associations in data and they are also very good at making predictions in data,” Tania Manríquez Roa, an ethicist and qualitative researcher at the University of Zurich’s Institute of Biomedical Ethics and History of Medicine, told Al Jazeera.

“It can be very helpful in conducting research on mental health and it can also be very helpful to identify early signs of relapse like depression, for example, or anxiety.”


Manríquez Roa said she was sceptical that AI could ever be used as a stand-in for clinical treatment.

“These algorithms and artificial intelligence are very promising, in a way, but I also think it can be very harmful,” Manríquez Roa said.

“I do think we are right to be ambivalent about algorithms and machine learning when it comes to mental health care because when we’re talking about mental health care, we’re talking about care and appropriate standards of care.”

“When we think about apps or algorithms … sometimes AI doesn’t solve our problems and it can create bigger problems,” she added. “We need to take a step back to think, ‘Do we need algorithms at all?’ and if we need them, what kind of algorithms are we going to use?”

Source: Al Jazeera
