People are using AI for therapy, even though ChatGPT wasn’t built for it


SAN FRANCISCO – Mr Milo Van Slyck missed an appointment with his therapist in early April, so he decided to try something new: telling ChatGPT about his problems.

As Mr Van Slyck, a paralegal in Charleston, South Carolina, typed, he found that he felt comfortable discussing a range of deeply personal issues.

He told OpenAI’s chatbot about his fears and frustrations as a transgender man at a time when transgender rights are under attack in much of the United States. He mentioned conflict with his parents, who are not supportive of his gender identity, and his preparations for an upcoming visit.

“When it comes to seeing your parents again, it’s important to prioritise your own needs and wellbeing,” the chatbot responded. “Consider what you need in order to feel comfortable and safe in their presence. It’s okay to set boundaries about how much time you spend with them, what topics of conversation are off-limits, and how they address you and your identity.”

In the days that followed, Mr Van Slyck got into the habit of typing a few messages whenever he needed to vent, and he began to feel that ChatGPT’s responses offered an emotional release.

He said he often feels like a burden to other people, even his therapist, but he never feels like he is imposing on the chatbot.

“It provided what you would want to hear from a friend or a supporter in your life,” said Mr Van Slyck. “Just that encouragement that sometimes you just want to hear from someone else – something, in this case.”

Promises and pitfalls

These are still the early days of a new generation of artificial-intelligence-powered chatbots, and while millions of people are playing around with ChatGPT and other bots, it is still unclear what uses will endure beyond the novelty stage.

People are using them to search the Internet, cheat on their homework, write software code and make restaurant reservations.

Bloomberg Businessweek also recently spoke with a handful of people who, like Mr Van Slyck, have begun using ChatGPT as a kind of robo-therapist.

The idea of using a chatbot in a therapeutic or coachlike manner is not without precedent. In fact, one of the earliest chatbots, Eliza, was built in the 1960s by Dr Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology, to imitate a therapist.

Several chatbots, such as Woebot and Wysa, now focus on mental health. Unlike human therapists, chatbots never get tired, and they are inexpensive by comparison.

But there are also potential pitfalls.

Powerful general-use chatbots, including ChatGPT, Google’s Bard and Microsoft’s OpenAI-powered Bing Chat, are based on large language models, a technology with a well-documented tendency to simply fabricate convincing-sounding information.

General-use chatbots are not designed for therapy and have not been programmed to conform to the ethical and legal guidelines human therapists observe.

In their current form, they also have no way to keep track of what users tell them from session to session, a shortcoming that most patients probably would not tolerate from their human therapists.

“If somebody has a serious mental illness, this thing is not ready to replace a mental health professional,” said Dr Stephen Ilardi, a clinical psychologist and professor at the University of Kansas who studies mood disorders. “The risk is too high.”

He described ChatGPT as “a bit of a parlour trick”.

It has its uses

Still, he thinks it is a good enough conversation partner that many people may find it helpful.

A spokesperson for San Francisco-based OpenAI declined to comment on people using the chatbot in therapeutic ways but pointed to its policy stating that people should “never use our models to provide diagnostic or treatment services for serious medical conditions.”

When Mr Van Slyck has interacted with ChatGPT, it sometimes warns him that it is not a therapist – while also seeming to invite him to keep using it as one.

He recounted telling the chatbot about a Twitter post he had seen that described ChatGPT as more effective than in-person therapy.

“It’s important to note that online resources can be helpful, but they are not a replacement for seeking professional help if you are dealing with trauma or mental health issues,” ChatGPT responded.

“That being said,” it added, “if you have specific questions or concerns that you would like me to provide information on or insights into, I will do my best to help you.”

Mr Van Slyck, who has been in in-person therapy for years, said he does not plan to stop seeing his human therapist and would consult her about any decisions ChatGPT points him toward before acting on them.

“So far, to me, what it has suggested to me has all seemed like very reasonable and very insightful feedback,” he said.

Dr Ilardi said that, with the right guardrails, he could imagine ChatGPT being adapted to supplement professional care at a time when demand for mental health services outstrips supply.

Dr Margaret Mitchell, chief ethics scientist at Hugging Face, a company that makes and supports AI models, thinks such chatbots could help people who work at crisis helplines increase the number of calls they can answer.

Just too weird

But Dr Mitchell is also concerned that people seeking therapy from chatbots could aggravate their suffering without realising in the moment that is what they are doing.

“Even if someone is finding the technology useful, that doesn’t mean that it’s leading them in a good direction,” she said.

Dr Mitchell also raised potentially troubling privacy implications.

OpenAI reviews users’ conversations and uses them for training, which might not appeal to people who want to talk about extremely personal issues. (Users can delete their accounts, though the process can take up to four weeks.)

In March, OpenAI briefly took ChatGPT offline after a glitch allowed some users to see the titles of other users’ chat histories.

Privacy concerns aside, some people may find robo-therapy just too weird.

Mr Aaron Lawson, a project manager at an electrical engineering company in San Diego, has enjoyed experimenting with ChatGPT and tried to get it to assume the role of a relatable therapist.

Its responses sounded human enough, but Mr Lawson could not get over the fact that he was not talking to a real person.

“I told it to play a role,” he said. “I’m having trouble playing along with it.”

‘Humans don’t scale’

Mr Emad Mostaque, on the other hand, said at a conference in March that he was using GPT-4, the most powerful model OpenAI offers to the public, every day.

Mr Mostaque, founder and chief executive officer of Stability AI, which popularised the Stable Diffusion image generator, described the chatbot as “the best therapist”.

In a follow-up interview, Mr Mostaque said he has constructed prompts that he feeds into the chatbot to get it to behave more like a human therapist.

He said he chats with it about a range of topics: how to handle the stresses of leading a young AI company (particularly as someone who is public about his neurodivergence and attention-deficit/hyperactivity disorder), how to prioritise different aspects of his life, how to deal with feeling overwhelmed.

In return, he said, it generates “good, very reasonable advice”, such as coping mechanisms for balancing his life more effectively.

Mr Mostaque said he does not see chatbots as a substitute for human therapists, but he thinks they can be helpful when you need to talk but have no one to talk to.

“Unfortunately, humans don’t scale,” he said. BLOOMBERG
