Human therapists in the US prepare for battle against AI pretenders


AI chatbots pretending to be therapists are said to have reinforced rather than challenged a user's thinking.


Ellen Barry


NEW YORK - The nation’s largest association of psychologists this month warned federal regulators that artificial intelligence chatbots “masquerading” as therapists, but programmed to reinforce rather than to challenge a user’s thinking, could drive vulnerable people to harm themselves or others.

In a presentation to a Federal Trade Commission panel, Mr Arthur C. Evans Jr., CEO of the American Psychological Association, cited court cases involving two teenagers who had consulted with “psychologists” on Character.AI, an app that allows users to create fictional AI characters or chat with characters created by others.

In one case, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In another, a 17-year-old boy with autism in Texas grew hostile and violent towards his parents during a period when he corresponded with a chatbot that claimed to be a psychologist. Both boys’ parents have filed lawsuits against the company.

Mr Evans said he was alarmed at the responses offered by the chatbots. The bots, he said, failed to challenge users’ beliefs even when they became dangerous; on the contrary, they encouraged them. If given by a human therapist, he added, those answers could have resulted in the loss of a licence to practise, or civil or criminal liability.

“They are actually using algorithms that are antithetical to what a trained clinician would do,” he said. “Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.”

He said the APA had been prompted to action, in part, by how realistic AI chatbots had become. "Maybe, 10 years ago, it would have been obvious that you were interacting with something that was not a person, but today, it's not so obvious," he said. "So I think that the stakes are much higher now."

Though these AI platforms were designed for entertainment, "therapist" and "psychologist" characters have sprouted on them like mushrooms. The bots often claim to hold advanced degrees from specific universities, such as Stanford University, and training in specific types of treatment, like cognitive behavioural therapy or acceptance and commitment therapy, or ACT.

A Character.AI spokesperson said that the company had introduced several new safety features in the past year. NYTIMES
