‘Pandora’s Box Is Open’: The Future of the Behavioral Health Industry Includes AI-Powered Chatbots

A “pocket penguin” tells you that you are loved, cherished and cared for. This isn’t a children’s TV show; it’s a mental health chatbot.

AI-enabled chatbots, programs designed to simulate conversations with human users, are becoming increasingly common in the behavioral health marketplace as a way to address the insufficient supply of mental health clinicians.

Mental health chatbots can deliver cognitive behavioral therapy (CBT) and other therapy lessons through direct messages powered by artificial intelligence (AI), and they can also handle behavioral health triage for health care providers.

These chatbots, which offer benefits to both providers and patients, will be a part of the behavioral health industry for the foreseeable future, industry insiders told Behavioral Health Business. The “Pandora’s box” of mental health chatbots has been opened.

One mental health wellness platform, Wysa, features a penguin avatar in its app’s interface. The chatbot asks how you are and listens to open-ended responses. It provides guided meditations, breathing exercises and CBT lessons. It also tells jokes on command.

Bangalore, Karnataka, India-based Wysa has served over 5 million people across 95 countries. The penguin has engaged in 500 million conversations with users.

In 2022, Wysa raised $20 million in a Series B funding round, bringing its total raise to just under $30 million.

It works with organizations that include Harvard Medical School, L’Oreal, Aetna and Bosch to provide its service to their employees. It has also partnered with England’s National Health Service (NHS) as a triage tool. Individuals can also download Wysa directly from their smartphone’s app store, with options to use the chatbot for free or pay for access to a human coach.

The chatbot is designed to be the first layer of mental health prevention, according to Chaitali Sinha, senior vice president of health care and clinical development at Wysa.

“Folks who will usually not get any support at all are who we primarily serve,” Sinha said.

A conversation with the mental health chatbot Wysa. (Image: Wysa)

Heavy users of Wysa’s app showed significantly greater average improvement in self-reported depression scores than less frequent users, according to a study published in JMIR mHealth and uHealth.

Wysa is one of several mental health chatbots securing impressive funding and touting compelling data.

Limbic, a London, England-based company, partners with the NHS and has helped more than 250,000 NHS patients enter behavioral health care. The company plans to expand to the U.S. in 2024.

Limbic Access is the company’s web chatbot, which lives on care providers’ websites or can be embedded in a native app. The bot handles intake and triage and performs an initial mental health screening in a “highly engaging way,” according to Limbic’s founder and CEO, Ross Harper.

“It is not a replacement for a web form,” Harper said. “It’s bringing something additional to the table and has a material and demonstrable impact on clinical outcomes and reduces the cost of treatment. It can reduce the cost of treatment because … a clinician can spend less time in clinical contact and still yield a great clinical assessment which goes on to yield improvements in recovery.”

A conversation with Limbic AI, an AI therapy assistant. (Image: Limbic AI)

The product has led to improvements in engagement and clinical recovery while reducing churn and misdiagnosis.

Benefits of ‘pocket allies’

Chatbots offer significant benefits to patients and the behavioral health industry.

The apps are available 24/7, meaning people can experience a human-like interaction at any point.

Chatbots can also increase marginalized populations’ access to behavioral health care.

A newly released study published in Nature Medicine found that, with the Limbic Access AI chatbot, mental health support access increased 179% among non-binary patients and sign-ups increased 29% among ethnic minority groups.

“The fact that it’s an AI meant that it was easier for them to verbalize sensitive thoughts and feelings,” Harper said. “Patients are willing and very ready to open up to a non-judgemental AI.”

Marginalized groups are also more likely to open up to the chatbot because the technology has no race or gender, which can ease the problem of patients not feeling represented by their clinicians. Additionally, the Limbic chatbot works proactively, making it clear to patients that they can benefit from the support options available.

Other chatbots have experienced similar results.

“The truth is that some of the hardest conversations are easier to have with a bot,” Brad Gescheider, chief commercial officer of Woebot, said.

San Francisco, California-based Woebot Health partners both with employers, providing their employees access to its chatbot (fronted by a friendly-looking robot avatar), and with health systems, integrating into existing clinical workflows. Woebot raised $9.5 million in 2022, bringing its total raise to $123.3 million.

A conversation with Woebot, a virtual AI-enabled chatbot. (Image: Woebot Health)

A study found that the Woebot users who experienced the greatest declines in depressive symptoms and stress were more likely to be non-Hispanic Black, male-identifying, uninsured younger people with higher levels of education.

Along with helping marginalized populations, chatbots mitigate the problems associated with the mental health workforce crisis.

“The single biggest existential risk is just doing nothing right now because the vast majority of patients are unable to access care,” Gescheider said.

Training more therapists is not a viable solution because the supply and demand gap is so massive, Sinha said. Even those who do have access to care, for instance through the NHS, often face long wait times.

“There is value in creating a model that is more like a stepped care model,” Sinha said. “We need effective ways of triage so that people can be seen at the right time. And if technology can take us there or help us with that, then that’s a good use of something powerful.”
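To make the stepped-care idea concrete, here is a minimal triage sketch in Python; the PHQ-9 cutoffs and care tiers are illustrative assumptions, not Wysa’s, Limbic’s or any health system’s actual clinical protocol.

```python
# Minimal sketch of a stepped-care triage step, as an illustration of the idea
# Sinha describes. The PHQ-9 cutoffs and care tiers below are illustrative
# assumptions, not any vendor's or health system's actual clinical protocol.

def triage(phq9_score: int, in_crisis: bool) -> str:
    """Route a person to the lowest-intensity tier of support that fits."""
    if in_crisis:
        return "crisis services"                      # immediate human escalation
    if phq9_score < 10:
        return "self-guided chatbot"                  # low-intensity digital support
    if phq9_score < 20:
        return "human coach or guided digital program"
    return "clinician-led therapy"                    # highest-intensity step


print(triage(phq9_score=7, in_crisis=False))    # self-guided chatbot
print(triage(phq9_score=22, in_crisis=False))   # clinician-led therapy
```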

Chatbot risks

Digital health care tools often carry privacy-related risks, a concern that Sinha said is enough to keep her up at night.

Wysa mitigates these concerns by keeping most of its users anonymous. When downloaded from the Apple App Store, the app includes no prompts to enter any identifying information.

There is also the danger that the app could miss clinical risks its users communicate.

Most apps provide emergency resources, such as listing hotlines. But not all bots can detect if a person is in crisis, according to a study published in JMIR mHealth and uHealth.

The study observed 10 mental health chatbot apps and analyzed more than 6,000 reviews of the apps.

“None of the chatbots have any clever algorithmic models for detecting emergency scenarios,” the study’s authors said. “It is up to users to inform chatbots that they are experiencing a crisis.”

Some chatbots can detect crises through keywords, such as “suicide,” according to the study. These programs are in the early stages of development, and sometimes, people who just want to talk about their feelings get referred to crisis hotlines “because of a lack of intelligent comprehension.”
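The study’s criticism is easier to see with a toy example. The short sketch below, which assumes a hypothetical phrase list and escalation message rather than any vendor’s real implementation, shows how plain keyword matching flags any message containing a trigger word, including ones from people who only want to talk about their feelings.

```python
# Toy example of the keyword-based crisis detection the study describes.
# The phrase list and escalation message are illustrative assumptions, not
# any particular chatbot's actual logic.
CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

def detect_crisis(message: str) -> bool:
    """Flag a message if it contains any crisis keyword."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

def respond(message: str) -> str:
    if detect_crisis(message):
        # Escalate: surface a hotline instead of continuing the conversation.
        return "It sounds like you may be in crisis. Please call or text 988."
    return "Tell me more about how you're feeling."

# The study's criticism in miniature: substring matching cannot read intent,
# so someone who only wants to talk about their feelings still gets routed
# to a hotline referral.
print(respond("My uncle died by suicide years ago and I just want to talk about it"))
```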

Wysa takes its own approach to crisis intervention, allowing users to create a safety plan that includes emergency contacts, places where they feel safe and people they can reach out to.
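As a rough illustration, such a safety plan can be thought of as a simple data structure; the field names and sample values below are hypothetical and do not reflect Wysa’s actual design.

```python
# Hypothetical sketch of a user-authored safety plan like the one described
# above. The field names and sample values are illustrative, not Wysa's schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SafetyPlan:
    emergency_contacts: List[str] = field(default_factory=list)  # e.g. crisis lines
    safe_places: List[str] = field(default_factory=list)         # places the user feels safe
    support_people: List[str] = field(default_factory=list)      # people the user can reach out to

plan = SafetyPlan(
    emergency_contacts=["988 Suicide & Crisis Lifeline"],
    safe_places=["the local library", "my sister's apartment"],
    support_people=["my sister", "my therapist"],
)
print(plan.support_people)
```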

“It is very dangerous not to have clinical supervision on each and every single response that a chatbot is going to provide back to a user,” Gescheider said. “There are clearly some bad actors in the space that are leveraging technology without that level of supervision.”

“If you are not HIPAA compliant, if you are not sort of going through the extra design controls and adhering to how proper build-out of medical grade software ought to be built, then I think you are putting patients at risk,” he continued.

Another chatbot-related risk involves just how helpful the technology can feel.

Chatbot companies actively seek to make their bots friendly and human-like. Woebot seeks to establish a “lasting working alliance with users akin to the bond formed between humans.”

“Woebot does a fantastic job of disarming you, whether that’s with humor or asking the right questions,” Gescheider said.

But sometimes, the connection with these pocket pals goes too far. The JMIR mHealth and uHealth study found that some users developed unhealthy attachments to their chatbots, sometimes expressing a preference for a bot over their own support systems.

“This app has treated me more like a person than my family has ever done,” read one review included in the study.

This level of attachment may be unhealthy, according to the study’s first author, Romael Haque, a PhD candidate and graduate researcher at Marquette University.

“This human-like interaction feels good but must be designed carefully,” he said.

To mitigate problems with over-attachment, the study recommended that bots encourage users to seek nontechnical, human forms of mental health support.

Future of chatbots

There is potential for chatbots to be widely implemented as both triage tools and pocket companions.

“Large public health implementations of this could potentially exist,” Sinha said. “It’s something that could easily cross countries. … It could help especially with care in countries where there is a single doctor or a single psychologist in the entire country. Those are things that I definitely dream about and hope for.”

Haque hopes that the tools become better regulated before wide adoption.

Current FDA regulations are not very stringent, he said, and the tools themselves need more proof of concept.

“They say it’s evidence-based, but it’s not really truly evidence-based,” Haque said. “They just provide evidence-based therapy. So for example, meditation is an evidence-based therapy. So they told you to meditate for 15 minutes [and therefore] wrap themselves like they are evidence-based.”

Of the 10 chatbots studied, Haque said Wysa and Woebot are exceptions to this rule and have both been through extensive clinical trials.

Overall, Haque said he would recommend a chatbot to a friend, as long as the friend did not have critical mental health needs.

“You’re dealing with people with mental concerns,” he said. “So they are really vulnerable.”

Companies featured in this article:

Limbic, Woebot Health, Wysa