Between 40% and 60% of people with substance use disorder (SUD) relapse after detox and rehabilitation. Providers are eyeing artificial intelligence (AI) as a key to help improve that statistic.
The technology, which is not without its risks, has already been used for years in behavioral health treatment. In the SUD field, machine-learning tech is being used to improve the connection between people struggling with SUD and care teams. SUD providers say the technology may add a human element to recovery support.
“We’ve had an issue of lack of insight into what’s going on between treatment and relapse for so long. We’ve thrown a lot of human capital and human hours at it, and haven’t moved the needle as much as we should,” Brett Talbot, chief clinical officer and co-founder of Videra, told Behavioral Health Business. “If we want to continue to address this problem, we have to leverage technology and automation to scale that interaction.”
Orem, Utah-based Videra Health operates an AI-powered video assessment platform used to triage patients at intake or between visits.
Recently, Videra worked with Discovery Behavioral Health, one of the largest mental health and SUD treatment providers in the nation, to launch a new AI-driven platform for SUD treatment called Discovery365.
The platform is designed to head off behavioral health relapses during the first year after treatment, a window in which more than 85% of patients return to drug use.
Discovery365, which is used for SUD treatment as well as other behavioral health conditions, is driven by technology created by Videra. More than 2,500 patients are currently using the platform across 140 Discovery locations.
Patients get a text, email or push notification from Discovery365 two days after being discharged from a Discovery care center. Most patients receive the outreach via text message containing a HIPAA-compliant link, where they are prompted to answer a few questions and then respond to an open-ended prompt by recording a video of themselves.
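As a rough sketch of how that kind of post-discharge outreach could be scheduled, the logic boils down to queuing a message a fixed interval after discharge. The function names, fields and URL below are hypothetical illustrations, not Videra's or Discovery's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

FOLLOW_UP_DELAY = timedelta(days=2)  # first check-in goes out two days after discharge

@dataclass
class Patient:
    id: str
    phone: str | None          # text message is the most common channel
    email: str | None
    discharged_at: datetime

def schedule_follow_up(patient: Patient) -> dict:
    """Queue a check-in message that points the patient to a secure assessment link."""
    send_at = patient.discharged_at + FOLLOW_UP_DELAY
    channel = "sms" if patient.phone else "email"
    return {
        "patient_id": patient.id,
        "channel": channel,
        "send_at": send_at,
        # Placeholder URL; in practice a unique, HIPAA-compliant link would be generated per patient.
        "body": f"How are you doing? Check in here: https://example.test/assessment/{patient.id}",
    }
```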
Then the AI “magic” comes in, according to Talbot.
“Patients don’t want to be interrogated or reduced down to just multiple choice questions,” Talbot said. “When we actually allow them to respond openly on video, they actually tell us a lot about what’s going on. They tell us things that we wouldn’t have thought to ask from week to week or month to month.”
The AI technology screens the videos patients submit and monitors signals such as speech patterns, language and movement. The resulting data, which can summarize trends for a specific population as well as for individual patients, is used to recognize the signs of a relapse or a patient who is struggling.
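To make the idea concrete, that kind of risk flag can be framed as comparing a new check-in against the patient's own baseline across the monitored signals. The features, weights and threshold below are simplified assumptions for illustration only; Videra has not published its model.

```python
from dataclasses import dataclass

@dataclass
class VideoFeatures:
    """Simplified stand-ins for the kinds of signals the platform reportedly monitors."""
    speech_rate_wpm: float       # speech pattern: words per minute
    negative_sentiment: float    # language: 0.0 (neutral) to 1.0 (highly negative)
    movement_variability: float  # movement: normalized deviation from typical motion

# Hypothetical weights and cutoff; a production system would learn these from labeled outcomes.
WEIGHTS = {"speech": 0.3, "language": 0.5, "movement": 0.2}
ALERT_THRESHOLD = 0.6

def relapse_risk_score(current: VideoFeatures, baseline: VideoFeatures) -> float:
    """Score how far a new check-in drifts from the patient's own baseline."""
    speech_shift = abs(current.speech_rate_wpm - baseline.speech_rate_wpm) / max(baseline.speech_rate_wpm, 1.0)
    language_shift = max(current.negative_sentiment - baseline.negative_sentiment, 0.0)
    movement_shift = max(current.movement_variability - baseline.movement_variability, 0.0)
    score = (WEIGHTS["speech"] * min(speech_shift, 1.0)
             + WEIGHTS["language"] * language_shift
             + WEIGHTS["movement"] * movement_shift)
    return min(score, 1.0)

def should_alert_care_team(score: float) -> bool:
    """Flag the care team only when the drift crosses the alert threshold."""
    return score >= ALERT_THRESHOLD
```

Scoring against a patient's own history, rather than a population average, mirrors the comparison Ruble describes below.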
“Technology can take a step back and reflect and say, actually, when you relapsed before you were talking, motioning, behaving in a very similar way,” Matthew Ruble, chief medical officer of Discovery Behavioral Health, told BHB. “That lets us know that you’re at risk.”
Discovery Behavioral Health has more than 165 clinics and treats eating disorders, mental health conditions and substance use disorder.
Human providers can then use the information gathered by the AI to reach out asynchronously with support and resources, intervening before a relapse occurs.
Current applications of AI
Behavioral health providers have already leveraged AI for data analysis, translations and generating summaries from patient notes.
Among the companies already utilizing AI is Marigold, a peer-support app for people in recovery that uses natural language processing, a type of AI.
Shrenik Jain, founder and CEO of Boston-based Marigold, describes the platform as “the best parts” of Alcoholics Anonymous and Reddit. Members participate in 24/7 support groups, moderated by a small number of certified peer specialists.
“The core concept is that peer support is how people get better and how people have always gotten better,” Jain told BHB. “Everybody gets better by building supportive relationships, being able to talk about their problems and building healthy habits.”
The AI element of Marigold keeps an eye on what members say to each other. The technology can alert staff when members use phrases or words that may indicate a problem, such as housing insecurity, depression or substance use, with the goal of reducing inpatient and rescue care.
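In its simplest form, that kind of message screening can be sketched as pattern matching against risk categories. The phrase lists below are invented examples; Marigold's actual natural language processing is presumably far richer than keyword matching.

```python
import re

# Invented phrase lists for illustration only.
RISK_PATTERNS = {
    "housing_insecurity": re.compile(r"\b(evicted|nowhere to stay|sleeping in my car)\b", re.IGNORECASE),
    "depression": re.compile(r"\b(hopeless|can'?t get out of bed|no point anymore)\b", re.IGNORECASE),
    "substance_use": re.compile(r"\b(relapsed|used again|bought a bottle)\b", re.IGNORECASE),
}

def flag_message(text: str) -> list[str]:
    """Return the risk categories a peer-support message appears to touch on."""
    return [label for label, pattern in RISK_PATTERNS.items() if pattern.search(text)]

# A human peer specialist is alerted only when at least one category is flagged.
flags = flag_message("Honestly I feel hopeless. I relapsed last night.")
if flags:
    print("Alert peer specialist:", flags)  # -> Alert peer specialist: ['depression', 'substance_use']
```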
“The reason the service is engaging is because people want to connect with each other,” Jain told BHB. “For us, the natural language processing, the AI that we use, was about [preserving] the peer-to-peer element of the groups that make them engaging while offering consistent moderation quality.”
Uses for AI in the SUD field will only continue to expand.
“The sky’s the limit,” Jain said.
Benefits
One key benefit of AI use in the behavioral health space is the reduction of human workload.
“This field is always under stress in terms of staffing,” Srinivasan Rajan, data governance manager at GAVS Technologies, told BHB. “It’s not replacing [clinicians], it’s about supporting them. If AI is effectively used, clinics could reduce their workload.”
GAVS Technologies is a global provider of IT services and solutions.
Marigold exemplifies this reduction in workload.
“We use NLP to automate a lot of the moderation,” Jain said. “There’s a half dozen peer coaches total at Marigold – and we’re in four states. The plan is to be in all 50 with a couple dozen [peer coaches], so it’s a tremendous force multiplier.”
The behavioral health industry has spent a lot of human hours working to gain insight into what happens between treatment and relapse, Talbot said. Despite that, relapse rates have not sufficiently improved.
Previously, Discovery would have needed a “massive human team and massive amounts of money” to do the work it is now doing with Discovery365. The ability to provide that level of care is “groundbreaking,” Talbot said.
Potential pitfalls
The groundbreaking technology is not without its risks. Security, privacy and bias are discussion topics for AI adopters, but patients do not seem too concerned.
For Marigold, new members are educated about how the platform works and sign a consent to services. Getting people to use the platform has not been an issue for the company once they see its potential benefits.
Discovery also said patients using Discovery365 have not voiced concerns about privacy or security. Users are educated about the platform’s safety and security, and they tend to find the technology interesting and beneficial.
“Security is always something that we discuss, and privacy as well,” Talbot said. “But when you think about it, people are willing to do a blood test. They will let somebody put a needle in them and draw blood from their body so that they have a better understanding of their health. So it’s not too far to think that they would be willing to record a video and let AI do some analytics if they know it could benefit them.”
One of the largest concerns about AI is bias.
The technology is trained to look for patterns, which can produce generalizations about larger populations that may not be accurate for an individual.
“There’s a lot of stereotyping and bias in terms of prediction algorithms,” Rajan said. “These entities have to ensure that they are very responsible in a system so that they can explain it back to the therapist and the counselors who are not technical. [AI experts should explain to clinicians] what this algorithm is all about, how this algorithm works, which parameters they use, what kind of bias it may have and how to handle these biases.”
Talbot agreed that bias is among the most significant concerns regarding AI.
“We don’t release anything on the platform for analytics that has not been tested for bias,” he said. “That is something we look at very specifically to make sure that we’re not making any descriptive, predictive or interpretation of AI metrics that aren’t accurate to somebody’s personal experience.”
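Talbot did not detail how that testing is done, but one common way to spot-check a predictive model for bias is to compare its error rates across demographic groups before release. The sketch below is a generic illustration of that idea on assumed data, not Videra's actual process.

```python
from collections import defaultdict

def false_positive_rate_by_group(records: list[tuple[str, bool, bool]]) -> dict[str, float]:
    """Compute the false-positive alert rate per demographic group.

    Each record is (group, alerted, relapsed). A large gap between groups'
    rates is one simple signal that a risk model may be treating them unevenly.
    """
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, alerted, relapsed in records:
        if not relapsed:                 # only patients who did not relapse can be false positives
            counts[group]["negatives"] += 1
            if alerted:
                counts[group]["fp"] += 1
    return {g: c["fp"] / c["negatives"] for g, c in counts.items() if c["negatives"]}

# Toy data: flag the model for review if group rates differ by more than, say, 5 percentage points.
rates = false_positive_rate_by_group([
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
])
if max(rates.values()) - min(rates.values()) > 0.05:
    print("Potential bias: review before release", rates)
```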
For Discovery, leveraging AI is worth the potential risks.
“It’s always an equation of risks and benefits,” Ruble said. “We’re in the middle of a suicide epidemic, in the middle of [an] opiate, fentanyl and overdose epidemic. So there would have to be some real risk to counter all the very real threats and death that’s going on right now.”
The human element
While behavioral health providers have leveraged AI to help reduce paperwork and identify patient risks, few have used it for clinical decision support. Humans are essential for counseling patients and making decisions regarding treatment plans.
“Don’t jump into it just by the buzzwords without understanding the role of the human therapist and other people in this field,” Rajan said. “Deep learning algorithms can be useful, but not in any way replacing the human.”
Industry insiders said AI technology should be used to support humans rather than replace them.
For Discovery and Videra, AI technology actually adds a human element to treatment because it does not simply reduce patients to data derived from a multiple-choice questionnaire.
“The value-based care movement has left out behavioral health, or more precisely, behavioral health has excluded itself,” Ruble said. “That’s because, overwhelmingly, nobody has objective measures of how people are doing. This technology allows us, in a humane and human way, to quantify some things that are usually pretty subjective.”