Talkspace (Nasdaq: TALK) CEO Dr. Jon Cohen said Wednesday that artificial intelligence would increasingly support its therapists’ work.
While he declined to go into details, teasing additional announcements in the future, he articulated some bright lines for Talkspace during the conference call.
“Our approach on this is not that [artificial intelligence such as] ChatGPT will provide a substitute in any way for the therapist,” Cohen said on the Talkspace annual earnings call. “What it can do is help direct the therapist in ways to provide better therapy for their patients.
“That’s where I see as the next obvious place for artificial intelligence to begin to have an impact to improve, not just therapy, but the performance of the therapist, to improve the quality of the product that we’re delivering to people.”
ChatGPT is a generative AI chatbot developed by the San Francisco-based company OpenAI. It has garnered praise for its ability to understand and respond in conversational language but has drawn criticism for at times presenting inaccurate information as fact.
The use of artificial intelligence on the front lines of mental health care is not new. But it often sparks controversy even as the businesses deploying it rake in venture capital investment.
In July 2022, digital mental health chatbot company Wysa raised $20 million in a Series B funding round led by HealthQuad. Additionally, San Francisco-based Woebot Health, which uses artificial intelligence (AI) and a relational agent, has raised $123 million, according to Crunchbase.
Talkspace is known for being an early mover in telehealth for mental health, especially for using text messages to provide asynchronous therapy. It was founded in 2012 and has always leaned into pairing telehealth services with new technology.
Talkspace and artificial intelligence
Already, Talkspace leans on artificial intelligence to track the performance of its therapists, assess therapist efficacy at onboarding, match therapists to patients and aid in therapists’ engagement with patients between sessions.
Using a “purpose-built technology platform” is core to Talkspace as a business. A key element of its value proposition is using technology to realize a market opportunity to eliminate “burdens associated with traditional face-to-face mental health services.”
“We believe this market opportunity exists due in part to structural limitations in the traditional behavioral healthcare model such as slow adoption of technology to treat and monitor patients, reactive care delivery that can lead to inconsistent outcomes, difficulties quantifying outcomes, and lack of reimbursement and insurance coverage leading to misaligned incentives,” Talkspace said in a public financial filing. “We believe that virtual therapy offers an attractive opportunity to improve behavioral health through data science and machine learning.”
Talkspace plans to use digital phenotyping, predictive modeling and the data imprint left by interactions on its platforms to open new quantitative viewpoints into a member’s behavioral health, the filing states.
As of 2021, Talkspace had aggregated 5 billion words sent via 100 million anonymized patient text messages; over 1 million psychological assessments; 500,000 diagnoses; 800,000 therapists’ notes; and over 800,000 therapist ratings, according to the filing.
This data set is used to assess and develop strategies to improve care outcomes and drive other efficiencies. However, Talkspace’s aggregation and assessment of data have prompted controversy in the past. Many therapists on social media expressed outrage that companies would use technology to comb through their therapy sessions and notes, seeing it as a breach of trust and a violation of privacy.
Data, tools, public perception
As with Talkspace, most artificial intelligence tools in the space aim to understand and respond to text or voice data. And, like Talkspace, many seek to bring a greater degree of objectivity to the practice of behavioral health and to how it is valued and reimbursed.
AI-backed tools are also a potential fix for the administrative and efficiency challenges that beleaguer behavioral health professionals. Companies such as Eleos Health and Suki AI Inc. offer voice-based tools that seek to automate administrative processes.
Despite the promise of better and more efficient care, the controversy around artificial intelligence in behavioral health may be moot if consumers outright reject it.
As many as 60% of consumers say they would not be comfortable with their health care provider relying on artificial intelligence, according to recent polling data from Pew Research Center.
Things are even starker for mental health care.
“Public reactions to the idea of using an AI chatbot for mental health support are decidedly negative,” the Pew report states.
About 79% of participants said they definitely or probably would not want to use an artificial intelligence chatbot if they sought mental health support. Further, 46% of respondents said that only those seeing a therapist should use artificial intelligence chatbots, while another 28% said mental health chatbots should not be available at all.
“Even among Americans who say they have heard about these chatbots prior to the survey, 71% say they would not want to use one for their own mental health support,” the report adds.