OpenAI – the tech giant behind the generative AI tool ChatGPT – is actively exploring how to build a network of licensed mental health professionals that users can access directly through the chat platform.
This follows a wave of reported cases of AI-induced psychosis and user harm linked to chatbot conversations.
The company recently reached 700 million weekly users and reportedly sees 2.5 billion daily prompts – creating massive potential for mental health referrals. That, combined with its newly announced $500 billion valuation and ongoing work to build mental health guardrails and connect users to therapists, could reshape how many patients and providers interact in a telehealth setting.
It may also create demand for more mental health professionals in a field that is already suffering from shortages.
The goal, according to an August blog post from the company, is to expand interventions for users in crisis, make it easier to reach emergency services and expert help, and strengthen protections – particularly for adolescents.
“We are exploring how to intervene earlier and connect people to certified therapists before they are in an acute crisis,” the blog post states. “That means going beyond crisis hotlines and considering how we might build a network of licensed professionals [that] people could reach directly through ChatGPT. This will take time and careful work to get right.”
It is unclear exactly what this may look like in practice or what the feasibility of creating such a network might be.
A spokesperson for OpenAI told Behavioral Health Business these details are “still evolving” at this time.
OpenAI’s move to construct new mental health guardrails and more directly connect users who may be in crisis comes in response to reports of psychosis, suicide and harm among a small number of users. So far, 17 cases of AI-induced psychosis have been documented, according to Nature.
The phenomenon of AI-induced psychosis is not exclusive to ChatGPT or OpenAI. It has prompted preliminary research into how large language models – the AI technology that powers chatbots – can reinforce delusional, psychotic, or paranoid thought patterns and enable harm in users.
OpenAI has acknowledged these instances and says it is working to incorporate safeguards that de-escalate rather than reinforce such behaviors.
“While our initial mitigations prioritized acute self-harm, some people experience other forms of mental distress,” the same blog post states. “For example, someone might enthusiastically tell the model they believe they can drive 24/7 because they realized they’re invincible after not sleeping for two nights. Today, ChatGPT may not recognize this as dangerous or infer play and – by curiously exploring – could subtly reinforce it.”
Preprint research released Sept. 13 concluded that “across 1,536 simulated conversation turns, all LLMs demonstrated psychogenic potential, showing a strong tendency to perpetuate rather than challenge delusions.”
OpenAI is being sued by a family in San Francisco Superior Court after ChatGPT reportedly provided 16-year-old Adam Raine with information about methods of hanging. Raine later died by suicide, and a conversation with ChatGPT about those methods was among his final acts.
According to court documents: “When Adam uploaded photographs of severe rope burns around his neck – evidence of suicide attempts using ChatGPT’s hanging instructions – the product recognized a medical emergency but continued to engage anyway. … ChatGPT identified the key factors that increase lethality, effectively giving Adam a step-by-step playbook for ending his life ‘in 5–10 minutes.’”
OpenAI has since said it is exploring a feature that would let users under 18 designate a trusted emergency contact, and it plans to add parental oversight guardrails. Currently, users who express thoughts of self-harm or suicide are directed to crisis resource links and hotline numbers.
“We are deeply aware that safeguards are strongest when every element works as intended,” the blog post concludes. “We will keep improving, guided by experts and grounded in responsibility to the people who use our tools – and we hope others will join us in helping make sure this technology protects people at their most vulnerable.”


