BHB VALUE: Harnessing AI in Behavioral Health: Personalization, Efficiency, and the Future

This article is sponsored by Sunwave. It is based on a discussion with Elie Levy, founder and CEO of Sunwave, and Dr. Brian Wind, chief clinical officer of Regard Recovery/JourneyPure. The discussion took place on March 12, 2024, during the BHB Value Conference. The article below has been edited for length and clarity.

Behavioral Health Business: How has AI been used for personalized patient care?

Dr. Brian Wind: In my opinion, one of the greatest opportunities for AI centers around treatment planning, by way of the clinical data we gather as part of our front-end patient assessment, typically via something like a biopsychosocial model. There's data that can be gleaned from that, which AI can then use to formulate a master problem list from which treatment plans can be developed. Those treatment plans can also be updated based on good data from other sources along the clinical pathway. I think it can also be applied to selecting certain types of interventions and modalities based on good data. I believe that's exactly what Elie has done in developing his MARA tool.


Elie Levy: We released a product back in October called MARA. At this point, it is helping clinicians create notes. At this stage, I believe we are close to 10,000 notes generated per week, which is pretty impressive. We have a new version that we plan to release later this year, which actually targets what Dr. Wind was talking about a moment ago.

One of the key characteristics of our platform is that it is a true platform. It's one system that drives the entire spectrum of treatment: a 360-degree view of the patient journey, from the initial phone call all the way through treatment, billing and collections, and even aftercare. What that allows us to do is have a really good set of data. If you read anything about AI today, the key to a successful AI project is good data. We have a pretty extensive database that we have been collecting for the last 10 years. We can really leverage that to do amazing things, including the creation of personalized treatment plans for patients.

You can actually interact with our AI agent in a chat format. You can type, "Can you create a treatment plan for the patient that I am looking at right now?" and it generates a treatment plan. The clinicians are then responsible for reading it, making sure it makes sense, and making whatever modifications they want before they go on with it. This is just one of the things we are doing right now. It's a very interesting and exciting technology.
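To make that workflow concrete, here is a minimal sketch of what such a chat-style request could look like from the clinician-facing side. MARA's actual interface is not public, so the `ask_agent` function and the endpoint URL below are hypothetical stand-ins, not Sunwave's API.

```python
# Hypothetical sketch of a chat request to an embedded AI agent,
# scoped to a single patient record. Endpoint and schema are assumptions.
import json
import urllib.request

def ask_agent(message: str, patient_id: str) -> str:
    """Send one chat message to the agent and return its reply text."""
    payload = json.dumps({"patient_id": patient_id, "message": message}).encode()
    req = urllib.request.Request(
        "https://example-emr.internal/api/agent/chat",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["reply"]

# The draft is a starting point; the clinician reviews and edits it
# before it becomes part of the chart.
draft_plan = ask_agent("Can you create a treatment plan for this patient?", "p-1042")
print(draft_plan)
```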


BHB: With behavioral health, I think so much happens on a case-by-case basis. No two patients are suffering from the same problems. How does AI handle the varying complexities of individual patient profiles?

Dr. Wind: Good data in equals good data out, to keep it positive. There's the human condition involved, even in the use of AI, right? Good data has to go in for us to get good data out. That includes the clinical data that's collected, whether it's on the medical side, the history and physical and psychiatric consultation, or the clinical data collected via a biopsychosocial assessment. All of that data has to be good data, individualized to that patient, in order for good data to come out in a way that's meaningful for us in providing quality care.

Levy: There are different types of AI agents, but I'm talking specifically about the solution we have. As I was saying before, it sits inside the platform. This is not a plug-in. This is not a system integrated from the outside, where you need to build APIs and manage the data flow. If I were on the clinical side implementing a product like this, that's one of the things I would look into. The AI is embedded within the platform; it's part of it. It lives with it. It has access to all of the data that has been collected.

As the patient goes through treatment, the patient interacts with a lot of different people within the facility. The BHT team is the first one taking the patient in. Sometimes the nursing team works in conjunction with the BHTs on intake and admissions. Then you have the clinical team, the medical team, the nursing team, and the case managers. That's a lot of different perspectives on what is going on with the patient. Consolidating all of that data is extremely difficult. Without this type of tool, it takes an enormous amount of time to really understand all of the aspects of what is really going on with the patient, because there are different perspectives; each specialty will have a different view.

AI has the ability to consolidate all of those different views into what we are calling behavioral integration. That's the term we use for the unification of all of these different perspectives into a holistic view of what is really going on with the patient. I think this is going to drive value in terms of the service, and it's going to drive better outcomes, because we will have a better understanding of the situation of each specific individual.

Every person is different. Yes, somebody can have PTSD, but it's going to look different from another person's PTSD. Different factors influence the triggers that put that patient in the situation they are in. Being able to unify that into a holistic view of what is going on is, I think, going to be crucial for us to deliver better care.

BHB: There have been some crazy headlines about AI. What are some of the safeguards in place to ensure that AI-driven personalization respects patient privacy and consent?

Dr. Wind: I think compliance and adherence to regulatory standards, both existing ones and ones yet to be developed, quite frankly, have to be of the highest degree. There have to be systems of checks and balances in place to ensure accountability. Interestingly, I was recently part of a conversation in the senior living space where they were talking about using AI to document ADLs for residents. That sounds a little bit scary, doesn't it?

If your grandmother is in a nursing home, there's a camera in her room that documents that she's brushing her teeth and getting dressed, all of those activities of daily living. Lo and behold, that conversation has evolved over into the behavioral health care space as well. You might get questions from stakeholders who say, "Wait, you want to put a camera in my grandmother's room at the nursing home? Who has access to that data? What are you doing with it?" At the same time, AI can then, in turn, document those activities, and I think that's an incredible thing. That's one of the workflow pieces we'll talk about as well: it streamlines the documentation of things that are critical not only for patient care but for reimbursement.

Levy: One of the most important things I was concerned about when I was starting Sunwave 10 years ago was exactly the topic you are asking about: security. We have privacy obligations. Healthcare is an industry that is heavily regulated. I'm sure everybody has heard the word HIPAA; that shouldn't be a surprise to anyone. It's extremely important. We want to make sure that all of the data of every individual is properly protected, and that only the people who need access to that data to complete their functions actually have access to it.

It is extremely difficult to control that in general terms, even without technology. Before we had EMRs, there were a lot of different mechanisms we used to protect this data. As EMRs have developed, there are a lot of different things we need to care about. One of the most important is that the data has to be encrypted in all of its forms. It doesn't matter whether it's sitting on a hard drive, to simplify the technical terminology, or flying over the network. At all times, it needs to be encrypted, meaning it is encoded so that if a hacker were to snoop on the wire, they could not actually see what is there.
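As a concrete illustration of the "encrypted in all of its forms" principle, here is a minimal sketch of encryption at rest using Python's widely used cryptography package. This is a generic illustration of the idea, not Sunwave's implementation; in transit, the same property is typically provided by TLS.

```python
# Minimal sketch of encryption at rest (pip install cryptography).
# Illustrative only; not Sunwave's actual stack.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice kept in a key manager, never next to the data
cipher = Fernet(key)

note = b"Patient attended group and engaged actively."
ciphertext = cipher.encrypt(note)  # what actually lands on the hard drive

# Anyone snooping on the disk (or the wire) sees only ciphertext;
# only a holder of the key can recover the original note.
assert cipher.decrypt(ciphertext) == note
```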

When you implement an AI project, the AI agent, depending on where it lives, is going to have access to this data. One of the reasons I thought it would be super important for us to have our own AI solution in-house was precisely that. We don't have the challenge of needing a special agreement with a third party that hosts its own AI model and runs its own AI agent, so you don't have to worry about the data flowing into some other organization. From the clinician's or behavioral health practitioner's perspective, it's very important that the vendor you are dealing with follows these regulations.

One thing we are doing is building what we call a sandbox, and that sandbox is where the AI agent lives. We are doing everything we can to keep the AI agent constrained to that sandbox. That allows us to partition the data of our clients, so that when the AI agent is interacting with one specific client of ours, it only has access to the sandbox where that client's data is supposed to be.
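One simple way to picture that partitioning is an access layer that is bound to a single client (tenant) before the agent ever sees any data. The sketch below, with a hypothetical `TenantSandbox` class over an in-memory store, is an assumed illustration of the general pattern, not Sunwave's architecture.

```python
# Hypothetical sketch of per-client data partitioning for an AI agent.
class TenantSandbox:
    """Scopes every data lookup to a single client (tenant)."""

    def __init__(self, tenant_id: str, store: dict[str, dict]):
        self._tenant_id = tenant_id
        self._store = store

    def fetch_records(self, patient_id: str) -> dict:
        # The agent can only reach records under its own tenant_id;
        # requests for another tenant's data fail by construction.
        tenant_data = self._store.get(self._tenant_id, {})
        if patient_id not in tenant_data:
            raise PermissionError("record outside this sandbox")
        return tenant_data[patient_id]

# The agent serving client A receives a sandbox bound to client A only.
store = {"client_a": {"p1": {"notes": []}}, "client_b": {"p9": {"notes": []}}}
agent_view = TenantSandbox("client_a", store)
agent_view.fetch_records("p1")       # allowed
# agent_view.fetch_records("p9")     # raises PermissionError
```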

BHB: Can you share how AI has streamlined operational processes in your organization? That's probably a Dr. Wind question. In particular, could you touch on reducing administrative burdens on actual clinicians?

Dr. Wind: The documentation piece is so key in this conversation. In my view, in our space, there are three big links in the chain of a successful operation. There is service delivery. There is documentation of that service delivery. And there is the third link, revenue cycle management, as we call it: the coding, billing, and collecting that lets us get paid for what we do. All three links of that chain have to be in place in a meaningful way.

That chain is often broken at the second link, the documentation piece. We have an inordinate amount of quality assurance going on in the company I'm with, for example, multiple layers of it, with a team of a couple dozen people whose entire job is that and that alone. We do a great job with the service delivery part, but the documentation part, whether in quantity or quality, is often lacking. I think AI can provide a huge amount of assistance there, so that the third link can be in place and we can get paid for what we do. To me, the low-hanging fruit in terms of opportunities is around documentation.

Levy: I agree with that 100%. One of the first use cases we worked on was helping with compliance. Some clients of ours came to us and said, "We are having a major problem here. We have really good practitioners who are doing unbelievable work engaging with patients and getting them better, but they are not good writers."

Picture a person running a group session. This guy is an unbelievable group facilitator, but he is running groups of 12 patients, three to five groups a day. At the end of the day, he needs to create 50 or 60 notes. That's not reasonable. After so many hours of intensive, engaged group facilitation, how do you expect that person to write an extensive, well-written note for every patient?

What our clients told us is that these people have a big Word document with five or six paragraphs, and they are just copy-pasting a paragraph from one group's note to another. Then the insurance company asks for medical documentation: "I want you to send me the group notes for these 40 patients over this period of time," and boom, there is the copy-paste. The insurance company now has an excuse to stop paying the facility. They are going to do an audit. In the worst-case scenario, and this is a true story that I'm sure some of you will relate to, there is a clawback of $3.8 million, and that can be devastating.

That's going to stop the operations of the facility. Instead of focusing on helping more people, we are now stuck fighting with an insurance company to actually get reimbursed and be able to get paid.

The bottom line is that this copy-paste issue is a major problem a lot of us have been facing, and the AI agent we built was specifically built to target it. We collect the information that clinicians select on their notes, and we help write a powerful, well-written, unique, personalized note for every patient. That solves the problem, right? Even when clinicians are not copy-pasting, they are often writing one-liners: "The patient attended the group." When you send that to an insurance company, what do you think they are going to do? We put a lot of work into making sure we are actually fixing this compliance issue, and it has been very successful.
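As a rough illustration of turning "the information that clinicians select" into a unique, grounded note, the sketch below assembles a prompt for a note-writing model from structured selections. The field names and the commented-out `generate()` call are hypothetical assumptions, not Sunwave's schema or API.

```python
# Hypothetical sketch: structured clinician selections become the grounding
# for a generated, patient-specific note. Field names are illustrative only.
selections = {
    "session_type": "group",
    "topic": "relapse prevention",
    "engagement": "active; shared twice with the group",
    "affect": "calm, future-oriented",
    "plan": "complete trigger log before next session",
}

prompt = (
    "Write a unique, compliant group-therapy progress note based only on "
    "these clinician-selected observations:\n"
    + "\n".join(f"- {field}: {value}" for field, value in selections.items())
)

print(prompt)
# draft = generate(prompt)  # hypothetical model call; the clinician reviews
#                           # and edits the draft before signing
```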

Like I was saying before, we are close to generating more than 10,000 notes per week right now. I'm very proud of that. The other thing in terms of streamlining operations, which is the next thing we are targeting, is the utilization review process. When you are on the phone with an insurance company, they are trying to find an excuse not to extend the patient's treatment. Look, we have a doctor here. The doctor is looking at the patient. The patient is in a taper. The patient needs to continue treatment. But we have a person on the UR team making 100 phone calls every hour trying to get that treatment extended, and it's difficult. They put up every block you can imagine.

The next thing we are targeting is specifically that part: the concurrent reviews. For this, we are using what I was talking about before, behavioral integration, which consolidates the perspectives of multiple different people in the facility. The UR representative can just chat with MARA, our AI agent, ask the questions the insurance company is asking over the phone, and MARA will provide the answer. Again, I wish I could connect to the TV and show you, but we had a logistical issue and could not connect, which is fine.

BHB: The behavioral health industry has ever-evolving regulations. How can AI assist in adhering to these regulations?

Dr. Wind: One of the examples would be related to documentation, but I want to throw in a point about regulatory requirements, and there's a cautionary note with this. Each of us as a human being has a diagnosis list, whether we like it or not, and the first thing on that list is the human condition, right? We will fall back on human ways every time because we're human beings. It is what it is. It's okay.

As part of that, could there be an over-reliance on things like AI? Potentially. I don't want to use the word laziness, so I'll translate it into over-reliance. Could there be the possibility of human error? Again, good data might be lacking at times because of human error and those types of things. I think there need to be safeguards in place, consistent with regulatory requirements, around how we use AI. There still has to be a human being with his or her hands on the wheel, if you will, in control of this.

I don't think Elie would advocate for somebody who has 50 or 60 notes to do at the end of the day, trying to keep up with a busy documentation workflow, to simply click an AI button, let MARA do it for them, and then turn around and walk away, crossing their fingers and hoping that someday MARA can even sign their notes for them. Those notes have to be reviewed for quality. They have to be reviewed for applicability to the patient's individualized case, and they have to be revised in response to that review in order to use this responsibly.

Regulatory requirements probably still need to be developed around some of those things, in terms of how we use AI in an appropriate and responsible way.

Levy: We have thought thoroughly about this concern, and we have different solutions for it. One of them is that before the clinician or practitioner signs the note, the system asks the user to confirm that they have actually read through it, that they know generative AI might not be 100% accurate, and that they need to make sure the note reflects what really went on in the session. That's one of the things.

It's like having two signatures: one to accept that you have actually read through the generated note, and a second one that you use to complete the note.

The other mechanism we are incorporating into the product is that the AI inserts certain strings of text that don't make any sense. Until the person deletes those strings, they cannot sign the note. The string is very simple; it just says, "You need to delete this." We insert it in different places in the note. If you have a five-paragraph note and it is inserted in two places and you don't know which ones, you are forced to really read through the note to delete them. Only then can you accept that you read it, and the system knows that's the case.
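Mechanically, that "forced read" check is simple to picture: drop a sentinel string into random spots in the draft and refuse the signature until every copy is gone. The sketch below is an assumed reconstruction of the idea, not Sunwave's code; `insert_sentinels` and `can_sign` are hypothetical names.

```python
# Hypothetical sketch of the forced-read safeguard described above.
import random

SENTINEL = "You need to delete this."

def insert_sentinels(note: str, count: int = 2) -> str:
    """Append the sentinel to `count` randomly chosen paragraphs."""
    paragraphs = note.split("\n\n")
    for pos in random.sample(range(len(paragraphs)), k=min(count, len(paragraphs))):
        paragraphs[pos] += " " + SENTINEL
    return "\n\n".join(paragraphs)

def can_sign(edited_note: str) -> bool:
    """The note may only be signed once no sentinel text remains."""
    return SENTINEL not in edited_note

draft = insert_sentinels("Paragraph one.\n\nParagraph two.\n\nParagraph three.")
assert not can_sign(draft)                 # signing is blocked initially
clean = draft.replace(" " + SENTINEL, "")  # clinician reads the note and deletes each sentinel
assert can_sign(clean)                     # now the signature is accepted
```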

We are definitely working on that. It's super important to keep the human element in this. Somebody once asked me, "Do you think AI is going to actually replace the therapist?" I don't see that in the foreseeable future, but I don't want to predict the future. When we try to predict the future, we make mistakes. The analogy I use is that it's like trying to play chess on a board with no pieces. Somebody says, "Move to E4, and then F8." I am lost already; I cannot follow that. We are better off looking at the board with the pieces on it and making our move.

What I do see happening sometime soon is that the person doing the therapy session or group facilitation will have some sort of earpiece. The AI will be listening to the session, and as the therapist engages with the patient, the AI might trigger some thoughts for the therapist about what to look into: "You are moving away from a clinically proven method like CBT. Maybe you should ask this type of question." I see those types of things as very viable in the short term. Short term for me may not be the same as for you, so I will not quantify it; I will just say it is something I believe we are going to enjoy seeing.

BHB: What concerns have payers had with AI-supported documentation and its accuracy, given that it drives payment and access to benefits?

Dr. Wind: I can see a future in which AI is not simply a tool used by providers. Make no mistake, with all due respect to payers, they will likely produce their own AI tools as well. It's an interesting thought, right? Food for thought: two warring factions of AIs battling each other over whether to reimburse or not to reimburse. It creates this picture of a universe where you go, "Oh, my gosh, where are we headed?"

I think Elie has spoken to some of the points that would be major concerns for payers. Are we keeping our hands on the wheel in terms of how we use AI? Are we using it responsibly? Are we allowing AI to "do our job for us" when it comes to some aspects of service delivery and many aspects of documentation? There are safeguards that I think payers can put in place, not the least of which might be developing their own AI tools to do checks and balances on that type of thing.

Right now, though, I think similar concerns are in place, because Elie used two of the words I dislike hearing most when I'm speaking to clinicians: copy and paste. We work in a busy industry. The workflow is incredible, fast-paced. We're in a managed care climate where it's an uphill climb to keep up with all of the checkboxes we need to check off. The workflow often involves templates that get copied and pasted, so that if you look across a group of patients treated by one clinician, the notes look almost identical. That flies in the face of individualized patient care, of treating each person as an individual human being.

We're already struggling with a lot of that. I think this will be the next level in how we engage with payers around the use of AI in service delivery.

Levy: I'm going to reference the first part of Brian's response. I think of that as the Star Wars of reimbursement: the facility's AI and the insurer's AI fighting each other over whether a claim gets reimbursed or denied.

One thing I can say is that there was a paper published not too long ago. I don't remember exactly who the author was, and I can't point you to it. They made the case that it is impossible for any software to detect whether something was written by an AI. This was not done in the space of healthcare or behavioral health; it was done in the context of college work. It was a research project to find out whether there is any way to detect if a student actually wrote something or used an AI to write it. It's impossible, and the paper makes the case for why.

That should give us a sense of relief in terms of whether the insurance company can tell we are using AI. At the same time, the controls we have put in place, which I was talking about before, to ensure that the person is actually reading and completing the note properly, should make us feel comfortable as well. The other thing I wanted to add is that if we were not to use this technique, the alternative is copy-paste. That is not a hypothetical I'm inventing here; it's something that is real.

Given the amount of work that goes into documentation, the only option clinicians otherwise have is some sort of copy-paste that they then edit. With the AI-generated draft, the system is actually creating a compliant note, built to be compliant based on what the clinician did earlier in the session. The insurance company will have a really hard time arguing that a note is not valid because AI helped create it.

You can make the same case about typing a text message with auto-complete: are you really typing that message, or did the AI doing the auto-complete write it for you?

Everybody has to get up to speed with what technology is doing today. There is a saying: AI will not take your job; a person using AI will. Knowing that, I think it is a no-brainer. We need to get on the train, keep working with the new tools available to us, and make this a better place. All of the work we are doing in behavioral health is really important, and I think it is important to keep up.

Sunwave's integrated software solutions give you total access to all aspects of your operation: EMR, CRM, RCM, and alumni support. Information remains secure and HIPAA-compliant, and data-driven insights allow for better billing and collections. To learn more, visit: https://www.sunwavehealth.com/.
