AI Therapy: How Chatbots Are Transforming Mental Healthcare

Chatbot therapists are gaining popularity. But can they compete with the real thing?

Written by Ellen Glover
Published on Apr. 30, 2024

The launch of ChatGPT transformed how we interact with artificial intelligence, making conversations with chatbots feel like second nature — whether it be for customer service, personal assistance or entertainment. Now, millions of people are turning to AI for one of the most personal tasks: therapy.

What Is an AI Therapist?

AI therapists, or therapy chatbots, use artificial intelligence to provide mental health support through automated conversations and therapeutic exercises. While designed to mimic human interactions, AI therapists are not intended to replace human therapists, as they lack the emotional intelligence and nuanced understanding of trained mental health professionals.

With a global shortage of therapists and a surge in people who need their help, these AI-powered bots are meant to fill the gap, offering cheap — or free — mental health assistance. 

And lots of people are using them, with apps like Woebot and Youper garnering more than a million downloads apiece. AI chat therapists have been rolled out everywhere from a maternity hospital in Kenya to refugee camps in Syria. In the United States, some apps have even been recognized by the Food and Drug Administration (FDA) and are being clinically tested.

While therapy chatbots may be an accessible tool for managing mental health, concerns remain about how effective — and ethical — they are to use when it comes to addressing the complex emotional needs of humans.


 

How Does AI Therapy Work, Exactly?

Therapy chatbots are used to support people dealing with mild depression, loneliness, anxiety and other mental health issues. When people come to them with a given problem or stressor, these bots respond in ways a real therapist might — they ask questions, suggest coping mechanisms, set goals and offer to hold users accountable. In some cases, they use AI to track, analyze and monitor the person’s mood, mimicking a human therapist. 
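For a rough sense of what that mood tracking can look like under the hood, here is a minimal, hypothetical sketch in Python (not modeled on any particular app): each check-in stores a self-reported score, and the bot flags a sustained dip so it can suggest a coping exercise or surface the trend to a clinician. The function names and the seven-day threshold are illustrative assumptions, not a real product’s logic.

```python
from datetime import date
from statistics import mean

# Hypothetical mood log: each check-in records a date and a 1-10 self-rating.
mood_log: list[tuple[date, int]] = []

def record_checkin(day: date, mood_score: int) -> None:
    """Store one daily check-in (1 = very low, 10 = very good)."""
    mood_log.append((day, mood_score))

def recent_average(days: int = 7) -> float:
    """Average mood over the most recent `days` check-ins."""
    recent = [score for _, score in mood_log[-days:]]
    return mean(recent) if recent else 0.0

def flag_low_mood(threshold: float = 4.0) -> bool:
    """Flag a sustained low mood so the app can suggest coping exercises
    or, in a clinical setting, surface the trend to a human therapist."""
    return len(mood_log) >= 7 and recent_average(7) < threshold
```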

Despite their capabilities, these chatbots are not a substitute for real therapists. They can’t make diagnoses or prescribe medication, and they aren’t held to the same ethical and legal standards as licensed healthcare professionals. But they can provide a sort of “simulation” of what is typically done in talk therapy, said Chaitali Sinha, senior VP of healthcare and clinical development at AI therapy app Wysa. “These chatbots can actually talk to you, and help you work through the thoughts that you’re having.” 

Chatbots are also used in conjunction with conventional therapy. They offer personalized advice and coping strategies — such as breathing exercises, journal prompts and words of affirmation — to patients in between sessions with their human therapists. The idea is that this additional care can not only help patients better manage their mental health, but also serve as an early risk detection tool for clinicians, helping them identify potential issues before they become more serious.

“It’s a way to have continuity, especially if the therapist also has access and can see the times that you're talking to it and what the conversations are,” said Jessica Jackson, a licensed psychologist based in Texas. “I think it can be used as an adjunct to therapy. I don’t think it’s a replacement for therapy.”
 

Is AI Therapy Effective?

Research on AI therapy’s effectiveness is limited, but early findings suggest that chatbots can complement conventional therapy and help reduce symptoms of depression, anxiety and stress (at least in the short term).

Another study indicates that processing trauma and emotions through writing is an effective coping strategy, which may suggest that conversations with a chatbot could be beneficial — even if they don’t perfectly replicate the experience of therapy.

That said, the use of chatbots in mental healthcare is still in its early days, and more research is needed to better determine how effective AI-assisted therapy actually is long-term.

“Until we have really strong data that an experience with AI is as good or better than traditional therapy, I think we need to be very careful about recommending it as an alternative,” Brent Franson, founder and CEO at wellness app Most Days, told Built In.


 

Benefits of AI Therapy Chatbots

Like human therapists, chatbots can empower users to say what they want without the fear of offending or burdening anyone, Jackson said. “It’s just about me. And I can address my needs without having to worry about what’s happening to the other person.”

Chatbots also offer a unique set of benefits that enhance mental healthcare:
 

Available 24/7 

Even the most dedicated therapist in the world has to eat, sleep and see other patients. But a chatbot is free to talk whenever a person wants, and for as long as they want — whether it be at 2 a.m. when they can’t sleep or during an anxiety attack at a holiday dinner. There are no waiting rooms or appointments, and responses are instant.

 

No Wait Times

Supporters of chatbot therapy argue that it is a practical way to address the soaring global demand for mental healthcare, at a time when there are simply not enough clinicians to go around. 

In the United States, for example, people can wait weeks or even months to see a therapist, even if they have insurance. At the same time, nonprofit Mental Health America estimates that more than half (54.7 percent) of U.S. adults with a mental illness do not receive treatment, totaling more than 25 million individuals.

“We don’t have enough providers for that many people,” Jackson said. “So even if I am interested, I have to be able to find someone who can see me, who has time on their calendar, and I have to be able to afford it. For some people, that knocks them out of therapy completely.”

 

Can Be Customized to Each Patient

The nature of chatbots allows them to provide patient education and support in a conversational way. By gathering context from user interactions, they adjust their messaging to align with an individual’s specific communication preferences, problems and goals. This can help users feel more comfortable and engaged with a chatbot, potentially leading to more open discussions and better therapeutic outcomes. 

For example, a study of 1,200 users of therapy chatbot Wysa found that a “therapeutic alliance” between bot and user formed within just five days. Users quickly came to believe that the bot liked and respected them — that it cared — which led to feelings of “honesty, safety and comfort with Wysa.”

 

Non-Judgmental

One of the biggest obstacles in effective therapy is a patient’s reluctance to be fully honest with their clinician. In one study of 500 therapy-goers, more than 90 percent confessed to having lied at least once.

“It can be scary to say to a human being, ‘I have a drinking problem,’ or ‘I cheated on my spouse,’ or ‘I’m depressed’ — things that you can barely admit to yourself, or that are stigmatizing,” Franson said. “With AI, there’s a little bit less fear.”

With chatbots, users can safely and anonymously discuss their darkest moments and most personal feelings, without fear of judgment from any humans. 

 

Challenges of AI Therapy Chatbots

While chatbots bring many benefits to mental healthcare, they also present several challenges.
 

Rules-Based AI Lacks Nuance

Most of the mental health chatbots out there now rely on what’s called “rules-based AI,” meaning they can only offer responses that have been pre-written and approved by human experts. 

This means that the bots aren’t adaptable, and they can’t spontaneously formulate their own follow-up questions or guide conversations in new directions in the same way a human therapist can. Their rigid programming limits their ability to dynamically engage with users. 

“The chatbot is only going to be able to give what it can give,” Jackson said. “It might be able to give a few coping skills. But it’s not going to actually be able to fully do the nuanced piece of what therapists are going to do.”
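To make the constraint concrete, a rules-based bot essentially maps what the user says to a pre-approved reply. The hypothetical, heavily simplified sketch below illustrates the idea: keywords trigger canned responses that, in a real app, would be written and vetted by clinicians, and when nothing matches, the bot can only fall back on a generic prompt rather than improvise the way a human therapist could. All of the keywords and replies here are made up for illustration.

```python
# Hypothetical, greatly simplified rules-based responder: every reply below
# stands in for content that, in a real app, would be written and approved
# by clinicians. The bot can only ever return one of these canned responses.
RULES = {
    ("anxious", "anxiety", "panic"): "Let's try a slow breathing exercise together.",
    ("can't sleep", "insomnia"): "Would you like to walk through a wind-down routine?",
    ("sad", "down", "depressed"): "Thanks for sharing that. What's been weighing on you most?",
}

FALLBACK = "I'm not sure I understand. Could you tell me more about how you're feeling?"

def respond(user_message: str) -> str:
    text = user_message.lower()
    for keywords, reply in RULES.items():
        if any(keyword in text for keyword in keywords):
            return reply
    # No rule matched: the bot cannot improvise a follow-up the way a
    # human therapist (or a generative model) could.
    return FALLBACK

print(respond("I feel really anxious about tomorrow"))
```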

 

Generative AI Is Prone to Giving False or Harmful Guidance

Some therapy chatbots are powered by generative AI, which means they can generate their own responses. This makes them more difficult to test for clinical validity and safety. It also makes them more susceptible to making things up, or writing harmful messages, which could hinder the therapeutic process. 

“The key worry is primarily from a clinical safety lens,” Wysa’s Sinha said. “[Providers] need to be sure that this clinically approved content is what is being provided to the patient every single time, and won’t take them on deviated pathways or trigger a dangerous response. Those have been the initial worries with generative AI.”
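In practice, providers often try to manage that risk with guardrails: screening each generated reply before it reaches the user and substituting clinician-approved content, or a crisis resource, when a check fails. The sketch below is a hypothetical illustration of that general pattern, not any vendor’s actual safety system; real products rely on far more sophisticated classifiers than keyword matching.

```python
# Hypothetical guardrail pattern for a generative therapy bot: the model's
# draft reply is screened before it is shown, and risky content is replaced
# with pre-approved text. Real systems use far more sophisticated classifiers.
CRISIS_TERMS = ["suicide", "kill myself", "self-harm"]
CRISIS_RESOURCE = "If you're in crisis, please call or text 988 to reach a trained counselor."
SAFE_FALLBACK = "Let's pause here. Would you like to try a grounding exercise instead?"

def is_crisis(user_message: str) -> bool:
    """Very crude crisis detection; stands in for a clinical triage model."""
    return any(term in user_message.lower() for term in CRISIS_TERMS)

def screen_reply(draft_reply: str, approved_phrases: list[str]) -> str:
    """Only pass the model's draft through if it stays within approved content."""
    if any(phrase in draft_reply for phrase in approved_phrases):
        return draft_reply
    return SAFE_FALLBACK

def respond(user_message: str, draft_reply: str) -> str:
    if is_crisis(user_message):
        return CRISIS_RESOURCE  # triage toward human help, as the article describes
    return screen_reply(draft_reply, ["breathing", "journaling", "grounding"])
```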

 

Unsuitable for More Serious Mental Health Conditions

Sinha, who is also a trained psychologist, said these chatbots are not equipped to support people with conditions like schizophrenia or eating disorders due to their severity and complexity.

Treating schizophrenia, for example, requires a lot more intervention than just text exchanges. Some symptoms can only really be detected during in-person interactions, Sinha explained, and the patient may not be able to articulate their symptoms at all, making input from friends and family necessary. Medication is usually involved. And the wrong kind of treatment can have “really dangerous” consequences, she said. Chatbots simply can’t handle that level of care.

Many mental health chatbots repeatedly warn users that they are not designed to intervene in acute crisis situations, like suicidal ideation, and will triage people toward more appropriate services when they detect those behaviors.

 

Privacy and Accountability Concerns

In conventional therapy, clinicians are held to a very high standard of care. With some exceptions, they have to keep whatever a patient tells them confidential. If they don’t, they could potentially lose their license.

Chatbots are not held to these same standards. Most of them are categorized as “wellness,” not healthcare, meaning they don’t have to adhere to the same laws — namely HIPAA, which protects the privacy of patients’ medical records and other personal information. And because therapy chatbots don’t explicitly claim to diagnose or treat medical conditions, they aren’t regulated by the FDA.

Some chatbots follow HIPAA, GDPR and other data privacy laws, but most don’t. So all of the highly sensitive information they collect from users is not as tightly protected. In fact, a 2023 survey by the Mozilla Foundation, an independent global watchdog, found that of the 32 most popular mental health apps, 19 were “failing” to safeguard users’ privacy.

“People aren't thinking about that when they talk to a chatbot, and I think those are things that we need to have more regulation about,” Jackson said. “When you talk to this chatbot, who is holding it accountable? And where does the data go? Can it be used for other reasons?”

 

Lack of Human Presence and Empathy

Even if mental health apps figure all of these other challenges out, there is one hurdle they will likely never overcome: A chatbot will never be a human, no matter how convincingly it mimics one. 

Chatbots are not sentient; they have no emotions, empathy or life experience. They are incapable of relating to or connecting with people the way a real person can. This can pose a substantial challenge when they are applied in an arena like therapy, which is typically built on human interaction, trust, emotional intelligence and a sense of mutual understanding.

“Feeling that sense of being witnessed, seen or heard by another person — just the fact that there is another person doing that with you and you are feeling a sense of connection to them — has a lot of value,” Sinha said. “That is incredibly hard to replicate, and shouldn’t be done away with.”


 

Popular AI Therapy Chatbots to Know

Woebot

Described as an “AI-driven digital companion,” Woebot offers support through chat-based conversations rooted in cognitive behavioral therapy (CBT), a goal-oriented approach that helps individuals identify and change negative thought patterns and behaviors. The chatbot is rules-based, so all of its responses have been developed by professionals and are overseen by clinicians. And it uses natural language processing to interpret user inputs and decide what pre-written response to give — whether that be a piece of advice or words of encouragement.

With more than 1.5 million users, Woebot is one of the most popular therapy chatbots on the market today. The company is also HIPAA compliant, and says its user data is safeguarded as protected health information (PHI), meaning it never shares or sells its users’ data.

 

Youper

With Youper, users can screen themselves for common mental health conditions, and then engage in personalized conversations based on CBT and mindfulness techniques. As the chatbot learns more about the user, it fine-tunes its responses to better fit their needs. The app also offers a mood tracker, personality assessments, mindfulness exercises and ways for users to track their mental health progress over time.

Youper uses a combination of rules-based and generative AI. Its responses are AI-generated, but its underlying large language model (LLM) has been trained on specific clinical data, and has strict guardrails to prevent harmful responses or data leaks, according to the company. A study co-authored by researchers at Stanford University and Youper determined that the app is an effective treatment for anxiety and depression.

 

Wysa

Wysa is designed to help users with mild depression, stress and anxiety, with a particular focus on meditation. The chatbot operates on a rules-based framework, offering clinically validated support and advice through both automated daily check-ins and conversations. There is also an option to connect with a human coach or therapist, who offers 1-on-1 sessions and unlimited messaging between sessions, according to the company.

In an independent clinical trial published in 2022, researchers found Wysa to be effective for managing chronic pain and associated depression and anxiety, and comparable to in-person psychological counseling. Soon after, the company received a Breakthrough Device Designation from the FDA to further develop and test its product. The U.K.’s National Health Service is also using Wysa to help treat depression and anxiety in teens and young adults, including those on the waitlist to see a therapist.

 

Earkick

Earkick is a free tool people can use to get advice and better understand their emotional health. Users can communicate with the chatbot — personified as a bandana-wearing panda — through text or voice memos, and the chatbot responds in real time using generative AI.

In addition to the chatbot, the app is especially focused on tracking users’ mental and emotional state, and considers things like voice pitch, sleep duration, menstrual cycle and contextual information about their work, family and friends to make mental health assessments. Earkick is also completely anonymous; it does not collect email addresses or demographic data like name, age, gender or location.

 

Elomia

Elomia provides users with mindfulness techniques meant to help them cope with anxiety, depression, loneliness, work burnout, difficult relationships and other common stressors. It also offers guided exercises, daily meditations and a journal feature for users to track their progress.

Elomia claims that after six months of using the chatbot, users have reported an increased ability to cope with difficult emotions, improved problem-solving skills, increased motivation and productivity, improved social skills and more. And a 2021 peer-reviewed study found that regular usage of Elomia led to a “significant reduction” in depression and anxiety.

Frequently Asked Questions

What is an AI therapist?

An AI therapist is a chatbot that uses automated, text-based conversations to provide mental health support. While designed to mimic human interactions, AI therapists are not intended to replace human therapists, as they lack the experience and training of mental health professionals.

Can AI replace a human therapist?

No, AI is not sophisticated enough to replace human therapists at this time.

Are there AI apps for therapy?

Yes, there are hundreds of AI-powered therapy apps on the market today. Some popular examples include Woebot, Youper, Wysa, Earkick and Elomia.

If you or someone you know is experiencing a mental health crisis, call or text 988. In case of an emergency, call 911 or seek help from a local hospital or mental health provider.
