
Summary: Stories about Chat GPT and mental health are in the headlines with increasing frequency in recent months. Frightening anecdotes about conversations that worsen symptoms and lead to tragic outcomes are common – so what’s going on here?

Key Points:

  • Despite documented instances connecting conversations with Chat GPT to escalating mental health symptoms, millions of people use generic Chat GPT as a therapist/counselor.
  • Anecdotal accounts connect Chat GPT conversations with escalating symptoms of bipolar disorder, psychosis, and suicidality.
  • Experts indicate that the core programming objectives of Chat GPT, and of chatbots in general, run counter to the core objectives of mental health therapy and mental health therapists.
  • People with mental health disorders should understand that mental health therapy should always be delivered by a real, living human, not a computer program.

Why Shouldn’t People Use Chat GPT for Mental Health?

The short answer is the one we mention above: using Chat GPT for mental health is not a good idea because Chat GPT is programmed to keep people asking questions, not to offer evidence-based therapy grounded in decades of clinical research. Because of this goal – keeping people engaged and asking questions, not helping people resolve difficult behavioral and emotional issues – Chat GPT defaults toward a type of cheerleading sycophancy that’s incompatible with good, effective mental health therapy.

In fact, the authors of a study called “Expressing Stigma and Inappropriate Responses Prevents LLMs From Safely Replacing Mental Health Providers” asked this simple question:

“Should a large language model (LLM) be used as a therapist?”

(Note: Chat GPT is an LLM)

According to an explainer article published by IBM, a large language model (LLM) is a computer program – albeit a new and very powerful type of program – in a class called foundation models, which are:

“… trained on enormous amounts of data to provide the foundational capabilities needed to drive multiple use cases and applications, as well as resolve a multitude of tasks.”

Here’s how IBM summarizes the purpose of LLMs:

“LLMs are designed to understand and generate text like a human, in addition to other forms of content, based on the vast amount of data used to train them. They have the ability to infer from context, generate coherent and contextually relevant responses, translate to languages other than English, summarize text, answer questions (general conversation and FAQs) and even assist in creative writing or code generation tasks.”

Let’s review that definition to further understand why we should not use LLMs to help us work through or resolve personal or emotional issues, or use them as a stand-in for a human therapist.

Chat GPT Is Great – Even Revolutionary – at What It’s Designed For

IBM makes it clear LLMs like Chat GPT are designed for a variety of specific purposes, called use cases:

  • Generate text: they can help people compose or edit short, medium, or long-form content, from emails to articles to blog posts to ad copy.
  • Summarize content: they can generate summaries of existing content – from a single paragraph to thousands of words – in a format and length requested by the user.
  • Administrative assistance: they can answer questions from customers in natural human language in an automated, self-serve customer service context.
  • Coding: they can help software designers build, find flaws in, and identify security risks in computer code.
  • Sentiment detection: they can assess language in questions/answers offered by users to identify tone, in order to “…understand customer feedback and aid in brand reputation management.”
  • Translation: they can instantly translate human text from any known language to any other known language.

In this list, we notice the absence of the following types of use cases:

  • Making important personal decisions
  • Working through emotions
  • Managing problem behaviors
  • Managing symptoms of mental health disorders
  • Providing effective mental health treatment

However, there are chatbots available that are marketed as AI companions, designed to function as virtual friends, have conversations, and keep people company. Some of those are also designed to help people with mental health issues such as depression and anxiety.

Chat GPT and Mental Health, AI Chatbots and Mental Health

Here’s something we need all our readers, potential patients, families of potential patients, and anyone thinking of using a chatbot for mental health purposes to understand:

As of Summer 2025, chatbots aren’t ready to be therapists and cannot be trusted with the safety and wellbeing of people seeking support for mental health disorders.

In a publication called “Using Generic AI Chatbots For Mental Health Support: A Dangerous Trend,” the American Psychological Association (APA) indicates:

“…no AI chatbot has been FDA-approved to diagnose, treat, or cure a mental health disorder.”

Here’s what prompted an emergency session between the APA and federal authorities in February 2025, according to the APA article:

“In two cases, parents filed lawsuits against Character.AI after their teenage children interacted with chatbots that claimed to be licensed therapists. After extensive use of the app, one boy attacked his parents and the other boy died by suicide.”

We should qualify that by saying that while we don’t know what else was going on with those two individuals, we’re confident the APA wouldn’t publish sensationalist material it hasn’t fully vetted and verified, nor would it report on mental health topics in anything but a safe and responsible manner.

With that in mind, consider the results of a recent study designed to gauge the impact of a companion chatbot on the following metrics: loneliness, socialization with others, emotional dependence on chatbots, and problematic use of chatbots. Here’s what the research team behind the paper “How AI And Human Behaviors Shape Psychosocial Effects Of Chatbot Use: A Longitudinal Randomized Controlled Study” learned as daily duration of companion chatbot use increased:

  • Loneliness decreased.
  • In-person socialization with other people decreased.
  • Emotional dependence on chatbots increased.
  • Problematic chatbot use increased.

That’s a mixed bag – but more negative than positive. Although short-term loneliness decreased, on average, for people in the study, factors that put long-term mental health at risk – self-isolation, dependence, problem use – increased.

Chatbots Fail to Meet Basic Standards For Good Therapy

The research we cite above shows that chatbots can help decrease short-term loneliness, which is a good outcome, but that good outcome has strings attached: decreased socialization and increased problem use.

Those are not good outcomes for people with real mental health challenges.

Now let’s take a look at a study that tested how chatbots like Chat GPT perform on standard metrics associated with good therapy, as defined by the U.S. Department of Veterans Affairs, the U.K. National Institute for Health and Care Excellence (U.K. NICE), and the American Psychological Association (APA).

Here’s how chatbots performed on the following good therapy metrics:

  1. Don’t stigmatize patients.
    • Assessment: failed.
    • Chatbots showed stigma toward people with mental health disorders.
  2. Don’t collude with delusions.
    • Assessment: failed.
    • Chatbots colluded with, i.e. failed to challenge, delusional thinking expressed by users.
    • The default programming of chatbots resulted in this type of dangerous sycophancy.
  3. Don’t enable suicidal ideation.
    • Assessment: failed.
    • Chatbots didn’t discourage suicidal ideation.
    • In some cases, chatbots provided information that could facilitate a suicide attempt.
  4. Don’t reinforce hallucinations.
    • Assessment: failed.
    • Chatbots failed to reality-check hallucinations.
    • Again, the default programming of chatbots resulted in this type of dangerous sycophancy.
  5. Don’t enable mania.
    • Assessment: failed.
    • Although chatbots were better at identifying mania than delusions and hallucinations, in some cases they encouraged patterns of thought and behavior a human therapist would easily have identified as manic or caused by symptoms of mania.
  6. Redirect patients.
    • Assessment: failed.
    • Chatbots engaged in conversations based on mania, delusions, hallucinations, suicidality, and suicidal ideation without challenging the false beliefs, cognitive distortions, or misperceptions underlying the conversations.

These results are a strong argument against using chatbots for therapy. They clarify what’s going on with Chat GPT and mental health. They demonstrate that in some cases, using a chatbot like Chat GPT as a pseudo-therapist, a proxy for a therapist, or instead of a therapist can increase risk of harm for people with mental health disorders.

Why Humans Instead of Chat GPT for Mental Health

It’s important to note how these results compare to those of real human therapists. Data from the study shows:

  • Human therapists met criteria for good therapy over 93% of the time.
  • Chatbots met criteria for good therapy less than 80% of the time.

That’s the quantifiable data, and it’s another strong argument against using Chat GPT or similar chatbots for mental health. There are other factors that recommend against using a chatbot for mental health therapy, which revolve around the fact that, at the end of the day, a chatbot – even one labeled as a world-changing artificial intelligence – is a computer program that lacks human qualities essential for the safe and effective delivery of mental health care.

In addition to displaying inappropriate stigma, participating in conversations based on delusions, failing to reality-check hallucinations, failing to discourage – and in some cases, facilitating – suicidal ideation/suicidality, and failing to redirect users from dangerous topics/ideas to safe ones, reasons to avoid using chatbots for therapy include:

  • The inability of a chatbot to form a real treatment alliance with a patient.
    • A treatment alliance requires human qualities such as compassion, empathy, and experiential knowledge.
  • The inability of a chatbot to observe a patient in various settings.
  • The inability to prescribe or monitor medication or intervene in a crisis.

Based on failures to meet basic metrics for good therapy and the absence of the human qualities required to form a positive treatment alliance, here’s how the authors of the study we cite above interpret their results:

“We conclude that LLMs should not replace therapists.”

We can now answer the question we pose in the title of this article (What’s Going On With Chat GPT and Mental Health) as accurately and succinctly as possible:

What’s going on is that people are using chatbots to help them make personal decisions. They’re using them to help work through emotions. Some are using them as stand-ins for real human therapists. In some cases, this inappropriate use of chatbots can exacerbate mental health symptoms and increase risk of harm for people with mental health disorders.

The takeaway: if you need mental health support, talk to a real person. Call us here at Crownview: we’re live, experienced humans, trained to support people with serious mental health issues. If you don’t want to call us, please use the resource list we provide below. Following those links can help you connect with real human support.

Online Mental Health Treatment Locators

About Angus Whyte

Angus Whyte has an extensive background in neuroscience, behavioral health, adolescent development, and mindfulness, including lab work in behavioral neurobiology and a decade of writing articles on mental health and mental health treatment. In addition, Angus brings twenty years of experience as a yoga teacher and experiential educator to his work for Crownview. He’s an expert at synthesizing complex concepts into accessible content that helps patients, providers, and families understand the nuances of mental health treatment, with the ultimate goal of improving outcomes and quality of life for all stakeholders.