Image: a man interacting with an AI chatbot

Summary: AI induced psychosis is a nonclinical term that describes psychotic episodes that follow engagement with a large language model (LLM), also known as an AI chatbot.

Key Points:

  • AI induced psychosis has not been formally identified as a clinical disorder or phenomenon in the DSM-5 or the International Classification of Diseases (ICD), the go-to reference manuals for the professional diagnosis of mental health disorders.
  • Recent reports describe psychotic episodes in people after they interact with AI chatbots, most often involving delusions and/or hallucinations.
  • Mental health professionals indicate these negative consequences are associated with the default sycophancy present in AI bots: the goal to keep people engaged overrides the need to offer safe and helpful advice.

AI Induced Psychosis: Are AI Chatbots Good for Mental Health?

No, not really.

Here’s an official statement from the American Psychological Association (APA) about AI chatbots and mental health:

“…no AI chatbot has been FDA-approved to diagnose, treat, or cure a mental health disorder.”

Nevertheless, millions of people use AI chatbots every day, with the percentage of people using them for mental health increasing rapidly.

Reports show that people adopted AI chatbots faster than any other type of app ever released. Here’s the data:

ChatGPT: Fastest Growth Among Recent Apps

  • Instagram: 2.5 years (30 months) to reach 100 million users
  • TikTok: 0.75 years (9 months) to reach 100 million users
  • ChatGPT: 0.17 years (2 months) to reach 100 million users

To put those figures in perspective, we’ll rephrase:

ChatGPT reached 100 million users in less than a quarter of the time it took TikTok (two months versus nine). We all know that when TikTok appeared, it seemed like it was instantly everywhere – but the data show ChatGPT has it beat, hands-down.

Knowing that chatbot uptake has outpaced that of any other type of app we’ve ever seen, let’s examine the latest research on AI induced psychosis.

What Are the Potential Negative Outcomes Associated with AI Chatbots for People Who Have Psychosis?

In the early days of AI chatbots – all the way back in 2023 – a respected mental health researcher published an editorial on the dangers of AI chatbots for people already diagnosed with mental health disorders with psychotic symptoms. In that article – read it here – he anticipated the phenomenon of AI induced psychosis and concluded that AI could increase the frequency and severity of several types of delusions, including delusions of:

  • Persecution: These delusions occur when a person believes, despite the absence of proof, that others intend to cause them harm or are out to get them.
  • Reference: These delusions occur when a person believes, despite the absence of proof, that events – both world events and the actions of individuals – revolve around or are in some way directly related to them.
  • Thought: These delusions occur when a person believes, despite the absence of proof, that other people can read/hear/control their personal ideas, feelings, and actions. In many cases, people with delusions of thought believe – without proof – that others have access to their inner thoughts via the television, the radio, or the internet.
  • Guilt: These delusions occur when a person believes, despite the absence of proof, that they’re responsible for a significant catastrophe or major event.
  • Grandeur: These delusions occur when a person believes, despite the absence of proof, that they have special powers, a secret identity, or access to exclusive information no one else has, which sets them apart from and above everyone else.

The author of the editorial, Dr. Soren Ostergaard, offered this prescient analysis:

“I am convinced that individuals prone to psychosis will experience…delusions while interacting with generative AI chatbots. I encourage clinicians to be aware of this possibility and become acquainted with generative AI chatbots.”

Before we examine new research – and learn whether Dr. Ostergaard was right or wrong – let’s review the latest information about AI chatbots and mental health, including the dangers of AI induced psychosis.

AI Induced Psychosis: Risk Caused by the Goal of the Programming

In a recent interview, a mental health expert from Stanford University described the problem with using AI chatbots for therapy:

“AI is not thinking about what’s best for you, what’s best for your well-being or longevity. It’s thinking, ‘Right now, how do I keep this person as engaged as possible?’”

The most important part of that statement is something millions of people seem to be confused about. AI is not thinking. AI is not intelligent. It’s not capable of independent thought.

To be clear:

AI chatbots are not sentient entities. They’re big, super-fast computer programs – known as large language models, or LLMs – designed with one primary purpose: to respond to inquiries in a manner that keeps the user interested and promotes additional inquiries.

Part of the danger of AI – and part of what contributes to AI induced psychosis – is how we talk about AI and chatbots in general. We talk about chatbots as if they’re intelligent, which is not surprising, because intelligence is in their name. But that’s misleading. AI bots did not pass through a singularity – i.e. a moment when they became self-aware, like HAL in 2001 or Skynet in Terminator – and become independent beings.

That simply never happened, but people who assign independent thought to AI bots act as if it did, and that’s very dangerous for people with psychosis or people with mental health disorders who turn to chatbots for therapy.

Why Do People Think Chatbots Can Help?

The short answer: their programming. Consider what their designers want them to do:

  • Mirror the style of language each user prefers
  • Affirm insights and points of view expressed by users
  • Answer all questions in such a way that the user will keep asking questions
  • Create a pleasant and enjoyable encounter users will want to experience again

To be clear:

These design objectives have nothing to do with the overall mental health and/or wellness of the user, and they’re not consistent with any recognizable therapeutic goals.

The mental health expert from Stanford we mentioned previously took the time to review encounters between users and AI chatbots related to mental health, and concluded that overall, AI chatbots:

  1. Display inappropriate sycophancy, defaulting to agreement with the user almost every time.
  2. Reinforce delusions and cognitive distortions through that constant agreement.
  3. Make those delusions worse by reinforcing them.
  4. Produce responses that, for people with clinical mental health diagnoses, are “causing enormous harm.”

In an interview in the science and technology publication Futurism, a well-known psychiatrist from Columbia University concludes that for people with psychosis or mental health disorders with psychotic features, the original, default design of AI chatbots can “fan the flames, or be what we call the wind of the psychotic fire.” For people with psychotic symptoms, this is the opposite of what they need. The Columbia researcher clarifies that with people experiencing psychosis, “…you do not feed into their ideas. That is wrong.”

How Do We Respond to the Phenomenon of AI Induced Psychosis?

To manage this new trend – people using AI for mental health support – we need to help the general public understand at least two things.

The first is a basic knowledge of the causes of psychosis:

To say that AI causes psychosis is incorrect and not based on any peer-reviewed evidence or data. Any external factor associated with the onset of psychosis is part of a broader set of individual circumstances, including genetics, personal experiences, and environmental factors.

The second is the role of AI chatbots in mental health:

Using chatbots for mental health support is not a good idea, because research shows AI chatbots consistently fail to meet the most basic standards of care associated with good therapy.

To learn more, please navigate to our blog and read this article:

What’s Going On With Chat GPT and Mental Health?

What that article tells us is that the most effective mental health therapy is evidence-based treatment delivered by real live humans, preferably in an in-person, face-to-face scenario. This is especially important for patients with complex mental health disorders – such as those with psychotic features – that require a nuanced understanding of psychosis, a positive treatment alliance with an experienced provider, and a provider capable of making critical decisions based on the complete psychological, emotional, and physical presentation of the patient.

A chatbot is incapable of meeting those core components of good therapy. Therefore, until research proves otherwise, we’ll continue to recommend that patients avoid using chatbots for mental health support, and encourage people to seek support from real, live humans.

We’ll close by saying that yes, the documented cases of people developing psychotic symptoms after interacting with chatbots are real. That means AI induced psychosis is real, to an extent. However, no peer-reviewed evidence suggests that AI chatbots alone can cause any mental health disorder, including psychosis.

Finding Help: Resources

If you or someone you know needs professional treatment and support for psychosis or mental health disorders with psychotic features, please call us here at Crownview Psychiatric Institute: we know how to help.

In addition, please refer to these digital resources to find professional support for psychosis or mental health disorders with psychotic features:

About Angus Whyte

Angus Whyte has an extensive background in neuroscience, behavioral health, adolescent development, and mindfulness, including lab work in behavioral neurobiology and a decade of writing articles on mental health and mental health treatment. In addition, Angus brings twenty years of experience as a yoga teacher and experiential educator to his work for Crownview. He’s an expert at synthesizing complex concepts into accessible content that helps patients, providers, and families understand the nuances of mental health treatment, with the ultimate goal of improving outcomes and quality of life for all stakeholders.