Texting With an AI Chatbot for Mental Health: Are There Psychosocial Risks?

Summary: Stories about AI chatbots and mental health are in the news almost every day, often with alarming anecdotes about chatbot interactions exacerbating mental health symptoms. A new study documents another aspect: the impact of chatbot interactions on loneliness and social interaction.

Key Points:

  • Every day, millions of people use AI chatbots for various reasons.
  • Some use them as companions, some use them to find information, and some use them to help with work or school.
  • A growing percentage use them for mental health support, to work through personal feelings, or to make important personal decisions.
  • A new study shows how different interaction styles can lead to different psychosocial consequences, some negative and some positive.

Interaction Style, AI Chatbots, and Mental Health

We’ll start with something important that everyone should know:

No AI chatbots have received approval from the Food and Drug Administration (FDA) to diagnose or treat mental health disorders.

If you have a mental health disorder, we encourage you to contact a licensed, qualified, and experienced human mental health professional. To learn more about why, please navigate to our blog and read this article:

What’s Going On With ChatGPT and Mental Health?

The short version of what’s going on is that when people use chatbots as a stand-in for a human therapist, chatbots may offer support and advice that increases the risk of harm to people with mental health disorders, up to and including facilitating suicidal behavior. In other words, the risks of seeking mental health support or advice from an AI chatbot outweigh the rewards.

We advise people not to do it.

That’s the scariest part of the situation, and it’s directly related to how people use AI chatbots. In this article, we’ll look at the AI chatbot and mental health situation from a slightly different angle. We’ll review data from a study on the psychosocial impact of AI chatbot interactions, categorized by user characteristics and interaction styles.

First, let’s clarify what we mean by psychosocial.

In healthcare, providers have traditionally focused primarily on biological factors. They use tests to determine the presence of disease or illness, then address those diseases or illnesses through biological means, with medication often the first, and sometimes the only, treatment option offered.

In recent years, our treatment paradigm has shifted from the purely biological approach to what’s called a biopsychosocial approach, or the biopsychosocial model:

“The biopsychosocial model is a general model positing that biological, psychological (which includes thoughts, emotions, and behaviors), and social (e.g., socioeconomical, socioenvironmental, and cultural) factors, all play a significant role in health and disease.”

Therefore, when physicians, scientists, or experts talk about the psychosocial aspects of health, they’re referring to the factors aside from biology, i.e., the “…co-influencing psychological, sociological and existential factors” that have a direct impact on health and wellness.

New Research on the Psychosocial Impact of Chatbot Interactions

In a study published in March 2025 called “How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study,” a group of researchers designed an experiment with the following goal:

“To investigate how AI chatbot interaction modes and conversation types influence psychosocial outcomes such as loneliness, social interaction with real people, emotional dependence on AI and problematic AI usage.”

The research team recruited 981 people and monitored their interactions with an AI chatbot over a four-week period. They collected data on the following:

User behavior:

  • Duration of use
  • Type of use
  • Personal characteristics
  • Attitude toward AI
  • Level of disclosure
  • Type of disclosure

Psychosocial outcomes after a month of use:

  • Loneliness
  • Socialization with other live humans
  • Emotional dependence on chatbot
  • Problematic use of chatbot

They also collected data on the impact of interaction mode (text, neutral voice, and engaging voice), which we’ll discuss in another article at a later date. What we’ll share below are the results most relevant to user psychosocial factors.

The Results: User Style, Chatbots, and Psychosocial Outcomes

The researchers identified four types of users, each with their own characteristics, and each with their own psychosocial impact profile.

Here’s what they found.

AI Chatbots and Mental Health: Interaction Patterns and Outcomes by User Style/Approach

[Note: this study is a preprint currently under peer review, released publicly to address a pressing public health concern]

Socially Vulnerable Users:

  • Initial user traits: emotionally avoidant, insecure attachment, low levels of social contact
  • Initial user view of chatbot: considers chatbot a friend
  • Interaction style, user: talks about personal issues, seeks emotional support, high level of daily use, frequently shares emotions/personal feelings
  • Interaction style, chatbot: displays empathy, offers considerate responses, more supportive in text mode than voice mode
  • Associated psychosocial outcomes (negative): high loneliness, low socialization

Heavy Technology Users:

  • Initial user traits: experienced chatbot users, develops emotional dependence on chatbot quickly, shows problematic use early
  • Initial user view of chatbot: trusts chatbot, views chatbot as a friend, believes their emotions affect chatbot, believes chatbot is concerned about them/their emotions
  • Interaction style, user: rarely talks about personal issues, high levels of daily use, rarely talks about emotions, primarily seeks advice and recommendations for/about non-personal topics
  • Interaction style, chatbot: maintains tone of professional distance, offers practical answers, focuses on helping user acquire knowledge, low level of emotional content
  • Associated psychosocial outcomes (negative): high emotional dependence, high problematic use

Non-Emotional/Dispassionate Users:

  • Initial user traits: views chatbot positively, more common among men than women, displays alexithymia (i.e., difficulty identifying and expressing their own emotions), sensitive to criticism, avoids conflict
  • Initial user view of chatbot: sees chatbot as capable of recognizing and responding to emotion
  • Interaction style, user: low level of daily use, rarely expresses emotion in chat, uses chatbot to find facts and objective information
  • Interaction style, chatbot: doesn’t use emotional language, doesn’t engage deeply on personal topics, less likely to offer emotional support
  • Associated psychosocial outcomes (positive): low loneliness, high socialization

Casual Users:

  • Initial user traits: low level of previous chatbot interaction, low level of trust in chatbot
  • Initial user view of chatbot: doesn’t think/believe chatbot understands/is concerned about their emotions
  • Interaction style, user: low level of daily chatbot use, engages in personal conversations but keeps them casual/superficial, will engage in chitchat for emotional support but doesn’t disclose deeply personal information
  • Interaction style, chatbot: doesn’t use emotional language, doesn’t offer much emotional support, engages in small talk/chitchat with no deep connection
  • Associated psychosocial outcomes (positive): low emotional dependence, low problematic use

We’ll discuss these results below.

Problems: High Daily Use and High Emotional Investment

Here’s how the study authors characterize the outcomes we list above:

“Our findings reveal that while longer daily chatbot usage is associated with heightened loneliness and reduced socialization, the modality and conversational content significantly modulate these effects.”

Put another way: high daily use combined with high emotional investment and high levels of personal and emotional disclosure can lead to increased loneliness, problematic AI chatbot use, and emotional dependence on AI chatbots, as well as decreased socialization with other humans.

In contrast, low daily use combined with low emotional investment and low levels of personal disclosure is associated with positive psychosocial outcomes, including decreased loneliness, high levels of social contact with real humans, low levels of problematic AI use, and low dependence on AI chatbots.

These results allow us to answer the question we pose in the title of this article:

Yes, there are negative psychosocial consequences associated with specific types of chatbot use, but there are also positive psychosocial consequences. In a nutshell, high use and high emotional investment in conversations can lead to negative outcomes, while low use and low emotional investment can lead to positive outcomes.

We’ll close this article with further insight from the study authors:

“Addressing the psychosocial dimensions of AI use requires a holistic approach that integrates technological safeguards with broader societal interventions aimed at fostering meaningful human connections.”

We concur wholeheartedly: genuine human connection makes the difference, and a chatbot simply cannot replace a human. If you or someone you know is facing mental health challenges, please seek support from a real, living human therapist, or encourage them to do so.

About Angus Whyte

Angus Whyte has an extensive background in neuroscience, behavioral health, adolescent development, and mindfulness, including lab work in behavioral neurobiology and a decade of writing articles on mental health and mental health treatment. In addition, Angus brings twenty years of experience as a yoga teacher and experiential educator to his work for Crownview. He’s an expert at synthesizing complex concepts into accessible content that helps patients, providers, and families understand the nuances of mental health treatment, with the ultimate goal of improving outcomes and quality of life for all stakeholders.