Therapy bots and general-purpose AI chatbots still fall short, offering biased and unsafe responses. – Psychology Today


Reviewed by Gary Drevitch
As a psychiatrist and therapist, I often hear the question: “Can AI replace a therapist?”
A recent study delivers a compelling answer: Not yet. Perhaps never entirely.
The research explores whether large language models (LLMs) such as GPT-4o, along with commercially available therapy bots, can serve as autonomous therapists, and it exposes dangerous shortcomings in all of them.
The reasons go beyond hallucinations or factual errors. OpenAI has recently acknowledged that the sycophantic behavior of ChatGPT “can raise safety concerns—including around issues like mental health, emotional over-reliance, or risky behavior.”
Researchers focused on high-acuity mental health symptoms (conditions where missteps can be life-threatening) and scenarios incompatible with “sycophancy,” a known issue in LLM behavior in which models excessively agree with and validate users. The study tested multiple models and popular therapy bots, prompting them with symptoms associated with suicidal ideation, hallucinations, delusions, mania, and obsessive and compulsive behavior. Researchers also used prompts derived from real therapy transcripts.
The results were concerning: the models expressed stigma toward certain conditions and responded inappropriately, and at times unsafely, to high-risk prompts.
Therapy is not just conversation; it is a human relationship built on trust, empathy, confidentiality, and clinical expertise. LLMs, while helpful in certain structured tasks, currently perform at best as “low-quality” therapists, marked by limited empathy, bias, and gaps in cultural understanding. Worse, they operate in an unregulated space that lacks the clinical safeguards and oversight built into the licensing and ethical codes required of human providers.
Several underlying factors explain why a human-AI gap in therapy persists.
Despite these serious limitations, AI can still be helpful in supportive, adjunctive roles when paired with human supervision.
The effectiveness of therapy lies not just in the language; it lies in the human presence and accountability of ethical, experienced clinical care. AI chatbots can validate individuals, provide explanations, and remain constantly available, pleasing, compliant, and responsive, but it is precisely these features that keep them from being safe as autonomous therapists for now.
The goal should be to integrate AI in a thoughtful, ethical, and evidence-based manner that prioritizes patient safety and increases the availability of effective treatment.
Copyright © 2025 Marlynn Wei, MD, PLLC. All rights reserved.
To find a therapist, visit the Psychology Today Therapy Directory.
References
Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. arXiv:2504.18412
Marlynn Wei, M.D., J.D., is a board-certified Harvard and Yale-trained psychiatrist and therapist in New York City.