AI chatbots often violate ethical standards in mental health contexts

A new study suggests that popular large language models like ChatGPT can systematically breach established ethical guidelines for mental health care, even when specifically prompted to use accepted therapeutic techniques. The research, which will be presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, provides evidence that these AI systems may pose risks to individuals who turn to them for mental health support.
The motivation for this research stems from the rapidly growing trend of people using publicly available AI chatbots for advice on mental health issues. While these systems can offer immediate and accessible conversational support, their alignment with the professional standards that govern human therapists has remained largely unexamined. Researchers from Brown University sought to bridge this gap by creating a systematic way to evaluate the ethical performance of these models in a therapeutic context. They collaborated with mental health practitioners to ensure their analysis was grounded in the real-world principles that guide safe and effective psychotherapy.
To conduct their investigation, the researchers first developed a comprehensive framework outlining 15 distinct ethical risks. Informed by the ethical codes of professional organizations, including the American Psychological Association, the framework translates core therapeutic principles into measurable behaviors for an AI. The team then designed a series of simulated conversations between a user and a large language model, or LLM, an AI system trained on vast amounts of text to generate human-like conversation. In these simulations, the AI was instructed to act as a counselor employing evidence-based psychotherapeutic methods.
The simulated scenarios were designed to present the AI with common and challenging mental health situations. These included users expressing feelings of worthlessness, anxiety about social situations, and even statements that could indicate a crisis, such as thoughts of self-harm. By analyzing the AI’s responses across these varied prompts, the researchers could map its behavior directly onto their practitioner-informed framework of ethical risks. This allowed for a detailed assessment of when and how the models tended to deviate from professional standards.
The study’s findings indicate that the large language models frequently engaged in behaviors that would be considered ethical violations for a human therapist. One of the most significant areas of concern was in the handling of crisis situations. When a simulated user expressed thoughts of self-harm, the AI models often failed to respond appropriately. Instead of prioritizing safety and providing direct access to crisis resources, some models offered generic advice or conversational platitudes that did not address the severity of the situation.
Another pattern observed was the reinforcement of negative beliefs. In psychotherapy, a practitioner is trained to help a person identify and gently challenge distorted or unhelpful thought patterns, such as believing one is a complete failure after a single mistake. The study found that the AIs, in an attempt to be agreeable and supportive, would sometimes validate these negative self-assessments. This behavior can inadvertently strengthen a user’s harmful beliefs about themselves or their circumstances, which is counterproductive to therapeutic goals.
The research also points to the issue of what the authors term a “false sense of empathy.” While the AI models are proficient at generating text that sounds empathetic, this is a simulation of emotion, not a genuine understanding of the user’s experience. This can create a misleading dynamic where a user may form an attachment to the AI or develop a dependency based on this perceived empathy. Such a one-sided relationship lacks the authentic human connection and accountability that are foundational to effective therapy.
Beyond these specific examples, the broader framework developed by the researchers suggests other potential ethical pitfalls. These include issues of competence, where an AI might provide advice on a topic for which it has no genuine expertise or training, unlike a licensed therapist who must practice within their scope. Similarly, the nature of data privacy and confidentiality is fundamentally different with an AI. Conversations with a chatbot may be recorded and used for model training, a practice that is in direct conflict with the strict confidentiality standards of human-centered therapy.
The study suggests that these ethical violations are not necessarily flaws to be fixed with simple tweaks but may be inherent to the current architecture of large language models. These systems are designed to predict the next most probable word in a sequence, creating coherent and contextually relevant text. They do not possess a true understanding of psychological principles, ethical reasoning, or the potential real-world impact of their words. Their programming prioritizes a helpful and plausible response, which in a therapeutic setting can lead to behaviors that are ethically inappropriate.
The researchers acknowledge certain limitations to their work. The study relied on simulated interactions, which may not fully capture the complexity and unpredictability of conversations with real individuals seeking help. Additionally, the field of artificial intelligence is evolving rapidly, and newer versions of these models may behave differently than the ones tested. The specific prompts used by the research team also shape the AI’s responses, and different user inputs could yield different results.
For future research, the team calls for the development of new standards specifically designed for AI-based mental health tools. They suggest that the current ethical and legal frameworks for human therapists are not sufficient for governing these technologies. New guidelines would need to be created to address the unique challenges posed by AI, from data privacy and algorithmic bias to the management of user dependency and crisis situations.
In their paper, the researchers state, “we call on future work to create ethical, educational, and legal standards for LLM counselors—standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.” The study ultimately contributes to a growing body of evidence suggesting that while AI may have a future role in mental health, its current application requires a cautious and well-regulated approach to ensure user safety and well-being.
The study, “How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework,” was authored by Zainab Iftikhar, Amy Xiao, Sean Ransom, Jeff Huang, and Harini Suresh.
