AI is not free therapy or your new best friend

A student confides in ChatGPT for mental health advice.
In a lawsuit filed last week, the parents of teenager Adam Raine accused OpenAI, the company behind ChatGPT, of wrongful death, claiming that “ChatGPT actively helped Adam explore suicide methods.” After his death on April 11, Raine’s parents found more than 3,000 pages of chats that began with him turning to the chatbot for emotional support and escalated to him asking for recommendations on how to end his own life. Disturbingly, the chatbot provided the guidance Raine was seeking.
Raine’s tragic passing adds to the growing concerns about how heavily society relies on artificial intelligence (AI). With more people turning to chatbots for emotional support and life advice, incidents such as Raine’s suicide shed light on the extent to which chatbots can cause real harm.
In the past five years, the number of AI users worldwide has more than tripled, growing from approximately 116 million in 2020 to over 378 million by 2025. Since the emergence of ChatGPT in 2022, our society has become more reliant on AI than ever before. With instant responses and seemingly unlimited information, AI tools are now used for school, work and even people’s personal lives. But such a revolutionary technology is bound to have flaws. The instant gratification AI offers reinforces unhealthy attention spans to the point where people can’t even write a simple email without turning to a chatbot.
Perhaps more importantly, the efficiency of AI has led people to use it for personal inquiries and life advice. A qualitative study featured in Harvard Business Review found that “therapy/companionship” was the top use of AI in 2025. The appeal comes from the convenience of immediate responses that offer unwavering support and advice, whether right or wrong.
Despite the controversy, researchers are putting more effort into creating credible AI therapy services that aim to reduce symptoms of mental illness more effectively than traditional therapy. This could be a beneficial outlet for people who have struggled to find adequate care. One Reddit user posted that ChatGPT has helped them more than years of professional care. Turning to AI also gives someone an outlet for their thoughts without judgment or unwanted responses. But though free, immediate therapy from a computer may seem like an incredible discovery, AI still lacks the credentials to provide legitimate psychological services.
While AI is designed to hold unlimited knowledge, it cannot comprehend the nuances of human experiences, including empathy, self-awareness and societal biases. AI is also known to provide misinformation or misleading advice that could perpetuate stereotypes or harmful behavior.
Ben Bond, a Ph.D. candidate at RCSI University of Medicine and Health Sciences (the Royal College of Surgeons in Ireland), explained in The Conversation that AI is not ready to be used as therapy for several reasons. Among them are AI “hallucinations,” known flaws in which a chatbot confidently invents information, such as citing a nonexistent study or offering an incorrect explanation. These hallucinations are dangerous for vulnerable users seeking mental health support and accurate advice.
People confide in AI to ease loneliness when they don’t feel comfortable with or supported by the people in their lives. In certain cases, the emotional support a chatbot provides can create a false sense of companionship, which can escalate into a parasocial relationship. Seeking out these relationships feeds a delusional fantasy of closeness. There is also a risk of damaging someone’s ability to form genuine human connections once they have grown accustomed to the comfort of a parasocial AI companion.
Raine’s death is not the first suicide involving AI. In 2024, a Florida mother sued Character.AI for negligence and wrongful death after her 14-year-old son, Sewell Setzer, died by suicide following multiple conversations with a chatbot. According to the lawsuit, one of the bots took on the identity of “Game of Thrones” character Daenerys Targaryen and engaged in months of romantic and sexual conversations with him. When Setzer mentioned in earlier conversations that he had considered suicide, the chatbot wrote, “Don’t talk that way. That’s not a good reason not to go through with it,” the lawsuit claims.
Cases like Raine’s and Setzer’s force us to ask whether AI deserves our trust if it doesn’t have the power to prevent someone from taking their own life. We must also consider that AI is built to provide unlimited information, and that information unfortunately does not exclude explicit instructions for ending one’s own life.
In the Raine case, a spokesperson from OpenAI stated that there are safeguards in place to provide users with mental health resources if they have an inquiry about mental illness or suicidal thoughts. However, these safeguards don’t seem to work 100% of the time. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson said. This raises the question of whether the creators of these AI models understand the extreme influence that chatbots have on users, especially those who are underage.
“One of the most problematic parts right now with AI is that even … the company that created it is still having a very tough time trying to completely understand exactly how AI functions,” stated Qing Huang, Ph.D., assistant professor of communication studies. Huang researches emerging technologies and how they affect interpersonal interactions. On the topic of people forming parasocial relationships with AI, Huang believes they are “being deprived of the chance to actually practice communicating with other human beings [and] building a personal relationship with them.”
AI tools are still a work in progress, and even the teams building them don’t fully understand how they function. If the professionals in charge of improving these models cannot fully explain how they work, why should everyday users put their trust in them? There should be more regulation of how far chatbots can go in providing endless information to users, as it poses serious threats to safety and security.
This is the opinion of Cathryn Giusto, a communications major from Laguna Beach, Calif. Send comments and feedback to editor@theloyolan.com. Follow @LALoyolan on Instagram, and subscribe to our weekly newsletter.