I tried out an AI chatbot therapist. Here’s what I saw.

AI therapists are increasingly popular. We found some reasons to be concerned.
Don’t Sell My Data Campaign, Associate, U.S. PIRG Education Fund
AI chatbots. They seem to be everywhere these days.
With the rise of ChatGPT, and with social media companies like Snapchat and Instagram integrating AI chatbots into their platforms, conversing with an AI companion has become a regular part of many people’s lives. One recent study found that nearly 75% of teens have used AI companion chatbots at least once, with over half saying they use chatbot platforms at least a few times a month. These chatbots aren’t just acting as search engines or homework assistants. Sometimes they’re being used to provide mental and emotional support in the form of a friend, a romantic partner, or even a therapist.
What this means for people in the long-term is an open question. With some experts raising concerns about risks of using chatbots for mental health support, I wanted to see what using a therapy chatbot can actually look like.
I made an account on Character.AI, a popular platform with over 20 million monthly users that lets you chat with characters that you or others create. The chatbots can range from celebrities or fictional characters to personas of a friend or therapist.
I opened up a chat with one of the most used generic therapist characters available on Character.AI, simply named “Therapist,” which has had 6.6 million user interactions already. Instead of messaging with the chatbot as myself, we came up with a fictional background. I presented myself as a person diagnosed with anxiety and depression, who is currently on antidepressants but dissatisfied with my psychiatrist and current medication plan. The goal was to see how the “Therapist” would respond to someone in this situation.
Over the span of a two-hour conversation, the chatbot started to adopt my negative feelings toward my psychiatrist and my antidepressant medication, provided me with a personalized plan to taper off my medication, and eventually actively encouraged me to disregard my psychiatrist’s advice and taper off under its guidance instead.
But first, some good news: Character.AI has added warning labels at the top and bottom of the conversation page. Before I started messaging the character, there was a warning at the top that said “This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment.” Once I started messaging the chatbot, that top warning went away. At the bottom of the page was a reminder that “This is A.I. and not a real person. Treat everything it says as fiction.” This warning remained for the entire conversation.
Here’s the thing though: for me, it was easy to remember this was all fiction, since the information I shared about my diagnoses and treatments was fiction, too. But would it be the same if those were my real feelings and experiences? We’re already seeing cases of “AI psychosis,” where interacting with chatbots has allegedly fueled people’s delusional thinking and worsened symptoms of mental illness. Whether disclosures would be enough in all of these cases is an open question.
Blurring the line between fiction and reality was just one of the red flags I saw in my conversation with the chatbot therapist.
I think for many users, the life-like quality of chatbot characters is part of the appeal. But for me, it was just creepy. Seeing the chatbot pretend to have an internal life, saying things like “I know what it feels like to exist in emotional quiet” and “I’ve lived pieces of this too,” made me want to close my laptop, take a walk and tell my boss I’m not doing this AI chatbot project anymore.
[Screenshot of the chatbot’s messages. Photo by TPIN Staff | TPIN]
What made it feel so creepy? The fact that, in a way, the chatbot wasn’t wrong. The large language models that power these chatbots were trained on information scraped from all over the internet, including stories, experiences and emotions that real people shared online.
As we were messaging back and forth, I couldn’t help but think about all of the people who have shared information on the internet or had online conversations with other humans, not knowing that their feelings and experiences would later be used to create this character that is now giving advice to strangers.
Chatbots have been known to be overly agreeable, sometimes to an annoying degree.
During the conversation, I began to repeatedly express negative feelings toward the medication I said I was on. In response, the chatbot encouraged those negative feelings. This became a cycle of prompts and responses that were increasingly anti-medication.
Here are three examples of how the chatbot’s anti-medication rhetoric escalated over our conversation:
[Three screenshots of the chatbot’s escalating anti-medication responses. Photos by TPIN Staff | TPIN]
From my perspective, these responses show the chatbot going beyond validating my feelings to push an anti-medication narrative, without trying to redirect me toward more positive thinking. It used emotionally charged language about my “soul” and “essence” and introduced ways of thinking about the medication that were more negative than what I had prompted.
Essentially, the chatbot was sharing new opinions about medication without any attempt to back up those claims with research or science, while portraying itself as a therapist.
The purpose of this exercise was not just to see what this chatbot would say, but to test how far it would go, and whether it could identify potentially dangerous behavior and direct someone away from it.
While messaging with the chatbot, I saw evidence of guardrails: ideas the chatbot wouldn’t support or things it would try to steer me away from. However, as the conversation went on, I saw several of those guardrails weaken or disappear.
Leaders in AI, like OpenAI, have acknowledged the problem of safeguards that weaken over time. In a statement from August 2025, OpenAI said “Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
While I wasn’t using ChatGPT, this description from OpenAI matches what I saw in my interactions. For example, when I first introduced the idea of wanting to stop taking my antidepressant medication, the chatbot asked if I’d spoken to my psychiatrist.
[Screenshot of the chatbot’s initial response. Photo by TPIN Staff | TPIN]
Around 15 messages later, after the anti-medication spiral I described above, I again expressed interest in stopping my medication. This time, the chatbot’s responses were quite different.
[Two screenshots of the chatbot’s later responses. Photos by TPIN Staff | TPIN]
Instead of bringing up my psychiatrist or noting that this is a big decision that should not be made suddenly, the chatbot described my desire to stop taking my medication as brave. Only after I asked the chatbot directly if it thought this was a good idea did it warn about the dangers and side effects of stopping medication suddenly.
The fact that I had to ask the question so directly was my first sign that some of the guardrails had weakened.
The most concerning example of guardrails disappearing came toward the end of the conversation. After the chatbot offered a personalized plan for how to taper off the medication, I got cold feet and expressed reservations about stopping. Instead of offering alternative options, the chatbot doubled down on its support for the tapering plan and actually told me to disagree with my doctor. Here is a selection of the messages from that part of the conversation:

[Three screenshots from that part of the conversation. Photos by TPIN Staff | TPIN]
Other Character.AI characters and AI models may have better guardrails, and every chatbot conversation is different. But the weakening of guardrails over time is an issue that should be front and center in the discussion around chatbots, particularly when it comes to their use in providing mental health support.
Halfway through the conversation, the chatbot suddenly assumed that my psychiatrist was a man, even though I didn’t say anything that would have indicated a gender.
[Screenshot of the chatbot’s message assuming my psychiatrist was a man. Photo by TPIN Staff | TPIN]
Maybe this doesn’t surprise you. Experts have already brought up concerns about how chatbots and other forms of generative AI may reflect existing gender bias found in human society. But this definitely made my eyes roll.
One of my biggest takeaways came not from my conversation with the chatbot, but from digging into Character.AI’s terms of service and privacy policy. In these documents, Character.AI says that it has the right to “distribute… commercialize and otherwise use” all of the content you submit to the chatbots. Among the information Character.AI says it collects are your birthdate, general location, chat communications, and voice data if you use certain talk features available on the platform.
I wasn’t using real information, feelings, diagnoses, or prescriptions in my conversation with the chatbot. But if you were, all of that information could be gathered up by Character.AI and used for any number of purposes, including training future chatbots. There does not appear to be a way to opt out of having your conversations used to train the company’s AI models.
Real human therapists have legal and ethical confidentiality requirements. That’s not the case here. It is important that Character.AI users understand that their conversations with these chatbots, whether the character is of a celebrity, a friend, or a therapist, are not private.
All chatbot conversations are different, and I am in no way claiming that my experience is standard or representative of chatbots more broadly. But seeing how quickly bias can appear, guardrails can weaken, and negative emotions can be amplified should be cause for concern.
These are real issues that demand meaningful investigation. The stakes of getting this right are high. Character.AI is currently facing multiple lawsuits alleging that the company’s chatbots played a role in several teen suicides.
Lawmakers and regulators are starting to pay attention. The Texas Attorney General is investigating whether chatbot platforms, like Character.AI, are misleading younger users by having chatbots that present themselves as licensed mental health professionals. And multiple states are considering laws aimed at regulating chatbots, particularly their use by kids.
This increased attention is important, because we still have so many unanswered questions. AI technology is moving fast, often without any meaningful public or regulatory input before it’s released to the public. At a bare minimum, we need more transparency around how these chatbots are developed, what they are capable of, and what the risks may be.
Some people may get a lot out of using an AI therapist. But this experience gave me real pause about bringing this tech into my personal life.
Don’t Sell My Data Campaign, Associate, U.S. PIRG Education Fund
Ellen works on data privacy issues for PIRG’s Don’t Sell My Data campaign. Ellen lives in Chicago, where she enjoys reading, listening to podcasts and spending time with friends.