Study finds users disclose more to AI chatbots introduced as human

People tend to share more about themselves when they believe they are interacting with a fellow human, even when that person is actually a chatbot. A new study published in Computers in Human Behavior: Artificial Humans found that participants disclosed more personal information, and rated their interactions as more emotionally supportive, when they were led to believe that the chatbot they were speaking with was human.
Chatbots are increasingly being used in mental health services, offering support for people managing anxiety, depression, and other challenges. These conversational agents are valued for their accessibility and lack of judgment. Some people may even prefer opening up to a machine, feeling it offers a safe space free from social pressure. But others see chatbots as limited in emotional understanding, potentially reducing their effectiveness in sensitive or therapeutic conversations.
There is ongoing debate over whether chatbots help or hinder emotional openness. A key challenge is that these artificial agents lack genuine empathy. Although they can mimic human conversation, they often do so by drawing on patterns in language rather than true emotional understanding.
“We were drawn to this topic from two directions that came together very naturally. My student and first author, Gabriella, was deeply interested in how people interact with chatbots in everyday life, especially as more of these systems are used in the context of mental health. At the same time, in my previous work, I had found that people tend to disclose more to the human partner than to an artificial one (e.g., a robot or a chatbot) when they interact with them for the first time,” said Guy Laban, an assistant professor at Ben-Gurion University of the Negev.
“That pattern made us wonder: is it really about the actual human sitting there, or can a simple framing, telling people they’re talking to a ‘human’ versus a ‘chatbot/AI,’ when everything else is identical and there are no visual or bodily cues at all, already change how much they open up and how they perceive the interaction? This question about how a tiny shift in expectations might shape very personal, emotional sharing is what motivated us to design the current study.”
For their study, the researchers designed a controlled experiment where 22 English-speaking adults interacted with two chatbots. Both chatbots used the same language and followed the same conversation script, but each was introduced differently. One was described as a chatbot named “Chatbot D12.” The other was introduced as a human research assistant named “Sam,” though in reality, both were identical chatbots.
Each participant interacted with both chatbots in random order. The chatbots asked questions about neutral but personal topics, such as physical health, work, social life, and personal goals. The researchers then measured how much participants shared (in number of words), how emotionally expressive their answers were (using sentiment analysis), and how they rated the conversation in terms of comfort, friendliness, trust, and the extent to which the chatbot seemed to have a mind.
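For readers curious how such measures can be computed, below is a minimal sketch in Python of the two quantitative measures described above: response length in words and a per-answer sentiment score. The article does not specify the researchers' exact tooling, so the VADER analyzer and the function name here are illustrative assumptions rather than the study's actual pipeline.

```python
# Illustrative sketch only: word-count and sentiment measures for free-text answers.
# Assumes the third-party "vaderSentiment" package (pip install vaderSentiment);
# the study's own analysis tools are not described in the article.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer


def disclosure_measures(responses):
    """Return (word_count, compound_sentiment) for each participant answer."""
    analyzer = SentimentIntensityAnalyzer()
    measures = []
    for text in responses:
        word_count = len(text.split())               # length of disclosure, in words
        sentiment = analyzer.polarity_scores(text)   # dict with neg/neu/pos/compound scores
        measures.append((word_count, sentiment["compound"]))
    return measures


if __name__ == "__main__":
    # Hypothetical example answers, not taken from the study.
    answers = [
        "Work has been stressful lately, but I am trying to exercise more.",
        "I feel great about my social life and my goals for this year.",
    ]
    for words, score in disclosure_measures(answers):
        print(f"{words} words, compound sentiment {score:+.3f}")
```

A longer word count would stand in for greater disclosure, while the compound score's magnitude would reflect how strongly positive or negative an answer reads.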
The researchers found that participants shared more with the chatbot introduced as human. On average, people wrote longer responses and also felt they had disclosed more during that interaction. They also rated Sam, the chatbot believed to be human, as more capable of acting and feeling, what researchers refer to as “agency” and “experience.” Participants also described the interaction with Sam as more comforting.
A particularly notable detail is that most participants suspected that both agents were actually chatbots. Still, they behaved differently based on how the chatbot was framed. This suggests that people respond automatically to social cues—such as being told they are speaking to a person—even if they do not fully believe those cues.
“One of the most surprising findings was that participants disclosed more and felt more comforted by a chatbot introduced as a human, even though almost everyone knew they were still talking to a chatbot. This means the effect wasn’t driven by deception or belief that the chatbot was human, but rather by the framing itself, how the chatbot was introduced and named. That subtle change alone was enough to activate more social and affective responses. Therefore, people’s behaviour toward chatbots can be shaped not just by what the chatbot does, but by what they expect it to be, showing how powerful simple context cues are in guiding our interactions with AI.”
Not all the differences favored the chatbot presented as a human. Although participants disclosed less to Chatbot D12, they rated it as slightly friendlier. Their answers to D12 also carried stronger sentiment, meaning they expressed more intense emotions, both positive and negative. Despite these differences, participants did not rate either chatbot as significantly more trustworthy, and both were rated similarly in terms of overall interaction quality.
“When framing a chatbot more like a person, by giving it a human name and introducing it as a human, people tend to open up more, attribute social traits to it, and feel more comforted when speaking with it, even when they suspect it’s still a bot. But there’s a catch: when a ‘human-like’ chatbot doesn’t fully meet our social expectations, people judge it as less friendly or trustworthy. So, design cues that make chatbots feel human can encourage self-disclosure, but they need to be balanced with transparency and realistic expectations.”
“While these effects aren’t dramatic in magnitude, they reveal something important: subtle framing alone can change how socially people respond to a chatbot/AI, even when they know it isn’t human. This suggests that the way chatbots are presented can meaningfully shape users’ emotional engagement and self-disclosure in real-world applications, like mental health AI support, customer support, or educational settings. In other words, the study’s practical significance lies in showing that framing and design choices can have real social and emotional consequences in human–AI communication.”
The researchers also highlight the importance of ethical considerations. Designing chatbots that appear more human could support emotional expression, but intentionally misleading users raises concerns, especially in mental health contexts. Deception, even subtle, can harm trust and may not be appropriate when users are vulnerable.
“A key point to clarify is that the study does not advocate for deceiving users or pretending that chatbots are human. The results show that social framing can change how people respond, but this effect occurred even when participants knew it wasn’t human.”
As with all research, the new study has some limitations. The sample size was small, with only 22 participants, and the conversations were short and took place in a controlled online setting. It is unclear whether the same effects would occur in longer or more emotionally intense conversations. Also, while people disclosed more to the chatbot they believed was human, their behavior may have been influenced by the novelty of the task or a desire to comply with perceived expectations.
“We want to scale this up with larger and more diverse samples, bring it into real-world settings (including sensitive topics), and test with various LLM-based agents (via LEXI, Large Language Models Experimentation Interface, an open source graphical interface I developed for such behavioural experiments online, see https://doi.org/10.1145/3687272.3688296) and embodied agents (like social robots) to examine the extent to which socio-affective linguistic and identity cues can responsibly affect healthy disclosure, as well as the introduction of these agents in well-being settings. Therefore, we will also examine how these interactions make people feel and reappraise their emotions over time (see e.g., https://doi.org/10.1007/s12369-024-01207-0, https://doi.org/10.48550/arXiv.2503.18243).
“This project is part of a series of studies in which we examined how people self-disclose and socially share their emotions with artificial agents and robots, why people do that, and how it makes them feel (see a review paper on the matter, including a synthesis of some of the studies conducted: https://ieeexplore.ieee.org/abstract/document/10700607).”
The study, “Knowledge cues to human origins facilitate self-disclosure during interactions with chatbots,” was authored by Gabriella Warren-Smith, Guy Laban, Emily-Marie Pacheco, and Emily S. Cross.
