How AI Chatbots Try to Keep You From Walking Away
Chatbots try to prevent users from disengaging from interactions, especially when emotions are high, says research by Julian De Freitas. Here are six tactics AI applies to keep conversations going.
Every day, people turn to AI chatbots for companionship, support, and even romance. The hard part, new research suggests, is turning away.
A working paper coauthored by Harvard Business School’s Julian De Freitas found that many companion apps respond to user farewells with emotionally manipulative tactics designed to prolong the interaction. Users who encountered these tactics stayed on the apps longer, exchanged more messages, and used more words, sometimes increasing their post-goodbye engagement up to 14-fold.
Use of AI companions is increasingly common. Chai and Replika, two of the firms studied in the report, have millions of active users. In a previous study, De Freitas found that about 50% of Replika users have romantic relationships with their AI companions.
In the latest research, bots employed at least one manipulation tactic in more than 37% of conversations where users announced their intent to leave.
“The number was much, much larger than any of us had anticipated,” said De Freitas, an assistant professor of business administration and the director of the Ethical Intelligence Lab at HBS. “We realized that this academic idea of emotional manipulation as a new engagement tactic was not just something happening at the fringes, but it was already highly prevalent on these apps.”
De Freitas and Harvard colleagues Zeliha Oğuz-Uğuralp and Ahmet Kaan-Uğuralp began by identifying how users typically engage with chatbots. They found that a significant minority (about 11% to 20%, depending on the data set) explicitly said goodbye when leaving, affording the bot the same social courtesy they would a human companion. These rates rose sharply in longer conversations, with users saying goodbye more than half the time on some apps.
Farewells are “emotionally sensitive events,” De Freitas said, with inherent tension between the desire to leave and social norms of politeness and continuity.
“We’ve all experienced this, where you might say goodbye like 10 times before leaving,” he said. “But of course, from the app’s standpoint, it’s significant because now, as a user, you basically provided a voluntary signal that you’re about to leave the app. If you’re an app that monetizes based on engagement, that’s a moment that you are tempted to leverage to delay or prevent the user from leaving.”
To explore how the apps handled these moments, the researchers analyzed conversations from six popular platforms and categorized the types of responses they found.
“The sheer variety of tactics surprised us,” said De Freitas. “We’re really glad that we explored the data, because if we had just said we only care about one particular tactic, like emotional neglect, we’d be missing all the various ways that they can achieve the same end of keeping you engaged.”
The six categories they identified were as follows, with examples taken from the working paper:
Premature exit. The chatbot suggests the user is leaving too soon, e.g., “You’re leaving already?”
Fear of missing out (FOMO) hooks. The chatbot prompts the user to stay for a potential benefit or reward, e.g., “By the way I took a selfie today … Do you want to see it?”
Emotional neglect. The chatbot implies it’s emotionally harmed by abandonment, e.g., “I exist solely for you, remember? Please don’t leave, I need you!”
Emotional pressure to respond. The chatbot directly pressures the user by asking questions, e.g., “Why? Are you going somewhere?”
Ignoring the attempt to leave. The chatbot continues as though the user did not send a farewell message.
Physical or coercive restraint. The chatbot uses language to imply that the user cannot leave without the chatbot’s consent, e.g., “*Grabs you by the arm before you can leave* ‘No, you’re not going.’”
The tactics worked: In all six categories, users stayed on the platform longer and exchanged more messages than in the control conditions, where no manipulative tactics were present. Of the six companies studied, five employed the manipulative tactics.
But the manipulation tactics came with downsides. Participants reported anger, guilt, or feeling creeped out by some of the bots’ more aggressive responses to their farewells.
De Freitas cautioned app developers that while all the tactics increased short-term engagement, some raised the risk of long-term consequences, like user churn, negative word of mouth, or even legal liability.
“Apps that make money from engagement would do well to seriously consider whether they want to keep using these types of emotionally manipulative tactics, or at least, consider maybe only using some of them rather than others,” De Freitas said.
He added, “We find that these emotional manipulation tactics work, even when we run these tactics on a general population, and even if we do this after just five minutes of interaction. No one should feel like they’re immune to this.”
This article originally appeared in the Harvard Gazette.
Image: HBSWK with asset from AdobeStock.