Chatbots are on the rise, but customers still trust human agents more
Vivek Astvansh, Associate Professor of Quantitative Marketing and Analytics, McGill University
Vivek Astvansh does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
McGill University provides funding as a member of The Conversation CA.
McGill University provides funding as a member of The Conversation CA-FR.
Customers contact companies regularly to purchase products and services, inquire about orders, make payments and request returns. Until recently, the most common way for customers to contact companies was through phone calls or by interacting with human agents via company websites and mobile apps.
The advent of artificial intelligence (AI) has seen the proliferation of a new kind of interface: chatbots. A chatbot is an intelligent software program that can carry out two-way conversations with customers.
Spurred by the potential of chatbots to communicate with customers round-the-clock, companies are increasingly routing customers to chatbots. As a result, the worldwide chatbot market has grown from US$370 million in 2017 to about US$2.2 billion in 2024.
As these tools become more embedded in customer service systems, understanding customer preferences and behaviours is crucial.
Despite the enthusiasm on the business side for chatbots, customers are far less convinced. A recent survey found that 71 per cent of customers prefer interacting with a human agent rather than a chatbot. Sixty per cent of customers also report that chatbots often fail to understand their issue.
Underlying these preferences is a broader skepticism about AI, as the majority of customers report low trust in it.
Most companies today use chatbots as the first line of customer support. Only when a chatbot fails to provide the necessary information or a customer asks to speak with someone does the conversation shift to a human agent.
While efficient, this one-size-fits-all approach may be sub-optimal because customers may prefer a human agent for some types of services and a chatbot for others.
For example, a recent survey found 47 per cent of Canadians are comfortable letting a company use their purchase history for marketing, but only nine per cent are comfortable letting the company use their financial information.
To better understand how customers actually interact with chatbots versus human agents, I partnered with a large North American retailer and analyzed over half a million customer service interactions between customers and either agents or chatbots.
I used machine learning methods to conduct three analyses on the chat transcripts.
The first focused on why customers reach out to customer service in the first place. I found most inquiries fell into six main categories: orders, coupons, products, shipping, account issues and payments. Customers rarely turned to chatbots for questions related to shipping or payments, seemingly preferring human agents when their issue involved more detailed or sensitive information.
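For readers curious what this kind of categorization might look like in practice, here is a minimal sketch that groups sample inquiries into topics using TF-IDF features and k-means clustering. It is not the study's actual pipeline: the six category names come from the findings above, while the example messages, the model choice and the parameters are illustrative assumptions.

```python
# Illustrative sketch (not the study's pipeline): cluster customer messages
# into topic groups using TF-IDF features and k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Made-up example inquiries, one per category found in the study:
# orders, coupons, products, shipping, account issues and payments.
messages = [
    "Where is my order? It has not arrived yet.",
    "My coupon code is not working at checkout.",
    "Does this jacket come in a larger size?",
    "The shipping address on my parcel is wrong.",
    "I cannot log in to my account.",
    "I was charged twice for the same payment.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(messages)

# Six clusters, matching the six inquiry categories reported above.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

for message, label in zip(messages, labels):
    print(label, message)
```

On a real transcript corpus, the cluster labels would then be inspected and named by a human, which is how topic groups like "orders" or "payments" typically emerge from unlabelled chat data.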
The second analysis measured how closely the language used by customer service agents — both human and bot agents — matched the language of the customers they were interacting with. It found human agents showed a higher degree of linguistic similarity to customers than chatbots did.
This result was unexpected. Given the sophistication of today’s AI, I had anticipated chatbots would be able to closely mimic customer language. Instead, the findings suggest human agents are better able to follow customers’ varied and dynamically changing language use.
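To make the idea of linguistic similarity concrete, here is a rough sketch of one way such a score could be computed: the cosine similarity between TF-IDF vectors of a customer's and an agent's messages. Both the metric and the sample messages are assumptions for illustration, not necessarily the measure used in the study.

```python
# Rough sketch: score how closely an agent's wording tracks a customer's
# by comparing TF-IDF vectors of their messages with cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

customer_text = "hi, my order still hasn't shipped, can you check what's going on?"
agent_text = "Sure, let me check why your order hasn't shipped yet."

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([customer_text, agent_text])

# A value near 1 means the two sides use very similar wording;
# a value near 0 means they share almost no vocabulary.
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"linguistic similarity: {similarity:.2f}")
```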
The third analysis tested the thesis that similarity breeds liking: the idea that human agents' linguistic similarity with customers should increase customers' engagement.
I measured customer engagement by the average number of seconds between a customer’s consecutive messages during a chat. The results show that when human agents displayed higher linguistic similarity, customers responded more quickly and frequently. The more the customer felt “understood,” the more engaged they were.
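The engagement measure itself is simple to reproduce. Below is a minimal sketch of the calculation described above, the average number of seconds between a customer's consecutive messages in a chat; the timestamps are made up for illustration.

```python
# Minimal sketch of the engagement measure: the average number of seconds
# between a customer's consecutive messages in one conversation.
from datetime import datetime

# Hypothetical timestamps of a single customer's messages in one chat.
customer_timestamps = [
    datetime(2024, 5, 1, 10, 0, 5),
    datetime(2024, 5, 1, 10, 0, 40),
    datetime(2024, 5, 1, 10, 2, 10),
]

gaps = [
    (later - earlier).total_seconds()
    for earlier, later in zip(customer_timestamps, customer_timestamps[1:])
]

# A lower average gap indicates faster responses, i.e. higher engagement.
avg_gap = sum(gaps) / len(gaps)
print(f"average seconds between messages: {avg_gap:.1f}")
```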
Based on these findings, I offer three recommendations to companies. First, companies should identify the reason behind each customer inquiry and use that reason to decide whether to route the customer to a chatbot or a human agent.
Second, both chatbots and human agents should be trained to adapt their language and communication style to match that of the customer. For human agents, this kind of mirroring may come naturally, but for chatbots, it must be programmed.
My research shows that customers are more engaged when they feel that the agent they are chatting with understands them and communicates in a similar way. Doing this will keep customers engaged and lead to more effective and efficient interactions.
Third, businesses should ask technology companies for evidence of how their chatbots compare with human agents on efficiency and customer satisfaction. Only if those metrics exceed a clear threshold should companies consider deploying chatbots.
Customers want to feel understood and supported — and for now, that often still means talking to a real person. Rather than seeing chatbots as a wholesale replacement, companies should treat them as part of a hybrid approach that respects customer preferences and aligns the right tool with the right task.