Chatbots are inexpensive but can cause customer irritation

Chatbots are expected to account for 95 per cent of online customer service interactions by 2025, but research from QUT has found that when they fail to meet customer expectations they frustrate users, reducing the likelihood of a purchase and even provoking anger.

Associate Professor Paula Dootson of the QUT Business School, a Senior Research Fellow at the QUT Centre for the Digital Economy and a Fellow of the QUT Centre for Future Enterprise, is co-author of Chatbots and service failure: When does it lead to customer aggression, just published in the Journal of Retailing and Consumer Services.

The study found that customers are less aggressive when a chatbot service fails if they are warned early in the interaction that a human agent is available if needed, and suggests that companies using chatbots should rewrite their scripts to this end.

“The current chatbot market is valued at $17.7 billion and is projected to reach $102.29 billion by 2026. Most people will have encountered a chatbot at some point as they are now widely used in almost every industry,” said Professor Dootson.

“Their ability to understand natural language and engage in conversation allows chatbots not only to provide customer service, but also to improve the customer experience by reducing customer effort and freeing customers to spend their time more efficiently elsewhere.

Associate Professor Paula Dootson. Photo by Philip Johannes Van Den Berg

“Lego used a chatbot called Ralph to help customers navigate its product portfolio and choose the perfect gift to buy.

“Despite the economic benefits for companies, chatbots handling service interactions often fail to meet customer expectations, can degrade the customer service experience, and can lead to service failures.

“In Japan, a virtual hotel assistant robot was ‘fired’ in 2019 for repeated malfunctions such as mistaking guests’ snoring for voice commands and waking them, a critical failure in service delivery.

“Beyond speech recognition issues, the scripts that chatbots rely on can become problematic when the chatbot misinterprets a request, making it difficult for it to respond to the customer in a meaningful way.

“Users may then feel frustrated and annoyed, become reluctant to use chatbots in the future, be less likely to complete the purchase, or even switch to a different service provider altogether.”

Her study, conducted alongside Texas A&M University-Corpus Christi assistant professor Yu-Shan (Sandy) Huang, examines how artificial intelligence technology is transforming the way services are delivered and opening up new sources of service failure.

“We found that, in the context of a chatbot service failure, notifying the customer late in the service interaction that a human agent is available to assist can increase the likelihood of customer aggression,” Professor Dootson said.

“There is still a historical expectation that real humans will be available to help customers when they encounter a technology-related service failure, but it remains unclear how the availability of human agents affects customer responses to a chatbot-induced service failure.

“Our results show that disclosing the option to engage with a human agent late in the chatbot interaction, after the service failure has occurred, increases the likelihood of emotion-focused coping, which can lead to customer aggression.

“Unexpectedly, however, we found that when customers perceived high levels of involvement, the relationship reversed: highly involved customers were more likely to respond with emotion-focused coping and aggression to a chatbot service failure when they were offered the chance to interact with a human agent early (versus late) in the service interaction.

“This could be because customers with a higher level of involvement often value the relationship-building process of co-creating services and may prefer interacting with a human agent. Disclosing the option to interact with a human agent early on can therefore signal that a service provider has the human resources to support customers, but does not value them enough to initiate the interaction in this way.”

Professor Dootson said the findings offer several practical implications for managing encounters with chatbot services.

“This study provides companies with evidence that customers respond with aggression to late disclosure of human intervention when they encounter chatbot service failures,” she said.

“In turn, service providers should develop chatbot scripts that disclose the possibility of interacting with a human agent early in the customer-chatbot interaction, alerting customers to potential human intervention before a chatbot service failure occurs.”

Read the full paper online in the Journal of Retailing and Consumer Services.

/University release. This material from the originating organization/author(s) may be of a point-in-time nature and may have been edited for clarity, style and length. The views and opinions expressed are those of the author(s).