
Giving human touch to chatbots may not help: Study

Press Trust of India, Washington
Last Updated : Apr 19 2019 | 4:40 PM IST

Giving chatbots human names or adding humanlike features to their avatars may not be enough to win over users if the chatbot fails to maintain a conversation, researchers, including one of Indian origin, suggest.

According to researchers at The Pennsylvania State University in the US, those humanlike features may create a backlash when the chatbot turns out to be less responsive.

In the study, chatbots that had human features -- such as a human avatar -- but lacked interactivity disappointed the people who used them.

However, people responded better to a less-interactive chatbot that did not have humanlike cues, said S Shyam Sundar, a professor at Penn State.

High interactivity is marked by swift responses that match a user's queries and feature a threaded exchange that can be followed easily, according to Sundar.

"People are pleasantly surprised when a chatbot with low anthropomorphism -- fewer human cues -- has higher interactivity," said Sundar.


"But when there are high anthropomorphic visual cues, it may set up your expectations for high interactivity -- and when the chatbot doesn't deliver that -- it may leave you disappointed," he said.

On the other hand, improving interactivity may be more than enough to compensate for a less-humanlike chatbot.

Even small changes in the dialogue, like acknowledging what the user said before providing a response, can make the chatbot seem more interactive, said Sundar.
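As a rough illustration of that point (a hypothetical sketch, not code from the study), a reply routine can echo the user's query before giving its answer:

```python
# Illustrative sketch only: two ways a chatbot might phrase the same answer.
# The function names and messages are invented for this example.

def reply_plain(user_message: str, answer: str) -> str:
    # Low-interactivity style: returns the answer and ignores what the user said.
    return answer

def reply_acknowledging(user_message: str, answer: str) -> str:
    # Acknowledges the user's query before answering, which can feel more interactive.
    return f'You asked about "{user_message.strip()}". {answer}'

if __name__ == "__main__":
    question = "store opening hours"
    answer = "We are open 9 am to 6 pm, Monday to Saturday."
    print(reply_plain(question, answer))
    print(reply_acknowledging(question, answer))
```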

"In the case of the low-humanlike chatbot, if you give the user high interactivity, it's much more appreciated because it provides a sense of dialogue and social presence," said lead author of the study, Eun Go, a former doctoral student at Penn State and currently assistant professor at Western Illinois University.

Because people may be leery of interacting with a machine, developers typically give their chatbots human names -- Apple's Siri, for example -- or program a humanlike avatar to appear when the chatbot responds to a user.

The study, published in the journal Computers in Human Behavior, also found that merely mentioning whether a human or a machine is involved -- that is, providing an identity cue -- guides how people perceive the interaction.

"Identity cues build expectations," said Go said.

"When we say that it's going to be a human or chatbot, people immediately start expecting certain things."

