A team at Bielefeld University in Germany invited participants into the lab and asked them to step into the shoes of its robotic bartender, called James.
The participants looked through the robot's eyes and ears and selected actions from its repertoire.
"We asked ourselves how a human bartender solves the problem and whether a robotic bartender can use similar strategies," said lead researcher Jan de Ruiter, from Bielefeld University.
"We teach James how to recognise if a customer wishes to place an order," said de Ruiter.
The data used in the lab study was recorded during a trial session with the bartending robot James at a mock bar in Munich.
For the trial, customers were asked to order a drink with James and to rate their experience afterwards. In the lab, the participants observed on the screen what the robot had recognised at the time.
For example, they were shown whether a customer had said something ("I would like a glass of water, please") and how confident the robot's speech recognition had been.
"This is similar to selecting an action from a character's special abilities in a computer game. For example, they could ask which drink the customer would like ("What would you like to drink?"), turn the robot's head towards the customer, serve a drink - or just do nothing," de Ruiter said.
"Customers wish to place an order if they stand near the bar and look at the bartender. It is irrelevant if they speak," said Sebastian Loth, co-author of the study.
"This eye contact is a visual handshake. It opens a channel such that both parties can speak," he said.
Once it is established that the customer wishes to place an order, the body language becomes less important.
"At this point, the participants focussed on what the customer said. For example, if the camera lost the customer and the robot believed the customer was 'not visible', the participants ignored this visual information," Loth said.
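The two-phase strategy the researchers describe can be sketched in code. This is a hypothetical illustration only: the function names, parameters, and return values are assumptions for clarity, not the study's actual implementation.

```python
from typing import Optional


def wants_to_order(near_bar: bool, looking_at_bartender: bool) -> bool:
    """Phase 1: intent is signalled by position and gaze.

    As Loth notes, whether the customer is speaking is irrelevant here.
    """
    return near_bar and looking_at_bartender


def choose_action(ordering: bool, speech: Optional[str], visible: bool) -> str:
    """Phase 2: once an order is under way, rely on speech.

    Unreliable visual signals (e.g. the camera reporting the customer
    as not visible) are ignored, mirroring the participants' behaviour.
    """
    if not ordering:
        return "wait"
    if speech:
        return f"process: {speech}"
    return "ask what they would like"
```

The key design point mirrors the "visual handshake" finding: body language opens the interaction, after which the speech channel takes over.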