The robot can refill your coffee cup and hold the door open for you, in addition to performing several other tasks.
The robot, developed at the Personal Robotics Lab at Cornell University, has learned to foresee human action and adjust accordingly.
The robot was programmed to refill a person's cup when it was nearly empty. To do this, the robot must plan its movements in advance and then follow the plan. But if a human sitting at the table happens to raise the cup and drink from it, the robot might pour a drink into a cup that isn't there.
Hema S Koppula, a Cornell graduate student in computer science, and Ashutosh Saxena, assistant professor of computer science, will describe their work at the International Conference on Machine Learning in June in Atlanta.
From a database of 120 3-D videos of people performing common household activities, the robot has been trained to identify human activities by tracking the movements of the body - reduced to a symbolic skeleton for easy calculation - breaking them down into sub-activities such as reaching, carrying, pouring or drinking, and associating those activities with the objects involved.
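To make the idea concrete, here is a minimal sketch of how such training data might be represented: tracked body movement reduced to skeleton joints, and segments of a recording tagged with a sub-activity and its associated objects. The data structures and labels are illustrative assumptions, not the lab's actual code.

```python
# Illustrative sketch only: structures and labels are assumed, not the lab's actual code.
from dataclasses import dataclass, field

@dataclass
class SkeletonFrame:
    # Simplified symbolic skeleton: joint name -> (x, y, z) position
    joints: dict = field(default_factory=dict)

@dataclass
class Segment:
    frames: list          # consecutive SkeletonFrame objects
    sub_activity: str     # e.g. "reaching", "carrying", "pouring", "drinking"
    objects: list         # objects associated with the sub-activity, e.g. ["cup"]

# One training example: a household activity broken into labelled sub-activities
make_coffee = [
    Segment(frames=[], sub_activity="reaching", objects=["cup"]),
    Segment(frames=[], sub_activity="carrying", objects=["cup"]),
    Segment(frames=[], sub_activity="pouring",  objects=["kettle", "cup"]),
    Segment(frames=[], sub_activity="drinking", objects=["cup"]),
]
```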
"We extract the general principles of how people behave. Drinking coffee is a big activity, but there are several parts to it," said Saxena.
The robot builds a "vocabulary" of such small parts that it can put together in various ways to recognise a variety of big activities, he explained.
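The vocabulary idea can be pictured as a small set of sub-activities that recombine into different larger activities. The sketch below is a hedged illustration of that principle; the activity names, sequences and matching rule are invented examples, not the published method.

```python
# Hedged illustration of the "vocabulary" idea: the same small sub-activities
# recombine into different larger activities. All names and sequences are examples.
SUB_ACTIVITY_VOCAB = {"reaching", "moving", "placing", "pouring", "drinking", "opening"}

ACTIVITIES = {
    "drinking coffee":  ["reaching", "moving", "pouring", "drinking"],
    "taking medicine":  ["reaching", "opening", "moving", "drinking"],
    "stacking objects": ["reaching", "moving", "placing"],
}

def recognise(observed_sub_activities):
    """Pick the big activity whose sub-activity sequence best matches what was seen."""
    def overlap(sequence):
        return sum(1 for a, b in zip(sequence, observed_sub_activities) if a == b)
    return max(ACTIVITIES, key=lambda name: overlap(ACTIVITIES[name]))

print(recognise(["reaching", "moving", "pouring"]))  # -> "drinking coffee"
```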
Observing a new scene with its Microsoft Kinect 3-D camera, the robot identifies the activities it sees, considers what uses are possible with the objects in the scene and how those uses fit with the activities.
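The anticipation step can be sketched, in a much-simplified form, as combining how sub-activities tend to follow one another with what the visible objects can be used for (their affordances). The tables and function below are invented for illustration and are not the published algorithm.

```python
# Simplified sketch of anticipation, not the published algorithm. All tables are invented.
TRANSITIONS = {   # likelihood of the next sub-activity given the current one
    "reaching": {"carrying": 0.6, "drinking": 0.3, "placing": 0.1},
    "carrying": {"pouring": 0.5, "placing": 0.4, "drinking": 0.1},
}

AFFORDANCES = {   # which sub-activities each object supports
    "cup":    {"drinking", "pouring", "carrying", "placing"},
    "kettle": {"pouring", "carrying"},
}

def anticipate(current, objects_in_scene):
    """Rank likely next sub-activities, keeping only those the visible objects afford."""
    afforded = set().union(*(AFFORDANCES.get(o, set()) for o in objects_in_scene))
    candidates = {nxt: p for nxt, p in TRANSITIONS.get(current, {}).items() if nxt in afforded}
    return sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)

print(anticipate("reaching", ["cup"]))  # -> [('carrying', 0.6), ('drinking', 0.3), ('placing', 0.1)]
```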
In tests, the robot made correct predictions 82 per cent of the time when looking one second into the future, 71 per cent for three seconds and 57 per cent for 10 seconds. The robot was also more accurate in identifying current actions when it was running the anticipation algorithm.