Understanding when and where to pour a beer or knowing when to offer assistance opening a refrigerator door can be difficult for a robot because of the many variables it encounters while assessing the situation. Researchers from Cornell's Personal Robotics Lab have developed a solution to this problem.
Gazing intently with a Microsoft Kinect 3-D camera and using a database of 3-D videos, the Cornell robot identifies the activities it sees, considers what uses are possible with the objects in the scene and determines how those uses fit with the activities.
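To make that pipeline concrete, here is a minimal illustrative sketch in Python of how observed objects' possible uses (their "affordances") might be matched against candidate activities. The object names, affordance labels and scoring rule are entirely hypothetical, not the Cornell system's actual model:

    # Hypothetical sketch: scoring candidate activities from the
    # affordances of objects seen in the scene. All names and the
    # scoring rule are illustrative only.

    # Affordances: what each object in the scene can be used for.
    AFFORDANCES = {
        "cup": {"pourable-into", "drinkable-from"},
        "kettle": {"pourable-from"},
        "fridge": {"openable"},
    }

    # Each high-level activity lists the affordances it relies on.
    ACTIVITY_MODELS = {
        "drinking_coffee": {"pourable-from", "pourable-into", "drinkable-from"},
        "getting_food": {"openable"},
    }

    def score_activities(objects_in_scene):
        """Score each activity by the fraction of its required
        affordances available among the observed objects."""
        available = set()
        for obj in objects_in_scene:
            available |= AFFORDANCES.get(obj, set())
        return {activity: len(required & available) / len(required)
                for activity, required in ACTIVITY_MODELS.items()}

    print(score_activities(["cup", "kettle"]))
    # {'drinking_coffee': 1.0, 'getting_food': 0.0}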
"We extract the general principles of how people behave. Drinking coffee is a big activity, but there are several parts to it," said Ashutosh Saxena, Cornell professor of computer science and co-author of a new study tied to the research.
The robot builds a "vocabulary" of such small parts that it can put together in various ways to recognise a variety of big activities, he explained.
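A rough sketch of that compositional idea, again with made-up sub-activity labels rather than the study's actual vocabulary: each big activity is a sequence of reusable small parts, and observing the first few parts already narrows down, and thus anticipates, the larger activity under way:

    # Hypothetical sketch of a sub-activity "vocabulary": big
    # activities are defined as sequences of small, reusable parts.
    # Labels are illustrative, not the paper's actual vocabulary.

    VOCABULARY = {
        "drinking_coffee": ["reach_cup", "grasp_cup", "lift_cup", "drink"],
        "pouring_beer":    ["reach_bottle", "grasp_bottle", "tilt_bottle"],
    }

    def matching_activities(observed):
        """Return activities whose sub-activity sequence begins with
        the sub-activities observed so far."""
        n = len(observed)
        return [name for name, parts in VOCABULARY.items()
                if parts[:n] == observed]

    # After seeing only the first two small parts, the robot can
    # already identify the larger activity in progress.
    print(matching_activities(["reach_cup", "grasp_cup"]))
    # ['drinking_coffee']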
"Even though humans are predictable, they are only predictable part of the time," Saxena said.
"The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond," he said.
Saxena will be joined by Cornell graduate student Hema S Koppula to present their research at the International Conference on Machine Learning, June 18-21 in Atlanta, and the Robotics: Science and Systems conference, June 24-28 in Berlin, Germany.