For most people, using a computer is limited to clicking, typing, searching, and, thanks to Siri and similar software, verbal commands.
"Compare that with how humans interact with each other, face to face - smiling, frowning, pointing, tone of voice all lend richness to communication," researchers said.
The new project, titled "Communication Through Gestures, Expression and Shared Perception," aims to revolutionise everyday interactions between humans and computers.
"Current human-computer interfaces are still severely limited," said Professor Bruce Draper, from Colorado State University (CSU), who is leading the project.
The team has proposed creating a library of what are called Elementary Composable Ideas (ECIs).
Each ECI is like a little packet of information recognisable to a computer: it contains information about a gesture or facial expression, derived from human users, along with a syntactic element that constrains how that information can be read.
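As a rough illustration of that description, an ECI could be modelled as a small record pairing recognition features with the conversational roles it is allowed to fill. This is a minimal sketch under stated assumptions; all class and field names here are hypothetical and not drawn from the project's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ECI:
    """Hypothetical Elementary Composable Idea: a gesture or expression
    packet plus a syntactic constraint on how it may be used."""
    name: str                                 # e.g. "point_at_object" (illustrative)
    modality: str                             # "gesture" or "facial_expression"
    features: dict                            # recogniser features derived from human users
    roles: set = field(default_factory=set)   # conversational roles this ECI may fill

    def can_fill(self, role: str) -> bool:
        """Syntactic check: may this ECI play the given role in a message?"""
        return role in self.roles

# Example: a pointing gesture that may serve as an object reference
point = ECI(
    name="point_at_object",
    modality="gesture",
    features={"hand": "extended", "target": "block_3"},
    roles={"object_reference"},
)

print(point.can_fill("object_reference"))  # pointing can reference an object
print(point.can_fill("negation"))          # but cannot express negation
```

The syntactic constraint is what makes the ideas "composable": the system can combine packets only in roles they are permitted to occupy, much as grammar constrains how words combine in a sentence.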
To achieve this, the researchers have set up a Microsoft Kinect interface. A human subject sits down at a table with blocks, pictures and other stimuli.
"We don't want to say what gestures you should use," Draper said.
"We want people to come in and tell us what gestures are natural. Then, we take those gestures and say, 'OK, if that's a natural gesture, how do we recognise it in real time, and what are its semantics? What roles does it play in the conversation? When do you use it? When do you not use it?'" Draper said.
Their goal: making computers smart enough to reliably recognise non-verbal cues from humans in the most natural, intuitive way possible.
The project, which falls broadly under the basic research arm of the Defense Advanced Research Projects Agency (DARPA), is focused on enabling people to talk to computers through gestures and expressions in addition to words, not in place of them, researchers said.