Business Standard

Now, a computer that can read your body language


Press Trust of India | Washington
Scientists have developed a computer that understands the body movements of multiple people from a video in real time, including the pose of each individual's fingers.

This ability to recognise poses will make it possible for people to interact with computers in new and more natural ways, such as communicating with computers simply by pointing at things, researchers said.

Researchers at Carnegie Mellon University in the US developed the new method using the Panoptic Studio, a two-storey dome embedded with 500 video cameras.

The insights gained from those experiments now make it possible to detect the pose of a group of people using a single camera and a laptop computer, researchers said.

Yaser Sheikh, associate professor at Carnegie Mellon University, said these methods for tracking two-dimensional (2D) human form and motion open up new ways for people and machines to interact with each other, and for people to use machines to better understand the world around them.

Detecting the nuances of nonverbal communication between individuals will allow robots to serve in social spaces, perceiving what people around them are doing, what moods they are in and whether they can be interrupted.

A self-driving car, for instance, could get an early warning that a pedestrian is about to step into the street by monitoring body language, researchers said.

Enabling machines to understand human behaviour could also open up new approaches to behavioural diagnosis and rehabilitation for conditions such as autism, dyslexia and depression, they said.

"We communicate almost as much with the movement of our bodies as we do with our voice. But computers are more or less blind to it," Sheikh said.

In sports analytics, real-time pose detection will make it possible for computers not only to track the position of each player on the field of play, as is now the case, but also to know what players are doing with their arms, legs and heads at each point in time.

The methods can be used for live events or applied to existing videos.

To encourage more research and applications, the scientists have released their computer code for both multiperson and hand-pose estimation.
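
The release is the code base that became publicly known as OpenPose. As a hedged illustration of working with it, assuming detections exported with the demo's --write_json option, the per-person keypoints for a frame could be read as below; the file name is hypothetical and the JSON key names follow recent releases of the code, having varied between versions.

    # Sketch: reading the per-frame keypoint JSON written by the released
    # demo's --write_json option. The key names follow recent releases of
    # the code and are an assumption; they have varied between versions.
    import json

    with open("frame_000000_keypoints.json") as f:  # hypothetical file name
        frame = json.load(f)

    for person in frame["people"]:
        # Keypoints come as a flat list of (x, y, confidence) triples.
        kp = person["pose_keypoints_2d"]
        triples = [(kp[i], kp[i + 1], kp[i + 2]) for i in range(0, len(kp), 3)]
        solid = [(x, y) for x, y, c in triples if c > 0.5]
        print(f"person detected with {len(solid)} confident body keypoints")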

Tracking multiple people in real time, particularly in social situations where they may be in contact with each other, presents a number of challenges.

Simply applying programmes that track the pose of a single individual to each person in a group does not work well, particularly when the group gets large.

Sheikh and his colleagues took a bottom-up approach, which first localises all the body parts in a scene - arms, legs, faces, etc - and then associates those parts with particular individuals.
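
To make the bottom-up idea concrete, here is a minimal sketch in Python. It is not the researchers' code: their system scores candidate limb connections with learned "part affinity fields", whereas this toy version substitutes a simple inverse-distance score, greedy matching and made-up detections.

    # Conceptual sketch of bottom-up grouping, not the researchers' code:
    # their system scores limb candidates with learned part affinity
    # fields; this toy version uses an inverse-distance score instead.
    import math

    # Hypothetical detections: (x, y) positions for two part types,
    # found across the whole image without knowing who they belong to.
    shoulders = [(100, 80), (300, 85)]
    elbows = [(120, 150), (310, 160), (500, 90)]

    def affinity(a, b):
        """Stand-in pairwise score: nearer part pairs score higher."""
        return 1.0 / (1.0 + math.dist(a, b))

    # Score every shoulder-elbow pair, then greedily keep the best
    # non-conflicting matches; each part joins at most one person.
    pairs = sorted(
        ((affinity(s, e), i, j)
         for i, s in enumerate(shoulders)
         for j, e in enumerate(elbows)),
        reverse=True,
    )

    used_s, used_e, people = set(), set(), []
    for score, i, j in pairs:
        if i in used_s or j in used_e:
            continue  # part already assigned to another person
        used_s.add(i)
        used_e.add(j)
        people.append({"shoulder": shoulders[i], "elbow": elbows[j]})

    print(people)  # each dict is one partially assembled person

Because the parts are detected once for the whole image and only the association step grows with the number of candidate pairings, the cost rises slowly as more people enter the frame, which is why the bottom-up route copes with large groups.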

First Published: Jul 07 2017 | 5:07 PM IST