Business Standard

Gesture recognition: Wave if you think it's the future

Simply move your hands, without touching anything, to change the volume on your TV set or switch channels. Is this the death of the remote control?

Ariha Setalvad Mumbai
Make a motion like you’re swatting away a fly – you just changed channels on your television. Wave your hands about and you just turned the volume up or down. By now, with the immense popularity of devices like the Wii and the Kinect, a lot of us are already familiar with gesture recognition software. So familiar, in fact, that we don’t even think twice about how awesome this technology is.

Once labeled a fad, gesture recognition is here to stay. The Nintendo Wii was one of the first devices to introduce gesture recognition to the public - the combination of the Wii remote and the Wii sensor bar means the game console can detect a person’s movements and map them onto a 3D computer-generated world. Microsoft’s Kinect, not to be outdone, took things one step further. The Kinect’s camera allows the Xbox 360 to recognise and track individual joints of the human body. It also allows users to control the Xbox 360 menu with their hands instead of with a traditional controller.

Pretty cool but, as always, there’s a catch - they’re platform dependent, meaning each works only with its own console and operating system. That’s where Fluid Motion comes in. Developed by brothers Raghav and Abhinav Aggarwal, founders of TruTech, Fluid Motion is gesture recognition software that is completely platform independent.

Winners of the hackathon at TechCrunch Disrupt New York 2012, the brothers are self-taught coders who launched their gesture recognition software only a few weeks ago, on January 21, presenting it to potential clients such as Tata Starbucks, Deloitte, Titan Industries and Vodafone.

“It’s simply an infrared camera that comes with the Fluid Motion software,” says Abhinav. “You plug it in, install it and you’re ready to go. We’ve had zero problems with it so far.”

Once the camera is connected to the device of choice via USB, the software is able to recognise the user’s gestures. The user is not required to wear or hold any device, and Fluid Motion works from up to 15 feet away, making it ideal for large conference rooms.

Fluid Motion will work on any computer, projector or LED screen and boasts seamless gesture recognition with 3D integration. Say you’re in a car showroom - with Fluid Motion’s 3D integration feature, you can interact with the car on a screen, rotating it 360 degrees, opening and closing the doors and even viewing it from the inside. The Fluid Motion team is expected to begin work in conjunction with automobile giant Rolls Royce next month, building an app where consumers will be able to see 3D models of the cars and even customize them according to their needs.

While Fluid Motion is only Windows-compatible so far, a Linux version is coming in the next few days. The Aggarwals and their team are also working around the clock to incorporate other features, such as voice recognition and a 3D model creator with which users will be able to create to-scale models without any additional hardware. So when you’re caught standing in front of your computer waving your hands about like you’re conducting an invisible orchestra, you’re not crazy, you’re working.


Successful adoption of any technology calls for minimal resource deployment and a short, cost-effective time to market. Fluid Motion seems easy enough to use, but only time will tell whether it proves worthy of its Rs 1.5-lakh implementation price tag.

So what’s the future of gesture recognition technology? Well, imagine you’re a neurosurgeon: you’ve started an operation, the patient’s brain is exposed in front of you and you need to see details of the patient’s 3D MRI on a computer. A traditional computer is out of the question since you’re in a sterile field. With gesture recognition, you can just wave your hand in the air to change the image, enlarge an area, or move to the right or left.

As more gesture-recognition technologies come to fruition, other touchless interface ideas are quickly taking their place in the category of "too futuristic to be possible." One of the most eagerly anticipated is headsets that will allow computers to be controlled by thought alone. This is still in the realm of imagination, but with touchless computing progressing so rapidly, the only question is how long it will remain there.

It will be interesting to see the unique challenges of designing for a gesture-controlled interface – consider that one would have to work with a preset vocabulary of defined gestures, much as the programmed buttons on a remote control limit interaction possibilities. For now, the problem with implementing gesture recognition more widely seems to be that computers are limited in how they can process the information: the raw data requires extensive filtering through complicated algorithms to become usable. In other words, if you wanted to use gesture recognition in your day-to-day work, you might want to be prepared for a sneeze to accidentally prompt the system to delete half of your email inbox.
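That filtering step can be as simple as smoothing the raw motion signal and requiring a gesture to persist for several frames before it triggers anything. Here is a minimal, hypothetical sketch in Python – the class name, thresholds and window size are illustrative assumptions, not Fluid Motion’s actual algorithm:

```python
from collections import deque

class GestureFilter:
    """Illustrative gesture de-noiser (hypothetical, not Fluid Motion's code).

    Raw motion data from a camera is jittery, so we average the most
    recent samples and only report a gesture when the smoothed signal
    stays above a threshold for several consecutive frames. A one-frame
    spike - say, a sneeze - never fires; a sustained wave does."""

    def __init__(self, window=5, threshold=0.6, hold_frames=3):
        self.samples = deque(maxlen=window)  # recent motion magnitudes
        self.threshold = threshold           # smoothed level that counts as motion
        self.hold_frames = hold_frames       # consecutive frames required to fire
        self.streak = 0

    def update(self, magnitude):
        """Feed one raw motion sample; return True only for a deliberate gesture."""
        self.samples.append(magnitude)
        smoothed = sum(self.samples) / len(self.samples)
        self.streak = self.streak + 1 if smoothed > self.threshold else 0
        if self.streak >= self.hold_frames:
            self.streak = 0  # reset so each gesture fires once
            return True
        return False
```

Fed a single large spike followed by stillness, the filter stays quiet; fed steady above-threshold motion for `hold_frames` frames, it reports a gesture.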



First Published: Mar 06 2013 | 1:56 PM IST
