"Any TV these days is capable of 3D. There's just no content. So we see that the production of high-quality content is the main thing that should happen," said Wojciech Matusik, an associate professor at Massachusetts Institute of Technology (MIT) in US and one of the system's co-developers.
The system is the result of a collaboration between Qatar Computing Research Institute (QCRI) and MIT's Computer Science and Artificial Intelligence Laboratory.
Today's video games generally store very detailed 3D maps of the virtual environment that the player is navigating.
When the player initiates a move, the game adjusts the map accordingly and generates a 2D projection of the 3D scene that corresponds to a particular viewing angle. The researchers essentially ran this process in reverse.
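The forward step, flattening the 3D map into a 2D frame for a given viewpoint, can be sketched with a simple pinhole-camera projection. This is an illustrative model only; the game's actual renderer is far more elaborate, and the function name is invented for the example:

```python
def project(point, focal_length=1.0):
    """Project a 3D camera-space point (x, y, z) onto the 2D image
    plane using a basic pinhole model (illustrative sketch, not the
    game's real rendering pipeline)."""
    x, y, z = point
    # Points farther away (larger z) land closer to the image centre.
    return (focal_length * x / z, focal_length * y / z)

# The game engine runs this direction: 3D map -> 2D frame.
# The researchers' system effectively inverts it, recovering a
# depth for each pixel of a 2D frame from stored examples.
print(project((1.0, 1.0, 2.0)))  # -> (0.5, 0.5)
print(project((1.0, 1.0, 4.0)))  # -> (0.25, 0.25)
```

Note how a point twice as far away projects half as far from the image centre; recovering that lost depth is exactly what running the process "in reverse" means.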
They set a highly realistic soccer video game to play over and over again, and used the video-game analysis tool PIX to continuously capture screen shots of the action. For each screen shot, they also extracted the corresponding 3D map.
Then they stored each screen shot and the associated 3D map in a database.
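The offline collection step amounts to pairing each captured frame with its depth data. A minimal sketch, in which `capture_frame` and `capture_depth_map` are hypothetical stand-ins for what a capture tool like PIX extracts, not real APIs:

```python
# Hypothetical sketch of the database-building loop: each game tick,
# grab the rendered 2D screenshot and the per-pixel depths derived
# from the 3D map, and store them together for later lookup.

database = []  # list of (screenshot, depth_map) pairs

def record_tick(capture_frame, capture_depth_map):
    frame = capture_frame()      # 2D screenshot (e.g. an RGB array)
    depth = capture_depth_map()  # per-pixel depth from the 3D map
    database.append((frame, depth))

# Simulated captures, for illustration only:
record_tick(lambda: [[0.1, 0.2]], lambda: [[5.0, 7.5]])
print(len(database))  # -> 1
```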
For every frame of 2D video of an actual soccer game, the system looks for the 10 or so screen shots in the database that best correspond to it.
Then it decomposes all those images, looking for the best matches between smaller regions of the video feed and smaller regions of the screen shots.
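The per-region matching step can be sketched as a nearest-neighbour search over small patches: compare a patch of the live video against candidate patches from the stored screen shots and borrow the depth of the closest match. The patch representation and the sum-of-squared-differences metric here are illustrative assumptions, not the published method's exact details:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_depth(patch, candidates):
    """Return the stored depths of the candidate patch most similar
    to `patch`. candidates: list of (patch_pixels, patch_depths)."""
    return min(candidates, key=lambda c: ssd(patch, c[0]))[1]

# A video patch and two candidate patches from database screen shots:
frame_patch = [0.9, 0.1, 0.5]
candidates = [([0.0, 0.0, 0.0], [10.0, 10.0, 10.0]),
              ([0.8, 0.2, 0.5], [2.0, 2.5, 3.0])]
print(best_depth(frame_patch, candidates))  # -> [2.0, 2.5, 3.0]
```

Stitching the borrowed depths of all patches back together yields an approximate depth map for the whole frame, which is what the 3D conversion needs.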
The result is a very convincing 3D effect, with no visual artifacts.
Currently, the researchers said, the system takes about a third of a second to process a frame of video.
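A back-of-the-envelope check puts that figure in context: at roughly a third of a second per frame, the system manages about three frames per second, around a tenth of the ~30 frames per second of broadcast video (both figures approximate):

```python
# Rough throughput estimate from the reported per-frame time.
seconds_per_frame = 1 / 3
throughput = 1 / seconds_per_frame  # frames per second
print(round(throughput))  # -> 3, versus ~30 fps for live broadcast
```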