What makes computers and the human brain different when it comes to recognising images? The human brain relies on an "atomic" unit of recognition - a minimum amount of information an image must contain for recognition to occur - researchers report.
Scientists from the Weizmann Institute of Science in Israel and the Massachusetts Institute of Technology (MIT) suggest that there is something elemental in our brains that is tuned to work with a minimal amount of information.
That elemental quantity may be crucial to our recognition abilities, and incorporating it into current models could prove valuable for further research into the workings of the human brain and for the development of new computer and robotic vision systems.
To understand this, Professor Shimon Ullman and Dr Daniel Harari, together with Liav Assif and Ethan Fetaya, enlisted thousands of participants through Amazon's "Mechanical Turk" crowdsourcing platform and had them identify a series of images.
When the scientists compared the scores of the human subjects with those of computer vision models given the same task, they found that humans were far better at identifying partial or low-resolution images.
Almost all the human participants succeeded in identifying the objects in the various images up to a fairly high loss of detail - after which nearly everyone stumbled at exactly the same point.
"If an already minimal image loses just a minute amount of detail, everybody suddenly loses the ability to identify the object," Ullman noted.
"That hints that no matter what our life experience or training, object recognition is hardwired and works the same in all of us," he added in a paper published in the journal Proceedings of the National Academy of Sciences (PNAS),
The researchers suggest that the differences between computer and human capabilities lie in the fact that computer algorithms adopt a "bottom-up" approach that moves from simple features to complex ones.
Human brains, on the other hand, work in "bottom-up" and "top-down" modes simultaneously, by comparing the elements in an image to a sort of model stored in their memory banks, the authors noted.
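To make that distinction concrete, the sketch below is a minimal illustration (not the authors' model) of the two strategies in Python: a bottom-up pass that builds an edge-feature map from raw pixels alone, and a top-down pass that scores the whole image against a template held in "memory". The template, the noisy view, and all numbers are hypothetical.

```python
# Conceptual sketch of bottom-up vs top-down recognition (hypothetical example).
import numpy as np

def bottom_up_features(image: np.ndarray) -> np.ndarray:
    """Bottom-up: build simple local features (edge energy) from raw pixels."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)  # gradient magnitude map: simple features first

def top_down_match(image: np.ndarray, template: np.ndarray) -> float:
    """Top-down: compare the whole image against a stored object model
    using normalised correlation (1.0 = perfect match with the template)."""
    a = (image - image.mean()) / (image.std() + 1e-8)
    b = (template - template.mean()) / (template.std() + 1e-8)
    return float((a * b).mean())

rng = np.random.default_rng(0)
template = rng.random((16, 16))                      # stands in for a remembered object
noisy_view = template + 0.1 * rng.standard_normal((16, 16))

features = bottom_up_features(noisy_view)            # local evidence only
similarity = top_down_match(noisy_view, template)    # global comparison to memory
print(f"mean edge energy: {features.mean():.3f}, template match: {similarity:.3f}")
```

In the bottom-up pass, only local pixel evidence is available, so degrading the image erodes the feature map gradually; the top-down comparison depends on the stored model - the kind of internal reference the researchers argue current computer algorithms lack.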