Researchers led by an Indian-origin scientist from the Georgia Institute of Technology have discovered how humans can categorise data using less than one percent of the original information.
They validated an algorithm to explain human learning -- a method that also can be used for machine learning, data analysis and computer vision.
"How do we make sense of so much data around us, of so many different types, so quickly and robustly?" said Santosh Vempala, distinguished professor of computer science.
"At a fundamental level, how do humans begin to do that? It's a computational problem," he asked.
Vempala and colleagues presented test subjects with original, abstract images and then asked whether they could correctly identify that same image when randomly shown just a small portion of it.
"We hypothesised that random projection could be one way humans learn," said Rosa Arriaga, senior research scientist and developmental psychologist.
"The prediction was right. Just 0.15 percent of the total data is enough for humans," she added.
Next, the researchers tested a computational algorithm that allowed machines to complete the same tests.
Machines performed as well as humans, which provides a new understanding of how humans learn.
"We found evidence that, in fact, the human and the machine's neural network behave very similarly," Arriaga noted.
It is believed to be the first study of "random projection," the core component of the researchers' theory, with human subjects.
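For readers unfamiliar with the technique, random projection compresses high-dimensional data by multiplying it with a random matrix, approximately preserving the distances between data points. The sketch below is purely illustrative and is not the researchers' code; the dimensions, seed, and variable names are assumptions chosen to echo the 0.15 percent figure quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two high-dimensional "images", flattened to 10,000-pixel vectors
x = rng.normal(size=10_000)
y = rng.normal(size=10_000)

# Random projection down to k = 15 dimensions (0.15% of 10,000);
# scaling by 1/sqrt(k) keeps expected distances comparable
k = 15
R = rng.normal(size=(k, x.size)) / np.sqrt(k)

x_small = R @ x
y_small = R @ y

# Despite discarding 99.85% of the coordinates, the distance between
# the projected points roughly tracks the original distance
orig = np.linalg.norm(x - y)
proj = np.linalg.norm(x_small - y_small)
print(f"distance ratio after projection: {proj / orig:.2f}")
```

With so few retained dimensions the ratio fluctuates from run to run, but it stays near 1.0 far more often than chance would allow, which is what makes such aggressive compression usable for categorisation.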
"We were surprised by how close the performance was between extremely simple neural networks and humans," Vempala said.
"This fascinating paper introduces a localised random projection that compresses images while still making it possible for humans and machines to distinguish broad categories," explained Sanjoy Dasgupta, professor of computer science and engineering at the University of California-San Diego.
The results were published in the journal Neural Computation (MIT Press).