The human brain can categorize data using 1 percent or even less of the original information. Building on this discovery, a team from Georgia Tech set out to develop a mathematical definition of the process.

"We hypothesized that random projection could be one way humans learn," Rosa Arriaga, one of the team members, said. "The short story is; the prediction was right. Just 0.15 percent of the total data is enough for humans."

The researchers already have a simple algorithm that explains how humans learn. The same method is also used in data analysis, computer vision and, not surprisingly, machine learning.

"How do we make sense of so much data around us, of so many different types, so quickly and robustly?" said Santosh Vempala, Distinguished Professor of Computer Science at the Georgia Institute of Technology and one of four researchers on the project.

Test subjects were asked to look at original, abstract images and were then challenged to identify the same images from small portions of them. The researchers built their algorithm on random projection, a technique that compresses information by sacrificing some accuracy for speed.
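As a rough illustration (not the researchers' actual code), random projection can be sketched with a random Gaussian matrix that maps high-dimensional vectors into a far smaller space while roughly preserving the distances between them; the dimensions and data below are made up for the example.

import numpy as np

# Illustrative sketch of random projection, not the study's implementation.
# Project 10,000-dimensional vectors (e.g. flattened images) down to
# 15 dimensions, about 0.15 percent of the original size.
rng = np.random.default_rng(0)

original_dim = 10_000
reduced_dim = 15

# Each row is one high-dimensional sample.
samples = rng.standard_normal((100, original_dim))

# Random projection matrix; the 1/sqrt(k) scaling keeps distances comparable.
projection = rng.standard_normal((original_dim, reduced_dim)) / np.sqrt(reduced_dim)

compressed = samples @ projection  # shape: (100, 15)

# Pairwise distances are roughly preserved on average, which is why
# categorization can still work on the heavily compressed data.
ratios = [
    np.linalg.norm(compressed[i] - compressed[i + 1]) /
    np.linalg.norm(samples[i] - samples[i + 1])
    for i in range(len(samples) - 1)
]
print(f"mean distance ratio after projection: {np.mean(ratios):.2f}")

The distance-preservation property that makes this kind of compression useful is the subject of the Johnson-Lindenstrauss lemma, which underpins random projection in general.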

"We were surprised by how close the performance was between extremely simple neural networks and humans," added one of the researchers, Santosh Vempala.

The results of the study offer only a plausible explanation of the brain's process; it has not been proven that the brain actually uses random projection to process information. Still, the algorithmic theory has been cited many times over and, as mentioned, is used in industry for machine handling of large, diverse data sets.