Computational neuroscientists have been investigating neural networks trained with little or no human-labeled data, an approach known as self-supervised learning. It has proven extremely effective at modeling human language and, more recently, image recognition. Some neuroscientists argue that these artificial neural networks appear to reveal some of the actual methods our brains use to learn.

Robot Artificial Intelligence Woman (Photo: Gerd Altmann/Pixabay)

Data Labeling in Artificial Intelligence

Data labeling has been a standard practice in artificial intelligence for a decade: it is used to train an artificial neural network to correctly distinguish different images of the same type. For example, one image of a cat might be labeled a tiger cat and another a tabby cat so the network learns to tell the two apart. This approach is called supervised training, and it is not how humans and animals learn, because they don't rely on labeled data sets.
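To make the contrast with self-supervised learning concrete, here is a minimal sketch of supervised training in PyTorch. The random tensors, tiny model, and label scheme are illustrative assumptions standing in for a real labeled image dataset, not any system described in the research.

```python
# Minimal sketch of supervised training with human-provided labels.
import torch
import torch.nn as nn

# Fake batch: 8 RGB images (64x64) with integer labels, e.g. 0 = "tabby cat",
# 1 = "tiger cat". In real supervised learning these labels come from humans.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

# A deliberately tiny classifier; real systems use deep networks such as ResNet.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),  # one output per label
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step: the human-provided labels supply the error signal.
optimizer.zero_grad()
logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```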

According to Quanta Magazine, supervised learning may be limited in its ability to reveal information about biological brains. Humans and animals don't rely on labeled data sets. Instead, they explore the environment on their own for the most part, and as a result, they gain a rich and robust understanding of the world.

In fact, as the field matured, researchers recognized the limitations of supervised training. For instance, computer scientist Leon Gatys and his colleagues, then at the University of Tübingen in Germany, took a picture of a Ford Model T and overlaid a leopard-skin pattern on it, creating an odd but instantly recognizable image. A leading artificial neural network correctly identified the original image as a Model T but classified the modified image as a leopard. The network had fixated on texture and had no real notion of what a car looked like, or a leopard, for that matter.
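For readers curious what such a check looks like in practice, the hedged sketch below classifies an ordinary photo and a texture-altered version of it with an off-the-shelf pretrained network. The file names are placeholders, and this is not the exact setup Gatys and colleagues used.

```python
# Hypothetical texture-bias check: classify a photo and a texture-altered copy
# with a pretrained ImageNet classifier and compare the predicted labels.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

for path in ["model_t.jpg", "model_t_leopard_texture.jpg"]:  # placeholder files
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)
    top = probs.argmax(dim=1).item()
    print(path, "->", weights.meta["categories"][top])
```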

Some neuroscientists see echoes of how we learn in systems like this. "There's no doubt that 90% of what the brain does is self-supervised learning," Blake Richards, a computational neuroscientist at McGill University and the Quebec Artificial Intelligence Institute Mila, said.

ALSO READ: AI (Artificial Intelligence) Bot GPT-3 Finished a 500-Word Academic Thesis

Artificial Intelligence Similarity to Human Brain Research Findings

Richards and his colleagues developed a self-supervised model that hints at an answer. They trained an AI that combined two different neural networks: a ResNet architecture designed for image processing, and a recurrent network that keeps track of a sequence of prior inputs to predict the next expected input. The study was posted on bioRxiv.
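The paper's exact architecture isn't spelled out in this article, but a rough PyTorch sketch of the general idea, a ResNet that encodes each frame and a recurrent network that reads the resulting sequence of latents, might look like the following. The ResNet-18 backbone, GRU cell, and dimensions are assumptions, not details from the study.

```python
# Rough sketch: per-frame ResNet encoder feeding a recurrent predictor.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PredictiveVideoModel(nn.Module):
    def __init__(self, latent_dim=512, hidden_dim=512):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d latent, drop the classifier head
        self.encoder = backbone              # per-frame image encoder
        self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.predictor = nn.Linear(hidden_dim, latent_dim)  # predicts the next frame's latent

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        latents = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        rnn_out, _ = self.rnn(latents)
        predicted_next = self.predictor(rnn_out[:, -1])  # prediction for frame t+1
        return latents, predicted_next
```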

To train the combined AI, the team began with a sequence of ten frames from a video and let the ResNet process each frame individually. The recurrent network then predicted the latent representation of the 11th frame, rather than merely reproducing the first ten frames. When the self-supervised learning algorithm compared the prediction to the actual value, it instructed the neural networks to adjust their weights to improve the prediction.
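Continuing the sketch above, a single self-supervised training step under that description could look roughly like this. The mean-squared-error loss and the Adam optimizer are assumptions; the article does not name the team's exact objective.

```python
# One hypothetical training step: predict the 11th frame's latent from frames 1-10.
import torch
import torch.nn.functional as F

model = PredictiveVideoModel()               # class from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

clip = torch.randn(4, 11, 3, 64, 64)         # placeholder batch of 11-frame video clips
context, target_frame = clip[:, :10], clip[:, 10]

_, predicted_latent = model(context)                  # prediction made from frames 1-10
with torch.no_grad():
    actual_latent = model.encoder(target_frame)       # latent of the real 11th frame

loss = F.mse_loss(predicted_latent, actual_latent)    # how far off was the prediction?
optimizer.zero_grad()
loss.backward()                                       # adjust weights to improve future predictions
optimizer.step()
```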

Richards' team discovered that an AI trained with a single ResNet pathway was good at object recognition but not at classifying movement. When they divided the single ResNet into two pathways without changing the total number of neurons, the AI developed representations for objects in one pathway and for movement in the other, enabling downstream categorization of these properties, much as our brains do.
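How the split was implemented is not described here, but one way to picture it is two narrower encoders whose combined size matches the original single pathway, as in this illustrative sketch; the layer sizes and names are assumptions.

```python
# Sketch of the two-pathway idea: two narrower encoders in place of one wide one.
import torch
import torch.nn as nn

class TwoPathwayEncoder(nn.Module):
    def __init__(self, latent_dim=256):          # 2 x 256 = 512, matching a single wide pathway
        super().__init__()
        def pathway():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, latent_dim),
            )
        self.pathway_a = pathway()   # in the study, one pathway came to represent objects
        self.pathway_b = pathway()   # and the other came to represent movement

    def forward(self, frame):
        # Each pathway produces its own latent vector for the same frame.
        return self.pathway_a(frame), self.pathway_b(frame)
```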

To put the AI to the test, the researchers showed it a series of videos that had previously been shown to mice at the Allen Institute for Brain Science. These animals, like primates, have brain regions specialized for static images and for movement. The Allen Institute researchers had recorded the neural activity in the mouse visual cortex as the animals watched the videos.

Richards' team discovered similarities in how the AI and living brains reacted to the videos. During training, one of the artificial neural network's pathways resembled the ventral, object-detecting regions of the mouse's brain, while the other resembled the movement-focused dorsal regions.
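The article does not say exactly how the model's activity and the recorded brain activity were compared. One common approach for this kind of question is representational similarity analysis, sketched below with placeholder arrays standing in for real model activations and neural recordings.

```python
# Hedged sketch of representational similarity analysis (RSA), a standard way to
# compare a model's representations with recorded neural activity. Whether this is
# the method Richards' team used is an assumption; the data below are placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_stimuli = 50
model_activations = np.random.randn(n_stimuli, 512)    # one row per video snippet
neural_responses = np.random.randn(n_stimuli, 200)     # recorded mouse responses per snippet

# Representational dissimilarity matrices (condensed form): how differently each
# pair of stimuli is represented, in the model and in the brain.
model_rdm = pdist(model_activations, metric="correlation")
brain_rdm = pdist(neural_responses, metric="correlation")

# A high rank correlation between the two RDMs indicates similar representations.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain representational similarity: rho = {rho:.2f}")
```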

According to Richards, the findings indicate that our visual system has two specialized pathways because they help predict the visual future; a single pathway is insufficient.


RELATED ARTICLE: China Uses Artificial Intelligence (AI) to Run Courts, Supreme Justices; Cutting Judges' Typical Workload By More Than a Third and Saving Billion Work Hours

Check out more news and information on Technology in Science Times.