Los Alamos National Laboratory researchers have devised a novel method for comparing neural networks in order to better understand their behavior. The new method may aid researchers in understanding the underlying mathematics of AI. The research also examines how a particular kind of training improves networks' robustness.

Artificial Neural Network with Chip
(Photo: mikemacmarketing, original posted on Flickr; Liam Huang, cropped and reposted on Flickr; via Wikimedia Commons)

Understanding Neural Network Behavior Through Research

According to Haydn Jones, a researcher in Los Alamos' Advanced Research in Cyber Systems group, the artificial intelligence research community does not always understand what neural networks are doing. Neural networks produce good results, he says, but researchers often have no idea how or why they work.

The new method aims to peer inside the black box of artificial intelligence and help researchers understand how neural networks make sense of the datasets they are trained on.

The study, "If You've Trained One, You've Trained Them All: Inter-Architecture Similarity Increases With Robustness," can be found at Open Review. The study is important not only for studying network similarity but also for characterizing the behavior of robust neural networks.

What are Neural Networks?

According to IBM, neural networks are a subset of machine learning and lie at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way biological neurons signal to one another.

According to Science Direct, a neural network comprises many simple processing elements known as neurons. Each neuron is linked to at least one other neuron, and possibly to the input nodes. Neural networks offer a straightforward computing paradigm for performing complex recognition tasks in real time.
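To make this structure concrete, here is a minimal sketch of such a network in PyTorch. The framework choice, layer sizes, and activation function are illustrative assumptions, not details from the study:

```python
import torch
import torch.nn as nn

# A minimal feed-forward network: the input nodes feed a layer of
# hidden neurons, and each hidden neuron feeds every output neuron.
class TinyNet(nn.Module):
    def __init__(self, n_inputs=4, n_hidden=8, n_outputs=2):
        super().__init__()
        self.hidden = nn.Linear(n_inputs, n_hidden)   # input nodes -> hidden neurons
        self.output = nn.Linear(n_hidden, n_outputs)  # hidden neurons -> outputs

    def forward(self, x):
        x = torch.relu(self.hidden(x))  # each neuron is a simple processing element
        return self.output(x)

net = TinyNet()
print(net(torch.randn(1, 4)))  # one forward pass on a random input
```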

Neural networks are high-performing, but they are also known for being fragile. In an ideal world, a self-driving car powered by a neural network would detect signs with ease. But when a sign has a sticker on it, the network may misidentify it, and the car might not come to a complete stop.
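This kind of fragility is often demonstrated with adversarial perturbations, such as the fast gradient sign method (FGSM). The sketch below is a hypothetical illustration using a stand-in classifier, not the traffic-sign scenario itself:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A stand-in classifier; in practice this would be something like a sign detector.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

def fgsm_attack(x, label, epsilon=0.25):
    """Nudge the input in the direction that most increases the model's loss."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), label).backward()
    return (x + epsilon * x.grad.sign()).detach()

x, label = torch.randn(1, 4), torch.tensor([0])
x_adv = fgsm_attack(x, label)
# A tiny, targeted change to the input can flip the predicted class.
print(model(x).argmax().item(), model(x_adv).argmax().item())
```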

ALSO READ: Nanomagnets Developed to Construct New Neural Network Model for AI-Based Simulations

How to Improve Neural Networks

One possible way to improve neural networks is to make them more robust. One method is to attack networks while they are being trained: the researchers purposefully introduce perturbations into the training data and train the AI to ignore them. This approach, known as "adversarial training," makes the networks harder to fool.
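A minimal sketch of one adversarial training step is shown below. It pairs an FGSM-style attack with an ordinary gradient update; the model, optimizer, and epsilon are illustrative assumptions, not the study's exact setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def adversarial_train_step(x, y, epsilon=0.1):
    # 1. Attack: perturb the batch in the direction that increases the loss.
    x_adv = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # 2. Defend: train on the perturbed batch so the model learns to ignore it.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

x, y = torch.randn(32, 4), torch.randint(0, 2, (32,))
print(adversarial_train_step(x, y))
```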

As the magnitude of the attack increases, the training causes neural networks in the computer vision domain to converge to similar data representations, regardless of their architecture. Jones stated that the team discovered that when neural networks are trained to be robust against adversarial attacks, they begin to do the same things.
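The "similar data representations" being compared can be quantified with a representation-similarity measure. Linear centered kernel alignment (CKA) is one standard choice; it is used below as an illustrative assumption, not necessarily the paper's exact metric:

```python
import torch

def linear_cka(feats_a, feats_b):
    """Linear CKA between two (n_samples, n_features) activation matrices."""
    a = feats_a - feats_a.mean(dim=0)  # center each feature
    b = feats_b - feats_b.mean(dim=0)
    # Values near 1.0 mean the two networks represent the same inputs similarly.
    cross = torch.norm(b.T @ a) ** 2
    return (cross / (torch.norm(a.T @ a) * torch.norm(b.T @ b))).item()

# Hidden-layer activations of two different architectures on the same 100 inputs.
feats_net_a = torch.randn(100, 64)
feats_net_b = torch.randn(100, 32)
print(linear_cka(feats_net_a, feats_net_b))
```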

A lot of work has been done in industry and academia to search for the best neural network architecture. However, the Los Alamos team's findings indicate that adversarial training substantially narrows this search space: because it causes diverse architectures to converge to similar solutions, the AI research community may not need to spend as much time exploring new architectures.

Jones added that discovering similarities between robust neural networks makes it easier to understand how robust AI might work. He believes the work may even uncover clues about perception in humans and animals.


RELATED ARTICLE: Neural Network Solves Complex Calculus System; Rational Approach of Deep Learning Connects Machine and Human Language

Check out more news and information on Artificial Intelligence in Science Times.