An international team of researchers has developed a chip that can run multiple AI applications and perform calculations directly in memory. The newly designed chip uses a fraction of the energy consumed by current general-purpose AI computing platforms. The research team, which includes bioengineers from the University of California, San Diego, discusses its findings in Nature.

Compute-in-Memory NeuRRAM Neuromorphic Chip Features 

With the NeuRRAM neuromorphic chip, artificial intelligence (AI) is one step closer to running on a wide variety of edge devices untethered from the cloud, allowing them to carry out complex cognitive tasks anywhere and at any time without a network connection to a centralized server.

Smart watches, VR headsets, smart earbuds, smart sensors in factories, and rovers for space exploration are just a few of the potential applications.

Compared to the most advanced compute-in-memory chip, the NeuRRAM device uses half as much energy while producing results just as accurate as those of traditional digital processors. It belongs to a novel class of hybrid circuits that execute calculations in memory.

Compute-in-memory (CIM) is a computing paradigm that addresses the memory-wall issue in the design of deep learning hardware accelerators, according to IEEE Circuits and Systems.

Conventional AI platforms are far bulkier and more complex, and they are typically constrained to large data servers running in the cloud.

In contrast, the NeuRRAM device is highly versatile, supporting a wide range of neural network models and architectures. As a result, the chip can be applied to many different tasks, such as voice recognition, image recognition, and image reconstruction.

Issues With Current AI Computing Devices

At the moment, AI computing is both power-hungry and expensive. Most edge-device AI applications involve sending data to the cloud, where the AI processes and analyzes it; the results are then transferred back to the device. That is because most edge devices are battery-powered, which limits how much power can be devoted to computation.

By lowering the power consumption required for AI inference at the edge, the NeuRRAM chip could lead to more robust, smarter, and more accessible edge devices, as well as smarter manufacturing. Keeping data on the device rather than transferring it to the cloud, which raises security concerns, could also enhance data privacy.

One significant bottleneck on AI chips is data transfer from memory to computing units.

"It's the equivalent of doing an eight-hour commute for a two-hour work day," Wan said.

Scientists Solve Data Transfer Issue 

To address this data transfer issue, the researchers employed resistive random-access memory (RRAM), a type of non-volatile memory that allows processing to take place directly within memory rather than in separate computing units.
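
To make the idea concrete, here is a minimal conceptual sketch in Python of how an RRAM crossbar can multiply a weight matrix by an input vector in place. This is an illustration of the general compute-in-memory principle under simplified assumptions, not the NeuRRAM design itself, and all values are made up.

```python
import numpy as np

# Each RRAM cell stores a weight as a conductance G. Applying input
# voltages V to the rows makes each column sum currents I = G.T @ V
# (Ohm's and Kirchhoff's laws), so the multiply-accumulate happens
# where the weights live, with no weight traffic between memory and
# a separate compute unit.

rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 3))  # cell conductances (stored weights)
V = rng.uniform(0.0, 0.2, size=4)       # input voltages (activations)

I = G.T @ V  # column currents: one analog MAC result per column

print("column currents (MAC results):", I)
```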

CPU Chip (Photo: Bruno/Germany/Pixabay)

ALSO READ: MIT Engineers Design a Minidrone Computer Chip That Is Low Power and Uses Less Processing Power but Still More Efficient


Wan's advisor at Stanford, Philip Wong, helped develop RRAM and other cutting-edge memory technologies that are now employed as synapse arrays for neuromorphic computing, and Wong's lab was a major contributor to this work.

Although using RRAM devices for computation is not new in itself, it has typically entailed a loss of precision and limited flexibility in the chip's architecture.

NeuRRAM Chip Performance

Researchers used a metric called the energy-delay product (EDP) to gauge the chip's energy efficiency. According to Science Direct, EDP is calculated as the product of the energy used to perform a task and the delay (execution time) of that task. Because lower values favor designs that are both energy-efficient and fast, the metric is used when the goal is applications with low energy use and quick runtimes.
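
As a rough illustration, the short Python sketch below shows how an EDP comparison works; the energy and delay figures are hypothetical placeholders, not measurements from the paper.

```python
# Minimal sketch of an energy-delay product (EDP) comparison.
# All numbers are illustrative placeholders.

def energy_delay_product(energy_joules: float, delay_seconds: float) -> float:
    """EDP = energy used by a task x time taken to run it (lower is better)."""
    return energy_joules * delay_seconds

# Hypothetical figures for one inference task on two chips.
baseline_edp = energy_delay_product(energy_joules=2.0e-3, delay_seconds=5.0e-3)
in_memory_edp = energy_delay_product(energy_joules=1.0e-3, delay_seconds=4.0e-3)

print(f"Baseline EDP:   {baseline_edp:.2e} J*s")
print(f"In-memory EDP:  {in_memory_edp:.2e} J*s")
print(f"Improvement:    {baseline_edp / in_memory_edp:.1f}x lower")
```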

By this standard, the NeuRRAM chip outperforms state-of-the-art chips, with an EDP 1.6 to 2.3 times lower (lower is preferable) and a computational density 7 to 13 times higher.

A variety of workloads were run on the chip. It was 99% accurate at recognizing handwritten digits, 85.7% accurate at classifying images, and 84.7% accurate at recognizing Google speech commands. The chip also reduced image reconstruction error by 70% on an image recovery task. These results are on par with those of existing digital processors operating at the same bit precision, while using significantly less energy.

 

RELATED ARTICLE: Flexible Computer Processor for IoT Prints Circuits onto Paper, Cardboard, Cloth

Check out more news and information on Technology in Science Times.