The Nvidia H100 ranks first in all neural network categories in the MLPerf Benchmark. It outperforms both its predecessor, the A100, and its competitors.


What is MLPerf Benchmark?

MLPerf is a consortium of AI leaders that aims to create fair and useful benchmarks. It streamlines industry selection processes by providing unbiased evaluations of inference performance for hardware, software, and services.

It is the first attempt to compare the capabilities of computers in training and inferencing neural networks. Image classification (ResNet-50 v1.5), speech recognition (RNN-T), 3D medical imaging (3D U-Net), natural-language processing (BERT Large), object detection (RetinaNet), and recommendation (DLRM) are among the neural networks tested. These networks had already been trained on a set of standard data and had to make predictions based on data they had never seen before.

The machines taking part in the tests are assessed in server mode and offline mode. According to Tom's Hardware, vendors may submit results under two conditions: the closed category and the open category. Submissions in the closed category must use neural networks that are mathematically equivalent to the reference models, while submissions in the open category may modify the networks to optimize them for their hardware.
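The difference between the two test modes can be sketched in a few lines. This is a simplified illustration of the idea, not MLPerf's actual LoadGen harness, and the latencies and latency bound below are hypothetical placeholders: in the offline scenario all samples are available up front and the metric is raw throughput, while in the server scenario queries arrive individually and only results that meet a latency bound count.

```python
import random

def run_inference(batch):
    """Stand-in for a real accelerator call; returns per-sample
    latencies in milliseconds (hypothetical numbers for illustration)."""
    return [random.uniform(1.0, 2.0) for _ in batch]

def offline_throughput(samples, batch_size=64):
    """Offline scenario: everything is queued at once, so the score is
    total samples divided by total wall time (samples per second)."""
    total_ms = 0.0
    for i in range(0, len(samples), batch_size):
        batch = samples[i:i + batch_size]
        # A batch finishes when its slowest sample does.
        total_ms += max(run_inference(batch))
    return len(samples) / (total_ms / 1000.0)

def server_qps(samples, latency_bound_ms=15.0):
    """Server scenario: queries arrive one at a time, and a result only
    counts if it meets the latency bound; the score is valid queries/sec."""
    completed, total_ms = 0, 0.0
    for sample in samples:
        latency = run_inference([sample])[0]
        total_ms += latency
        if latency <= latency_bound_ms:
            completed += 1
    return completed / (total_ms / 1000.0)

samples = list(range(1000))
print(f"offline: {offline_throughput(samples):,.0f} samples/s")
print(f"server:  {server_qps(samples):,.0f} queries/s")
```

Because the offline mode can batch aggressively while the server mode pays full latency per query, the same hardware typically posts a much higher offline number, which is why MLPerf reports the two scenarios separately.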

NVIDIA H100 Tensor Core MLPerf Benchmark

NVIDIA H100 Tensor Core GPUs set world records in all workloads for inference. According to NVIDIA's website, the H100 GPU outperforms the A100 by up to 4.5 times. According to the results, it is the best option for users who want the best performance on advanced AI models.

The H100, also known as Hopper, has raised the bar in per-accelerator performance and supports all six neural networks. It leads in both throughput and speed in the separate server and offline scenarios.

Thanks to its Transformer Engine, Hopper excelled on the popular BERT model for natural language processing, one of the largest and most performance-demanding MLPerf AI models.

The H100 GPUs will be available later this year, and they will also participate in future MLPerf training rounds.

NVIDIA H100 Predecessor

The NVIDIA A100 GPU is the H100's predecessor and is available today from major cloud service providers and systems manufacturers. Although the H100 outperformed it, the A100 continued to show overall leadership in mainstream AI inference performance in the latest tests.

A100 GPUs outperformed all other submissions across the data center and edge computing categories and scenarios. In June, they also achieved overall leadership in the MLPerf training benchmarks, demonstrating their capabilities across the full AI workflow. A100 GPUs have improved their performance by 6x since their MLPerf debut in July 2020.



NVIDIA H100 Competitors

The BR104 from Biren Technology shows much promise in image classification (ResNet-50) and natural language processing (BERT-Large) workloads. If the BR100 is twice as fast as the BR104, it will outperform Nvidia's H100 in image classification workloads when measured per-accelerator.
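Per-accelerator comparisons like the one above normalize a system's total throughput by its chip count, so submissions with different numbers of accelerators can be ranked fairly. A minimal sketch of that normalization, using entirely hypothetical system names and scores rather than actual MLPerf submissions:

```python
# Hypothetical MLPerf-style submissions: total system throughput
# (samples/s) and the number of accelerators in the system.
submissions = {
    "8x GPU system": (320_000, 8),
    "4x NPU system": (200_000, 4),
}

# Normalize to per-accelerator throughput for an apples-to-apples ranking.
per_accel = {
    name: total / count for name, (total, count) in submissions.items()
}

for name, score in sorted(per_accel.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:,.0f} samples/s per accelerator")
```

In this made-up example the 4-chip system wins per accelerator (50,000 vs. 40,000 samples/s) despite the lower system total, which is the kind of reversal the per-accelerator view is meant to expose.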

Sapeon's X220-Enterprise and Qualcomm's Cloud AI 100, on the other hand, cannot compete with Nvidia's A100, which was released about two years ago. Intel's 4th Generation Xeon Scalable Sapphire Rapids processor can run AI/ML workloads, but it does not appear that the code has been optimized sufficiently for this CPU, resulting in poor results.


Check out more news and information on Technology in Science Times.