Machine learning is advancing at an unprecedented pace. The results of MLPerf Training v3.1, the latest round of the MLPerf Training and HPC benchmarks, show a 49x performance gain in just five years. As a member of MLCommons, QCT contributed to this progress with two submissions in the closed division. QCT's submissions covered Image Classification, Image Segmentation, Object Detection, Natural Language Processing, Speech Recognition, and Recommendation tasks, all of which met the prescribed quality targets (see Fig. 1) on its QuantaGrid D54U-3U and QuantaGrid D74H-7U systems.
| Benchmark | Dataset | Quality Target | Reference Implementation Model |
|---|---|---|---|
| Image classification | ImageNet | 75.90% classification accuracy | ResNet-50 v1.5 |
| Image segmentation (medical) | KiTS19 | 0.908 Mean DICE score | 3D U-Net |
| Object detection (light weight) | Open Images | 34.0% mAP | RetinaNet |
| Object detection (heavy weight) | COCO | 0.377 Box min AP and 0.339 Mask min AP | Mask R-CNN |
| Speech recognition | LibriSpeech | 0.058 Word Error Rate | RNN-T |
| NLP | Wikipedia | 0.72 Mask-LM accuracy | BERT-large |
| Recommendation | Criteo 4TB multi-hot | 0.80275 AUC | DLRM-DCNv2 |
Fig. 1. MLPerf Training v3.1 benchmarks that QCT submitted
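To illustrate how one of these quality targets is evaluated, the sketch below computes the Dice similarity coefficient between a predicted and a reference segmentation mask, the metric behind the 0.908 Mean DICE target for the medical image segmentation benchmark. This is a generic NumPy sketch of the metric, not the MLPerf reference implementation; the array shapes and names are illustrative.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |P ∩ T| / (|P| + |T|); 1.0 means perfect overlap.
    `eps` guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy 4x4 masks (illustrative only): prediction misses one target pixel.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_score(pred, target), 3))  # 2*3 / (4+3) ≈ 0.857
```

In the actual benchmark, a submission passes once the mean Dice score over the validation volumes reaches the 0.908 target.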
The QuantaGrid D74H-7U is an 8-way GPU server equipped with the NVIDIA HGX H100 8-GPU Hopper SXM5 module, making it an ideal choice for compute-intensive AI training. With innovative hardware design and software optimization, the QuantaGrid D74H-7U consistently delivers cutting-edge training performance.
The QuantaGrid D54U-3U, powered by 4th Gen Intel Xeon Scalable processors, is a 3U system that accommodates up to four dual-width or eight single-width accelerator cards, along with 32 DIMM slots. This flexible architecture can be tailored to a wide range of AI/HPC applications. In this round, the QuantaGrid D54U-3U, configured with four NVIDIA H100 PCIe 80GB accelerator cards, achieved outstanding performance.
QCT remains committed to delivering comprehensive hardware systems, solutions, and services to academic and industrial users. We are also dedicated to transparency, openly sharing our MLPerf results, both training and inference, with the public. For more detailed information, visit the official MLPerf Training results.