Quanta Cloud Technology (QCT) has once again demonstrated its leadership in AI infrastructure by delivering outstanding results in the latest MLPerf Training v5.0 benchmark suite, released by MLCommons. This round marks a significant milestone in AI benchmarking, with a record number of submissions and the introduction of the most demanding AI workloads to date.
In this round of submissions, QCT delivered results across several key benchmarks (Stable Diffusion, Llama 2 70B LoRA, and RetinaNet), showcasing improvements in both efficiency and scalability for two of its latest 7U QuantaGrid server systems:
- QuantaGrid D74H-7U: Designed for large-scale AI workloads, this powerful 7U system supports dual 5th/4th Gen Intel® Xeon® Scalable processors and the NVIDIA HGX™ H200 platform with eight H200 SXM5 GPUs. Purpose-built to tackle the most complex AI and HPC workloads, the D74H-7U also supports nonblocking GPUDirect® RDMA and GPUDirect® Storage.
- QuantaGrid D75T-7U: Built for massive AI models and equipped with dual AMD EPYC™ 9005 Series processors, this system supports eight AMD Instinct™ MI325X GPUs. The D75T-7U is engineered for unparalleled AI training efficiency. With eight x16 PCIe® Gen 5 host I/O connections and AMD Infinity Fabric™ mesh interconnect between GPUs, the system delivers high bandwidth and low latency—eliminating data transfer bottlenecks in large-scale model training.
The MLPerf Training v5.0 results underscore QCT’s engineering excellence and its ability to deliver optimized AI infrastructure for enterprise and research applications. By participating in MLPerf, QCT keeps its solutions transparent and enables customers to make informed decisions based on reliable, peer-reviewed benchmarking.
As AI continues to evolve, QCT remains committed to delivering the infrastructure that powers tomorrow’s breakthroughs, whether that means training trillion-parameter models or deploying AI at the edge. You can find the latest MLPerf Training v5.0 results here: https://mlcommons.org/benchmarks/training/