The latest MLPerf Inference v5.1 results have once again highlighted the rapid evolution of AI infrastructure, and QCT has submitted a powerful lineup of its QuantaGrid servers designed to meet the demands of modern AI and HPC workloads. From enterprise-friendly air-cooled systems to massive GPU-accelerated platforms, QCT’s QuantaGrid offerings demonstrate versatility, scalability, and performance leadership.

QuantaGrid D75E-4U is a standout in two configurations:
- In a CPU-only configuration, the D75E-4U is optimized for enterprises seeking a balance between compute power and energy efficiency. Supporting dual Intel® Xeon® 6 processors, it delivers robust performance for inference tasks.
- The D75E-4U can also be configured with up to eight NVIDIA Blackwell Ultra GPUs, making it ideal for scalable AI and HPC workloads. Its low-power, modular design ensures operational efficiency without compromising acceleration capability.

QuantaGrid D74H-7U is a powerhouse for organizations tackling large-scale AI workloads:
- Equipped with 8x NVIDIA H200 SXM5 GPUs
- Featuring NVIDIA NVLink™ for ultra-fast GPU-to-GPU communication
- Supporting GPUDirect Storage for low-latency data access
This 7U system is engineered for demanding AI training tasks, including generative AI and large language models (LLMs), offering unmatched throughput and scalability.

QuantaGrid D75T-7U brings AMD’s latest innovations to the forefront:
- Powered by 2x AMD EPYC™ 9004/9005 Series processors
- Supports 8x AMD Instinct™ MI325X GPUs
This server is built for massive AI models and scientific computing, delivering exceptional performance for both training and inference. It is one of the most powerful platforms available for enterprises pushing the boundaries of generative AI and HPC.
By participating in MLPerf Inference v5.1, QCT continues to demonstrate its commitment to transparency, performance benchmarking, and innovation in AI infrastructure. These results not only validate the capabilities of QCT’s server platforms but also provide enterprises with trusted data to guide their infrastructure investments.
Whether you’re building a scalable AI data center or deploying high-performance computing clusters, QCT’s MLPerf-validated systems offer the reliability and power needed to stay ahead in the AI race. For MLPerf Inference results, visit https://mlcommons.org/benchmarks/inference-datacenter/. For more information about the tested systems, visit www.QCT.io.