As data centers continue to evolve, innovation becomes a key to staying competitive. Quanta Cloud Technology (QCT) has taken a significant step forward by adopting the Data Center Modular Hardware System (DC-MHS). This move is set to revolutionize how data centers operate, bringing enhanced efficiency, scalability, and sustainability.
What is DC-MHS?
The Data Center Modular Hardware System (DC-MHS) is an initiative driven by the Open Compute Project (OCP) to standardize server hardware components. Traditional data centers often relied on custom hardware, requiring significant investment and long-term planning rather than reuse of existing product designs. DC-MHS, on the other hand, defines building blocks, or modules, that can be carried forward across future generations. This modular approach allows for greater flexibility and efficiency in data center operations. By using standardized hardware modules, data centers can easily scale their infrastructure, reduce costs, and minimize waste.
Benefits of DC-MHS
- Efficiency and Scalability: QCT’s DC-MHS solutions enable data centers to scale their operations seamlessly. Their modular designs allow for easy upgrades and expansions, ensuring that data centers can keep up with growing demands without significant overhauls.
- Sustainability: By adopting DC-MHS, QCT is contributing to more sustainable data center operations. The standardized hardware reduces waste and improves energy efficiency, aligning with global sustainability goals.
- Cost Reduction: The modular approach of DC-MHS helps in lowering infrastructure costs. Data centers can invest in only the necessary components and expand as needed, avoiding the high costs associated with traditional, monolithic systems.
QCT was an early adopter of DC-MHS, recognizing the adaptability and advantages of a modular system that works across various industries. Our server designs embrace the DC-MHS architecture, which is built around several standards, each focusing on a different aspect of hardware design, including power, connectivity, and system integration.
Fig. 1 QCT’s DC-MHS product journey for the latest HPMs
QCT utilizes the following DC-MHS framework standards:
- M-FLW: Full-Width Host Processor Module (HPM) – Optimized for the full width of a 19” EIA-310-D rack, but can also accommodate larger 21” racks.
Fig. 2 QCT’s M-FLW dual socket design for Intel Xeon 6
- OCP DC-SCM: Secure Control Module for data center management – Specifies an SCM designed to interface with an HPM, enabling a common management and security infrastructure across platforms within a data center.
Fig. 3 QCT’s use of DC-SCM from its predecessor up to version 2.0
- OCP NIC: Network Interface Card, for server communication – Specifies NIC card form factors targeting a broad ecosystem of NIC solutions and system use cases.
Fig. 4 QCT also has adopted OCP NIC from our 4th gen to 5th gen systems and onward
- M-CRPS: Modular Hardware System Common Redundant Power Supply – Specifies the power supply solutions and signaling expected to be used by DC-MHS-compatible systems.
Fig. 5 QCT’s use of M-CRPS for better management telemetry on the power supply side
- M-DNO: Density-Optimized HPM – Designed for different system configurations with partial-width (i.e., ½-width or ¾-width) form factors.
Fig. 6 QCT can share the same M-DNO motherboard for different system architectures
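The standards above compose into a complete system: an HPM paired with a management module, a NIC, and a power supply. As a conceptual illustration only, the toy model below sketches that composition; the class and field names are hypothetical and do not come from any OCP specification or QCT tooling.

```python
from dataclasses import dataclass

# Toy model of DC-MHS composition. Module names mirror the OCP
# standards discussed in the text; everything else is illustrative.

@dataclass(frozen=True)
class Module:
    standard: str   # e.g. "M-FLW", "DC-SCM", "OCP NIC", "M-CRPS"
    revision: str

@dataclass
class Server:
    hpm: Module   # host processor module (M-FLW or M-DNO)
    scm: Module   # security/management module (DC-SCM)
    nic: Module   # networking (OCP NIC)
    psu: Module   # power (M-CRPS)

    def bill_of_modules(self) -> list[str]:
        """List the standards this server is assembled from."""
        return [m.standard for m in (self.hpm, self.scm, self.nic, self.psu)]

server = Server(
    hpm=Module("M-FLW", "1.0"),
    scm=Module("DC-SCM", "2.0"),
    nic=Module("OCP NIC", "3.0"),
    psu=Module("M-CRPS", "1.0"),
)
print(server.bill_of_modules())  # ['M-FLW', 'DC-SCM', 'OCP NIC', 'M-CRPS']
```

The point of the sketch is the design property DC-MHS targets: because each slot accepts any module conforming to the same standard, swapping an M-FLW HPM for an M-DNO one leaves the rest of the system definition unchanged.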
The latest M-SDNO HPM, or “Scalable DNO”, prepares QCT for PCIe Gen 6, supports HPMs for both 19” and 21” racks, and expands upon the existing DNO/FLW frameworks. It supports board lengths from 305 mm and 335 mm up to 555 mm, which define common chassis intervals (CCIs). This lets us design for the smallest HPM length/width class that fits a given solution in a 1U, 2U, multi-node (2U2N or 2U4N), or AI server form factor.
Fig. 7 Adoption of M-SDNO for different CCIs and motherboard designs
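The “smallest class that fits” idea above can be sketched in a few lines. The CCI lengths (305, 335, and 555 mm) come from the text; the selection logic itself is a hypothetical illustration, not taken from the M-SDNO specification.

```python
# Illustrative only: pick the smallest common chassis interval (CCI)
# that accommodates a required board length. CCI values are the
# lengths named in the text; the rule itself is an assumption.

CCI_LENGTHS_MM = [305, 335, 555]

def smallest_fitting_cci(required_length_mm: float) -> int:
    """Return the smallest CCI length that fits the required board length."""
    for cci in sorted(CCI_LENGTHS_MM):
        if required_length_mm <= cci:
            return cci
    raise ValueError(f"No CCI accommodates a {required_length_mm} mm board")

print(smallest_fitting_cci(320))  # 335
```

For example, a 320 mm design would land in the 335 mm class rather than forcing the full 555 mm chassis depth, which is the efficiency argument behind sharing chassis intervals across form factors.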
Looking forward, QCT will continue to embrace DC-MHS as a cornerstone for future-proofing our system designs, whether for Intel or AMD on the x86 side or for future ARM-based processors. With the new M-SDNO, M-FLW and M-DNO aren’t going anywhere; we are embracing the additional specification for flexibility in depth and scalability, and for HPMs in 21” ORv3 racks. Meanwhile, we will continue to use M-FLW for 1U and 2U chassis with air and liquid cooling, and M-DNO for a broader range of chassis and server use cases, to capitalize on shared system design and efficiency.