Hyperscale Data Centers and Open Compute Platforms – That Is What 5G Technology Demands
Italian newspaper journalist Guiomar Parada and QCT’s Maurizio Riva met at this year’s Mobile World Congress (MWC) in Barcelona and began a conversation about the advantages, challenges and demands of 5G technology, and about what QCT has to offer. The dialogue continued in a telephone interview Guiomar conducted with Maurizio and his European team.
Please find here the English version of the written interview:
Guiomar Parada: Which developments led QCT to design and implement the first fully virtualized and cloud-native mobile network?
Maurizio Riva: Telco operators have long been looking for this type of infrastructure and platform, which are optimized for performance and are built based on open technologies, to allow 5G applications and services. Thus, together with Intel, Rakuten and Red Hat we went on to develop a fully virtualized 5G mobile network, the first of this type. We took advantage of our experience in cloud computing to create a completely cloud-based and fully automated network, both for the network itself and for services.
What is the rationale for big customers, like carriers getting ready for 5G, to invest in this kind of infrastructure?
For network operators moving to optimize and make their services more efficient, hyperscale data centers*, open compute platforms and infrastructure standardization are not an option – they are the only option. If for nothing else, then for cost reduction: bear in mind that in today’s data centers, energy can account for up to half of all expenditures. This is a reality, and we must therefore be absolutely rigorous on this point. Carriers are one market that will contribute greatly to making [cloud computing] infrastructures more efficient. For this reason, we consider this market very important and strategic. In the cloud market, QCT was a disruptor, and that is what we want to be in the telco market as well.
So, energy efficiency is not only critical for telcos?
Yes. This is a trend across the server industry: as it moves towards ever greater density, it also seeks lower energy consumption and, among other things, ever faster and simpler maintenance. That also applies to the design of the servers themselves, with new solutions that make them more energy efficient.
Think about this: some of our customers have 50,000 server nodes. If you save even just one watt per node across 50,000 nodes, the savings add up to 50 kW of power. We have a very strict certification for efficiency: we use Platinum- and Titanium-class power supplies that make the server’s use of energy more efficient.
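The arithmetic behind that figure is simple, and it compounds over time. A minimal sketch (the node count and per-node saving are the ones quoted above; the annual projection is an illustrative extrapolation, not a figure from the interview):

```python
# Per-watt savings at hyperscale: 1 W saved per node, across 50,000 nodes.
NODES = 50_000
WATTS_SAVED_PER_NODE = 1

total_kw = NODES * WATTS_SAVED_PER_NODE / 1000  # continuous power reduction in kW
annual_kwh = total_kw * 24 * 365                # energy saved over one year, if sustained

print(f"Power reduction: {total_kw:.0f} kW")
print(f"Energy saved per year: {annual_kwh:,.0f} kWh")
```

Sustained around the clock, that single watt per node is on the order of 438,000 kWh per year before even counting the cooling energy no longer needed to remove the waste heat.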
Fan cooling is also very important. Our certification guarantees that no single fan exceeds an established percentage of the server’s total consumption.
This all reflects directly on server design. I was discussing it an hour ago with a client. Too many cables, for example, obstruct the airflow and force the fans to spin faster; fewer cables, conversely, allow the fans to run at lower speed and use less power. At Quanta we think about this very carefully, because efficiency in power consumption is critical. Many cloud service providers for the telecommunications industry are already benefiting from a design that plans carefully, with energy efficiency as a criterion, where to place the cables. You could route them all at the back of the cabinets, or all at the front, just to avoid complex architectures, because cables crossing each other inside the cabinet interfere with the airflow.
The other trend we consider important is liquid cooling. Here too, we are constantly working on solutions for CPUs and GPUs. These are the types of servers that cloud service providers are increasingly using, and not just for artificial intelligence workloads. In the latter case, because the amount of heat the servers must dissipate is high – precisely due to the high concentration of performance inside the server – it is sometimes necessary to cool them with liquid.
I recently saw a case – Metro de Madrid – where AI is used to optimize the operation of the cooling fans in order to save power. Is this a trend too?
Absolutely, this is the path and what we see as a general trend—and in our case, specifically, in the cloud service provider industry. Today’s cloud service providers offer artificial intelligence instances. These are essentially algorithms that optimize functionality: sensors analyze the temperature and humidity of the air, and the system then calculates the speed at which the fans need to turn at a given moment. It can be done with AI deep learning.
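The sensor-driven control loop described above can be sketched in a few lines. This is a deliberately simplified, rule-based stand-in – the function name, thresholds and coefficients are all hypothetical, and a production system of the kind discussed would replace this mapping with a trained model:

```python
# Hypothetical sketch: map temperature and humidity readings to a fan duty cycle.
# All thresholds below are illustrative assumptions, not QCT parameters.

def fan_duty_cycle(temp_c: float, humidity_pct: float) -> float:
    """Return a fan duty cycle clamped between 0.2 (idle) and 1.0 (full speed)."""
    base = (temp_c - 18.0) / 20.0             # ~18 °C -> idle, ~38 °C -> full speed
    humidity_bias = 0.1 * (humidity_pct / 100.0)  # humid air: bias speed up slightly
    return min(1.0, max(0.2, base + humidity_bias))

print(fan_duty_cycle(25.0, 40.0))  # moderate intake temperature
print(fan_duty_cycle(38.0, 80.0))  # hot and humid: fans at full speed
```

A learned model would fit this temperature-to-speed mapping from historical sensor and power data instead of hand-picked constants – which is where the power savings the interview mentions come from.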
Do you see an evolution in which new operators with operations fully based on the cloud have an advantage over legacy ones because they operate with much higher efficiency?
When we talk about 5G, a first hurdle is the license – and we see that several auctions are taking place in Europe and elsewhere. Then there is the hurdle of installing the antennas. Here, legacy operators have the advantage of having already made the investments for the license and for the antenna infrastructure. A few of them are even signing mutual agreements to share their antennas.
Today’s big discussion in the telecommunications sector is how to preserve investments and, at the same time, how to plan for cost savings in the future. Considering the latter aspect, I am sure that a company using solutions like ours grants itself great opportunities.
How do you see telcos managing globally the evolution of the 5G infrastructure?
First, we see that in the world market, China and the US are moving faster than Europe, perhaps due to Europe’s fragmentation into many nations and many carriers.
It is inevitable that there will be some resistance among traditional carriers, because we are talking about completely overhauling the infrastructure to reduce costs. In Europe, on top of this, they are also bound by laws and labor regulations. It is not easy.
5G substantially increases the need for processing data near the source…
Think about autonomous driving, or the fact that more and more devices will be distributed and will therefore need concentrated local computing capacity, to avoid relaying large quantities of data towards the computing core and risking bottlenecks in the network. And you need low latency. For this reason, you work more and more at the edge. This is a very important aspect of innovative 5G platforms. The world’s first fully cloud-based infrastructure for 5G networks, and the NGCO (Next Generation Central Office)**, which also won the Computex 2019 Best Choice Award in May in Taipei, Taiwan, provide a fiber-rich edge design for agile mobile networks and the associated infrastructure and services. Telco service providers can now transform their edge networks for faster service, efficiency and flexibility, looking ahead to the scale of devices the Internet of Things (IoT) will connect.
* Conventionally, data centers with more than 5,000 servers and over 10,000 square feet.
** The Next Generation Central Office (NGCO) Solution is based on Intel® technology.