As we move toward a hyper-connected world, every company that hasn't already added edge computing to its technology roadmap is doing so now. While many may think its value revolves only around IoT, it has much wider applications in driving customer experiences, which are becoming more immersive and therefore call for processing raw data quickly and delivering results in real time. Gartner has forecast that by 2021, 65 percent of global infrastructure service providers will generate 55 percent of their revenue through edge-related services that help customers create business moments at digital touchpoints. Gartner also noted that around 10 percent of enterprise-generated data is created and processed outside a traditional centralized data center or cloud, a share it expects to grow to 75 percent by 2022.
So, what is edge computing, and where does it fit in the scheme of every company? It entails performing computing and data processing away from the data center, at the so-called "edge," which is where the raw data is generated. The result is that decision-making performance is optimized while latency is minimized. But how does this differ from cloud computing, you may ask? While cloud computing is a centralized architecture, where all the data is aggregated back to a data center, edge computing is a distributed approach, where data processing and analytics are localized near the source of the data, making it easier to manage. When it comes to IoT, every software-as-a-service (SaaS) and data software company needs to analyze sensor data locally and aggregate only selected summaries in the cloud, much the way "big data" is handled. Similarly, AR/VR, autonomous cars, and a wide range of market verticals need the same treatment in order to deliver insights as close to real time as possible.
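To make the edge-versus-cloud split concrete, here is a minimal sketch of that pattern in Python: raw sensor readings are processed locally, and only a compact summary (plus any anomalies) is serialized for the cloud. All names, the data, and the threshold are illustrative assumptions, not part of any QCT or vendor API.

```python
import json
import statistics

def summarize_readings(readings, threshold=90.0):
    """Aggregate raw sensor readings at the edge and build a compact
    summary (plus any anomalous values) to forward to the cloud.

    The threshold and field names are hypothetical, for illustration only.
    """
    summary = {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
        "anomalies": [r for r in readings if r > threshold],
    }
    # Only this small JSON payload would travel over the backhaul link;
    # the raw stream stays on the edge node.
    return json.dumps(summary)

# Raw temperature samples stay local; only the summary is sent upstream.
raw = [71.2, 70.8, 95.3, 72.1, 70.9]
payload = summarize_readings(raw)
print(payload)
```

The point of the sketch is the shape of the traffic, not the math: the edge node absorbs the high-volume raw stream, and the cloud receives only the distilled result it needs for fleet-wide analytics.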
With edge computing, artificial intelligence (AI)-capable servers are placed closer to the end user, where large volumes of data are generated every day. For this type of streaming traffic, which is only increasing over time, the edge can alleviate bottlenecks while also enabling predictive maintenance. What is required is a higher level of AI processing than is currently available in most tech environments. In turn, smart cities can be enabled, and this goes hand in hand with AI growth in financial services, government, healthcare, and education; the sky's the limit (no pun intended).
This technology is unlike the early days of content delivery networks (CDNs), whose goal was to bring data closer to users by placing nodes at locations near the end user. Edge computing instead puts AI servers close to the data coming from users and sensors, allowing immediate decision making, which is why edge compute and AI services are being offered by all the major cloud vendors and are on the minds of all the leading telecom companies. According to Forrester's Analytics Global Business Technographics® Mobility Survey, 2018, 27 percent of global telecom decision makers surveyed said that their firms were either implementing or expanding edge computing in 2019. What does this imply, exactly? To put it bluntly, many of these vendors will require new wireless tools and updated skill sets to achieve this digital transformation, which will relieve the load on centralized computing resources.
QCT currently goes beyond its data center territory by offering its own edge computing server, the QuantaGrid SD2H-1U: a highly flexible edge server designed for resource-constrained sites, with an ultra-short 400mm chassis. It features a front-access design for edge computing, with flexible modules to fulfill different applications in a 1U form factor. Our inferencing server supports up to 2x single-width GPUs (NVIDIA T4) or 1x dual-width GPU (NVIDIA V100), based on the NVIDIA EGX platform, which NVIDIA announced at Computex 2019 in Taipei. The EGX platform combines a range of NVIDIA AI technologies (e.g., NVIDIA CUDA-X) to deploy AI enhancements quickly and securely from the cloud to the edge. Our SD2H-1U server takes advantage of this accelerated computing platform, enabling companies to harness AI at the edge and improving capabilities and performance on workloads such as CDN, as well as in the transportation, manufacturing, industrial, retail, healthcare, and agriculture markets. With 2x GPUs, the system is well suited to inference workloads such as media analytics or immersive media. Moreover, the SD2H-1U can support up to 2x NVMe drives for cache acceleration and also provides a -48VDC PSU option for flexible power support. The QuantaGrid SD2H-1U also supports Wi-Fi and LTE modules for uCPE applications. So, keep an eye out for this edge server, as it is to be released in the coming months.
Edge computing is certainly on the rise. Where the cloud typically comes into play when organizations require storage and more computing power to execute certain applications, edge computing is used in cases that need lower latency and local autonomous actions, such as autonomous cars and AR/VR. With the advent of self-driving cars and (industrial) IoT, there is an emphasis on the local processing of information for instantaneous decision making. Putting these processes at the edge therefore not only reduces backend traffic that could result in bottlenecks, but is also the right choice when data has to be received in the blink of an eye. Hence, every company needs to start thinking about what edge computing really means and where it fits in their roadmap, as it can transform a variety of businesses. Its benefits are waiting to be realized.
Are you interested in learning more about how companies benefit from cloud and edge computing? Contact us.