5G is often hyped as the defining technology for the new edge – but the edge must also be inclusive of a wide basket of traditional networking and IT technologies, including LTE, unlicensed wireless (often referred to as “private wireless”), industrial equipment, and traditional enterprise networking. 5G has been a driving force in rethinking the edge because of its need for scale, which has pushed service providers toward the disaggregated cloud model for providing network, compute, and storage resources. The edge itself must now be automated and abstracted to support the scale these new services will demand.
Download the IDC Analyst Report: Exploring the Benefits of Disaggregated, Cloud-Native Architectures for Telco Cloud Transformation
Key Needs of the New Network Edge
In my discussions with service providers and enterprises, it’s becoming apparent that crucial needs are emerging as organizations build out the edge. At the top of the list, organizations would like to use compute resources located on premises or close to customers to deliver real-time data processing for latency-sensitive applications. The most common examples are edge-compute resources placed in retail locations to provide quick access to data, or industrial automation requiring factories and warehouses to quickly access and process data on site.
Applications running at the edge will also need to access data and other services in private or public clouds – meaning that seamless networking with cloud resources is necessary. Edge systems must be built to optimize data access and compute tasks depending on specific requirements. The combination of edge compute and cloud compute enables businesses to maximize their overall operations.
Developers want to build edge applications using modern principles, such as continuous integration and continuous deployment (CI/CD). An important, early category of “killer apps” for edge compute will be the development and provisioning of solutions that make the new edge experiences possible. Some of the potential killer apps emerging include real-time video analysis, augmented reality/virtual reality (AR/VR), smart cities, smart factories, and retail analytics.
All of this is coming to fruition as demand for edge compute grows in more locations, gradually stepping deeper into the network and closer to subscribers at the same time. The question is, what is the best way to deliver this newly required compute power at the lowest cost? In some cases, customers are looking to combine networking and edge compute by offloading compute tasks to network cards – SmartNICs built around a network processing unit (NPU) or a data processing unit (DPU). But this also adds overhead and can be expensive – it will only be done where high-demand applications justify the extra cost.
Likewise, deploying a mini server farm in such locations carries the same overhead of redundancy and management as in larger sites, so organizations will look for more innovative solutions. One such solution is to repurpose existing idle compute resources in the network itself, consumed as a CPU pool, and use them to expand the edge into numerous network locations. This is one of the key innovations that DriveNets is bringing to market with the concept of running services on a Distributed Disaggregated Chassis (DDC).
The New Edge Architecture
All of this means that we are moving to a new edge architecture – one that is flexible, cost-effective, and connected to the cloud. Building out pervasive, distributed edge compute means providing hardware consistency, reliability, and cost efficiency. Edge systems must take the path of the hyperscale cloud buildout – low-cost commodity hardware combined with high-powered software automation and distributed orchestration. Any edge device must connect to any cloud or any data center using a fully distributed, software-controlled network. And the system must be capable of finding compute power wherever it makes sense.
The traditional enterprise networking approach won’t work here – and neither will traditional service-provider infrastructure. Enterprises and service providers are converging around a flexible and scalable cloud-native approach: cloud-native functions (CNFs) that can leverage the cloud model.
Here is what the new edge requirements look like:
- Automated orchestration. Powerful automation software is needed to deploy thousands of new devices connected by technologies such as 5G, supporting use cases like autonomous vehicles and industrial digital-transformation initiatives. Changes will need to be orchestrated with rapid software automation.
- Open hardware. The new edge architecture should be open, modular, and based on commercial off-the-shelf hardware. Combined network and compute devices can come in small form factors that can be deployed either directly at the edge or scaled together in clusters in the cloud. The important thing is that the hardware is standardized, economical, and open. Large, proprietary hardware installations will be shunned in favor of more dynamic, open hardware that is modular in nature and can be scaled on demand by simply adding more hardware.
- Distributed compute. Software and automation will balance the need for compute power and determine where it’s most economical. Organizations won’t necessarily be deploying new edge server farms to fuel all the services, because the edge has multiple cost vectors such as real estate, power, and capex. In many cases, organizations will seek to minimize new capex by looking for ways to leverage existing edge compute resources – for example by creating an abstraction of all CPU resources using a cloud-based platform.
- Network/compute convergence. The new network edge offers the opportunity to build more flexible hardware (white boxes) that can be configured to run many different functions or services with disaggregated software. In addition, compute and network resources can be combined to conserve valuable real estate and power resources. This will drive service providers and enterprises away from more specialized appliances focused on single tasks, such as traditional firewalls or load balancers.
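To make the distributed-compute requirement above concrete, the placement decision can be modeled as a simple cost comparison across candidate sites: pick the cheapest location that still meets the workload’s latency bound and has spare capacity. This is a minimal sketch with hypothetical site names and cost figures, not any vendor’s actual scheduler:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float         # round-trip latency to the end user
    cost_per_cpu_hour: float  # blended real-estate + power + capex cost
    free_cpus: int            # idle CPU capacity available in the pool

def place(sites, cpus_needed, max_latency_ms):
    """Pick the cheapest site that meets the latency bound and has capacity."""
    eligible = [s for s in sites
                if s.latency_ms <= max_latency_ms and s.free_cpus >= cpus_needed]
    if not eligible:
        return None
    return min(eligible, key=lambda s: s.cost_per_cpu_hour)

# Hypothetical inventory: a cheap regional data center plus small edge pools.
sites = [
    Site("regional-dc", latency_ms=25.0, cost_per_cpu_hour=0.03, free_cpus=512),
    Site("retail-edge", latency_ms=4.0, cost_per_cpu_hour=0.09, free_cpus=16),
    Site("factory-edge", latency_ms=2.0, cost_per_cpu_hour=0.12, free_cpus=8),
]

# A latency-sensitive workload lands at the edge...
print(place(sites, cpus_needed=4, max_latency_ms=5.0).name)    # retail-edge
# ...while a latency-tolerant batch job goes to the cheaper regional site.
print(place(sites, cpus_needed=64, max_latency_ms=50.0).name)  # regional-dc
```

The point of the sketch is that the same abstraction of pooled CPU resources serves both cases: software, not fixed appliance placement, decides where the work runs.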
While edge compute was initially conceptualized around low-latency needs, Futuriom believes the real advantage of the network edge is the flexibility and automation it provides in deploying new services wherever they are needed, using standardized, affordable hardware with cloud-based networking and automation software.
Many network edge projects have failed because of attempts to deploy proprietary platforms, complicated architectures, and high costs. Following the cloud will be key: leveraging existing popular cloud-native technologies such as Kubernetes, APIs, and public cloud services and extending them out to the edge.
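One common pattern for extending Kubernetes to the edge is to label edge nodes and steer latency-sensitive workloads to them with a nodeSelector. The sketch below simply builds such a Deployment manifest as a plain Python dict; the label key `topology.example.com/tier`, image, and names are hypothetical placeholders, not a prescribed convention:

```python
import json

def edge_deployment(name, image, replicas=2, tier="edge"):
    """Build a Kubernetes Deployment manifest that pins pods to labeled edge nodes."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    # Hypothetical label; operators would first tag nodes, e.g.
                    # `kubectl label node edge-01 topology.example.com/tier=edge`
                    "nodeSelector": {"topology.example.com/tier": tier},
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

manifest = edge_deployment("video-analytics",
                           "registry.example.com/video-analytics:1.0")
print(json.dumps(manifest, indent=2))
```

The appeal of this approach is that the same API, tooling, and CI/CD pipelines used in the central cloud carry over unchanged to edge locations.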
Rest assured – the new network edge is taking shape. But the most successful networking and compute edge deployments will solve these challenges by simultaneously delivering new application capabilities and radically lowering the cost of deploying these technologies.