From Core-out to Edge-in
Traditional network architecture was designed from the core out. Services are centralized in the network’s core and accessed across the network via advanced connectivity infrastructure. This design has been fine for some applications, but it may not meet the needs of new and emerging services that are becoming part of our lives – some of which are already here. Applications such as artificial intelligence, autonomous cars, video streaming, online gaming, and new IoT and other smart devices are all affected by latency. Latency depends on how far the user is from the service point, meaning that services hosted in the network’s core can have higher latency than those positioned closer to the user, significantly affecting user experience.
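The distance-latency relationship can be sanity-checked with a back-of-the-envelope calculation. The Python sketch below is a minimal illustration, assuming typical single-mode fiber and purely illustrative distances (a distant core data center vs. a nearby metro PoP); it ignores queuing, processing, and routing overhead, which add further delay on top of propagation.

```python
# Rough one-way propagation delay over fiber. Distances are
# illustrative assumptions, not measurements from any real network.

SPEED_OF_LIGHT_KM_S = 300_000      # speed of light in vacuum
FIBER_REFRACTIVE_INDEX = 1.47      # typical single-mode fiber

def propagation_delay_ms(distance_km: float) -> float:
    """One-way fiber propagation delay in milliseconds."""
    speed_in_fiber = SPEED_OF_LIGHT_KM_S / FIBER_REFRACTIVE_INDEX
    return distance_km / speed_in_fiber * 1000

core_rtt = 2 * propagation_delay_ms(2000)  # user to a distant core DC
edge_rtt = 2 * propagation_delay_ms(50)    # user to a metro-edge PoP

print(f"core round trip: {core_rtt:.1f} ms")
print(f"edge round trip: {edge_rtt:.2f} ms")
```

Even before queuing and processing delays are counted, moving the service point from a distant core site to a metro PoP cuts propagation latency by more than an order of magnitude.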
Service endpoints typically depend on compute and storage resources housed in large, centralized data centers. Transitioning from this centralized model to a decentralized one, to support a distributed service model, requires relocating these compute and storage resources to the network’s metropolitan areas – the edge.
The Challenge and the Opportunity
Is this another network evolution similar to what we have experienced every few years for the last three decades?
Well, no – this is a dramatic change that is already happening everywhere.
In the past few years, services have become more user-focused. In parallel, cloud-native, real-time, performance-oriented applications have been developed and are planned for launch, in part over newly deployed 5G networks.
These applications are what’s known as “edge-native”, since they are developed to run close to end users. Today, metro edge points must accommodate a wide range of use cases, including business, mobile, and residential services. This requires supporting high-speed broadband, VPNs, mobile backhaul, cloud services, video streaming, and Generative AI. Additionally, we’re on the brink of introducing new business and industrial AI tools, advanced 5G IoT and smart devices, and virtual reality technologies. These applications are diverse, demanding real-time performance and high bandwidth. That burden falls on the edge, as the traditional core-out architecture cannot deliver the required performance.
Edge-in is not only the application
To make all this possible, services must “park” themselves at the network edge. From an operational aspect, the edge will need to shape itself to host such services. Unlike the centralized core-out model, the edge doesn’t necessarily have what it takes for this in terms of real estate, power budget, management and operational overhead, as well as a range of other small issues (i.e., issues that are considered minor when aggregated in a centralized data center, but a huge burden once they are spread across hundreds or thousands of locations).
The metro is built to hold network elements, such as DriveNets Network Cloud, located in the network PoPs and acting as high-capacity routing network nodes. Built as a cloud-native/edge-native function, it becomes the infrastructure for the incoming edge services that are set to fill network edges with in-network compute resources. Such a setup eliminates the overhead involved in building micro data centers in the numerous network PoPs.
According to a report by MarketsandMarkets, the global edge computing market is projected to grow at a compound annual growth rate (CAGR) of 15.7% from 2023 to 2028. The drivers are here, the edge is already being adopted, and it is forecast to grow every year. Now the question is: is your network ready to meet these workloads on the edge of town?
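As a quick back-of-the-envelope check (the calculation below is ours, not from the report itself), a 15.7% CAGR compounds to roughly a doubling of the market over the five-year period:

```python
def cagr_multiple(cagr: float, years: int) -> float:
    """Total growth multiple implied by a compound annual growth rate."""
    return (1 + cagr) ** years

# 15.7% CAGR over the 5 years from 2023 to 2028
print(round(cagr_multiple(0.157, 5), 2))  # → 2.07, i.e. the market roughly doubles
```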
Or, in other words, is your network a DriveNets Network Cloud?
Is Your Network Ready to Meet Workloads on the Edge?
As more and more latency-sensitive applications emerge – networking and compute functions need to move towards the network edge – closer to the end user. This trend also optimizes traffic distribution across the network as content-delivery networks become more distributed and content instances also reside at the network edge. The new network edge requires tight integration of edge computing and edge networking. This means that the network edge should be able to efficiently handle applications that require computing resources, as well as meet a growing volume of traffic.
Transforming the Network at the Edge
The metro-edge was designed to hold network elements located in the network PoPs, acting as high-capacity routing network nodes. Building out the telco-edge with a cloud-native approach provides the infrastructure for the incoming edge services that are set to fill network edges with in-network compute resources. Such a setup eliminates the overhead that would be involved in building micro data centers in the numerous network PoPs.
A Disaggregated, Cloud-Native Infrastructure Model
DriveNets Network Cloud addresses these challenges by combining networking and compute resources over a shared, cloud-like infrastructure that allows operators to put greater functionality at the network edge, even in sites with space and power limitations. This allows network and cloud operators to improve their service quality, enable new, latency-sensitive services and optimize the traffic distribution across their network.
Scale Efficiently and Cost-effectively at the Edge
The DriveNets approach to the telco-edge makes edge computing and edge networking highly scalable and cost- and space-efficient, allowing the control plane to scale independently of the data plane to support extensive VPNs at the edge.
Operators can avoid monolithic router interdependencies and quality issues by disaggregating the control and data planes, with hardware and software optimized for each networking plane.
This cloud-native architecture is future-proof and based on open standards, combining Open Compute Project (OCP) DDC-compliant hardware with software built on a microservices architecture.
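The control/data-plane split described above can be sketched conceptually. The toy Python model below illustrates the principle only – the class names and structures are invented for this example and do not reflect DriveNets’ actual implementation or APIs. The point is that route computation (control plane) is a single logical function, while forwarding capacity (data plane) scales by simply adding elements to the cluster.

```python
# Conceptual sketch of control/data-plane separation: one control plane
# computes routes and programs forwarding entries into many independent
# data-plane elements (e.g., white boxes in a cluster).
# All names here are illustrative, not a real vendor API.

from dataclasses import dataclass, field

@dataclass
class DataPlaneElement:
    name: str
    fib: dict = field(default_factory=dict)  # forwarding table

    def install_route(self, prefix: str, next_hop: str) -> None:
        self.fib[prefix] = next_hop

class ControlPlane:
    """Computes routes once, then programs every data-plane element."""
    def __init__(self, elements: list[DataPlaneElement]):
        self.elements = elements

    def advertise(self, prefix: str, next_hop: str) -> None:
        for element in self.elements:
            element.install_route(prefix, next_hop)

# Scaling the data plane means adding boxes; the control plane
# scales independently of how many boxes it programs.
boxes = [DataPlaneElement(f"box-{i}") for i in range(3)]
cp = ControlPlane(boxes)
cp.advertise("10.0.0.0/8", "203.0.113.1")
print(all(b.fib["10.0.0.0/8"] == "203.0.113.1" for b in boxes))  # prints True
```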
Operators are Embracing Disaggregated, Cloud-Native Architectures
More and more operators are embracing disaggregated, cloud-native architectures as part of their next-generation network deployments. DriveNets was recognized as one of the industry’s leading vendors for Disaggregated Distributed Backbone Routing (DDBR) solutions by the operator-driven Telecom Infra Project (TIP). KDDI, which also chairs the TIP Disaggregated Open Router (DOR) subgroup, announced in 2022 the first TIP DDBR-compliant deployment and the first disaggregated IP infrastructure in Japan.
Also in 2022, AT&T announced the transition of more than 52% of its core network to a disaggregated whitebox solution powered by DriveNets software. In AT&T’s network – the largest backbone network in the US – DriveNets Network Cloud software runs over multiple large clusters of white boxes in core locations.
AT&T’s distributed disaggregated chassis (DDC) design, built on Broadcom’s powerful Jericho2 family of merchant chips, aims to define a standard set of configurable building blocks on less costly service-class routers, ranging from single-line-card systems to large, disaggregated chassis clusters.
DriveNets Network Cloud offers operators vendor diversity on all fronts: they can choose any ODM, hardware, ASIC, and optics. Even the DriveNets network operating system can be replaced at any time without jeopardizing hardware investments. This flexibility enables operators to tailor their component choices for various edge locations based on their actual needs.
Moving to the Edge
The deployment of open, disaggregated, cloud-native routing platforms, coupled with open and efficient long-haul 400G/800G optical transport such as ZR+, is exactly what large-scale operators require to manage the tsunami of demand that will be generated at the edge by services like 5G, fiber-based broadband, GenAI and entertainment content services – all while reducing capex investment.
A disaggregated, cloud-native model will allow operators to launch and manage services much more quickly, especially at the latency-sensitive telco-edge.
Additional Resources for Network Edge
Download White Paper
Edge Networking Solution for Service Providers