April 6, 2022

Head of Product Strategy

Overcome Challenges at the Telco-Edge!

With the deployment of network functions and services at the telco-edge cloud creating cost, scalability, and service-agility challenges for service providers, Lx explored how building out the telco-edge with a cloud-native infrastructure allows operators to overcome those challenges.

Is Your Network Ready to Meet Workloads on the Edge?

As more and more latency-sensitive applications emerge, networking and compute functions need to move toward the network edge, closer to the end user. This trend also optimizes traffic distribution across the network, as content-delivery networks become more distributed and content instances also reside at the network edge. The new network edge requires tight integration of edge computing and edge networking. This means the network edge should be able to handle compute-hungry applications efficiently while also carrying a growing volume of traffic.
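
To put rough numbers on the latency argument, here is a minimal Python sketch (illustrative assumptions only) comparing the round-trip propagation delay of serving a user from a distant centralized data center versus a nearby edge site. It uses the common rule of thumb of roughly 5 microseconds of one-way fiber propagation per kilometer and ignores queuing, processing, and access-network delays; the distances are hypothetical.

```python
# Illustrative only: round-trip propagation delay for centralized vs. edge placement.
# Assumes ~5 microseconds of one-way propagation per km of fiber (light travels at
# roughly 200,000 km/s in fiber) and ignores queuing, processing and access delays.

ONE_WAY_US_PER_KM = 5.0  # microseconds of one-way delay per km of fiber

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over fiber."""
    return 2 * distance_km * ONE_WAY_US_PER_KM / 1000.0

# Hypothetical serving distances
for label, km in [("centralized DC, 1,500 km", 1500),
                  ("regional DC, 400 km", 400),
                  ("edge PoP, 50 km", 50)]:
    print(f"{label:25s} -> ~{propagation_rtt_ms(km):5.2f} ms RTT (propagation only)")
```

Even before processing and queuing are added, distance alone can consume most of a single-digit-millisecond latency budget, which is why compute has to sit closer to the user.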

Transforming the Network at the Edge

To make all this possible, services must “park” themselves at the edge. From an operational perspective, the edge will need to shape itself to host such services. Unlike the centralized, core-out model, the edge does not necessarily have what it takes in terms of real estate, power budget, and management and operational overhead, along with a range of other small issues (i.e., issues that seem minor when aggregated in a centralized data center but become a huge burden once spread across hundreds or thousands of locations).

The metro network was built to host network elements in the network PoPs and to act as a high-capacity routing node. Building out the telco-edge with cloud-native/edge-native functions provides the infrastructure for the incoming wave of edge-native services, which will flood network edges with demand for in-network compute resources. Such a setup eliminates the overhead that would be involved in building micro data centers in the numerous network PoPs.

A Disaggregated, Cloud-Native Infrastructure Model

DriveNets Network Cloud addresses these challenges by combining networking and compute resources over a shared, cloud-like infrastructure that allows operators to place greater functionality at the network edge, even in sites with space and power limitations. This allows network and cloud operators to improve their service quality, enable new latency-sensitive services, and optimize traffic distribution across their network.

Scale Efficiently and Cost-effectively at the Edge

The DriveNets approach to the telco-edge makes edge computing and edge networking highly scalable and cost- and space-efficient, allowing the control plane to scale independently of the data plane in order to support extensive VPN services at the edge.

Operators can avoid the interdependencies and quality issues of monolithic routers by disaggregating the control and data planes, with hardware and software optimized for each networking plane.

This cloud-native architecture is future-proof and based on open standards, running a microservices-based software stack over Open Compute Project (OCP) DDC-compliant hardware.
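
As a loose illustration of what separating the control and data planes buys an operator, here is a minimal Python sketch of a disaggregated cluster in which forwarding capacity (white boxes) and control-plane capacity (containerized routing-service instances) are grown independently. The class names, attributes, and numbers are hypothetical and are not a DriveNets API.

```python
# Hypothetical toy model of a disaggregated cluster in which data-plane capacity
# (white boxes) and control-plane capacity (routing-service instances) scale
# independently. Not a DriveNets API; names and figures are illustrative.
from dataclasses import dataclass, field

@dataclass
class ForwardingBox:
    """A white-box data-plane element (assumed fixed forwarding capacity)."""
    name: str
    capacity_gbps: int

@dataclass
class ControlInstance:
    """A containerized control-plane instance (assumed fixed VPN headroom)."""
    name: str
    max_vpn_instances: int

@dataclass
class DisaggregatedCluster:
    boxes: list = field(default_factory=list)
    controllers: list = field(default_factory=list)

    def add_forwarding_box(self, box: ForwardingBox) -> None:
        self.boxes.append(box)          # grows the data plane only

    def add_control_instance(self, ctrl: ControlInstance) -> None:
        self.controllers.append(ctrl)   # grows the control plane only

    @property
    def total_capacity_gbps(self) -> int:
        return sum(b.capacity_gbps for b in self.boxes)

    @property
    def total_vpn_headroom(self) -> int:
        return sum(c.max_vpn_instances for c in self.controllers)

# Example: add VPN (control-plane) headroom without touching the data plane.
cluster = DisaggregatedCluster()
cluster.add_forwarding_box(ForwardingBox("box-1", capacity_gbps=4000))
cluster.add_control_instance(ControlInstance("ctrl-1", max_vpn_instances=500))
cluster.add_control_instance(ControlInstance("ctrl-2", max_vpn_instances=500))
print(cluster.total_capacity_gbps, cluster.total_vpn_headroom)  # 4000 1000
```

The point of the toy model is only that adding a second control instance raised the VPN-handling headroom without adding or touching any forwarding hardware, and the reverse holds when only more forwarding capacity is needed.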

Operators are Embracing Disaggregated, Cloud-Native Architectures

More and more operators are embracing disaggregated, cloud-native architectures as part of their next-generation network deployments. DriveNets was recently recognized as one of the industry’s leading vendors for Distributed Disaggregated Backbone Router (DDBR) solutions by the operator-driven Telecom Infra Project (TIP). KDDI, which also chairs the TIP Disaggregated Open Router (DOR) subgroup, announced the completion of commercial testing of a DriveNets DDBR solution as a gateway peering router, demonstrating that it is capable of being deployed in KDDI’s production networks.

Citing the strain the pandemic put on AT&T’s network, Andre Fuetsch, Executive VP and Chief Technology Officer of AT&T Services, shared that “Our global networks carry more than 393 petabytes of network traffic on an average day. That’s up 20% as compared to pre-pandemic figures.”

In 2020, AT&T announced the deployment of the DriveNets solution throughout its core network, the largest backbone network in the US. DriveNets Network Cloud software now runs over multiple large clusters of white boxes in core locations of AT&T’s network.

AT&T’s distributed disaggregated chassis (DDC) design, built on Broadcom’s powerful Jericho2 family of merchant chips, aims to define a standard set of configurable building blocks for constructing lower-cost, service-provider-class routers, ranging from single line-card systems to large, disaggregated chassis clusters.
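
To make the building-block idea concrete, the short sketch below computes the aggregate client-facing capacity of a few hypothetical DDC configurations, from a single standalone forwarding box up to a large multi-box cluster. The per-box port counts and cluster sizes are placeholder assumptions for illustration, not the published DDC specification.

```python
# Illustrative only: aggregate client capacity of hypothetical DDC configurations
# built from identical white boxes. The per-box figures below are placeholder
# assumptions, not the published OCP DDC specification.

CLIENT_PORTS_PER_BOX = 40      # assumed client-facing ports per forwarding box
CLIENT_PORT_SPEED_GBPS = 100   # assumed speed per client port

def cluster_capacity_tbps(num_boxes: int) -> float:
    """Total client-facing capacity, in Tbps, of a cluster of identical boxes."""
    return num_boxes * CLIENT_PORTS_PER_BOX * CLIENT_PORT_SPEED_GBPS / 1000.0

for label, boxes in [("standalone system", 1),
                     ("small cluster", 4),
                     ("large cluster", 48)]:
    print(f"{label:18s} {boxes:2d} box(es) -> {cluster_capacity_tbps(boxes):6.1f} Tbps client capacity")
```

The appeal of the model is that capacity grows by adding boxes to the cluster rather than by replacing a chassis.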

Through partnerships with white box vendors including UfiSpace, Edgecore Networks, and Delta, DriveNets has already created an open ecosystem for disaggregation using flexible software and standard, commercial off-the-shelf white boxes. White boxes and disaggregation also hold the promise of breaking the vendor lock-in of traditional integrated vendor offerings.

Moving to the Edge

The deployment of open, disaggregated, cloud-native routing platforms, coupled with next-generation long-haul 400G optical transport, is exactly what large-scale operators need to handle the tsunami of demand that will be generated at the edge by services like 5G, fiber-based broadband, and entertainment content services.

A disaggregated, cloud-native model will allow operators to launch and manage services much more quickly, especially at the latency-sensitive telco-edge.

At MPLS SD & AI Net World 2022 in Paris, my colleague Lx Renner presented a session on ‘Building a Cloud-Native Telco-Edge.’  For more information on MPLS SD & AI Net World 2022 and Lx’s session at the show, visit: https://www.uppersideconferences.com/mpls-sdn-nfv/mplswc_2022_agenda_day_2.html#debut
