From Core-out to Edge-in
Traditional network architecture (and the way it shapes the internet as an infrastructure of services) is built from the core out. Services sit in the core of the network and are accessed from across the network over the connectivity infrastructure.
The resulting network is sufficient for many uses, but it may not be the network we need for the many new services entering our lives (some of which are already here). Applications such as video conferencing or streaming media are sensitive to latency, which is driven largely by the user's proximity to the service point. Positioning the service at the core of the network therefore increases both latency and latency variation compared with a service point closer to the user, and the user experience suffers as a result.
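To put a rough number on the proximity argument, here is a minimal back-of-the-envelope sketch. It considers only propagation delay over fiber (roughly 200,000 km/s, about two-thirds the speed of light in vacuum) and uses hypothetical distances; real latency also includes queuing, serialization, and per-hop processing, which distance tends to amplify further.

```python
# Illustrative sketch: best-case round-trip propagation delay over fiber.
# Distances are hypothetical examples, not measurements.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed in km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, ignoring queuing,
    serialization, and routing-hop processing."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# A core data center 2,000 km away vs. a metro-edge PoP 50 km away:
print(round_trip_ms(2000))  # 20.0 ms
print(round_trip_ms(50))    # 0.5 ms
```

Even in this idealized model, moving the service point from a distant core site to a nearby metro PoP cuts the propagation floor by more than an order of magnitude, which is the headroom interactive applications need.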
Service endpoints rely on compute resources, typically x86 servers grouped into large, centralized data centers. Shifting from this centralized approach to a decentralized one, in order to support the distributed service model, means repositioning these compute resources within the network's metro areas.
The Challenge and the Opportunity
So if the challenge in taking this approach is clear, why do it in the first place? Isn’t the existing network just “good enough”? Isn’t this yet another network evolution similar to what we have experienced every few years for the last three decades?
Well, no, not really.
In the past few years, services have become more user focused. COVID-19 didn't trigger this shift, but it certainly accelerated it, and telco providers were caught unprepared for such a sped-up evolution. In parallel, cloud-native applications have been developed and are planned for launch over the upcoming 5G networks. These applications are known as "edge-native" because they are built to run close to end users, across multiple locations.
During “The Great Telco Debate” event in December 2021, TelecomTV’s Ray LeMaistre interviewed Stephen Spellicy, VMware’s VP of service provider marketing. In the interview, Stephen shared his view that CSPs are not quite ready yet and must rethink their steps moving forward.
Indeed, they should.
Edge-in is not only the application
To make all this possible, services must learn how to “park” themselves at the edge, and from an operational perspective, the edge will need to reshape itself to host such services. Unlike the centralized core-out model, the edge doesn’t necessarily have what it takes in terms of real estate, power budget, and management and operational overhead, in addition to a range of smaller issues (i.e., issues that are minor when aggregated in a centralized data center but a huge burden once spread across hundreds or thousands of locations).
The metro is built to hold network elements such as DriveNets Network Cloud, located in the network PoPs and acting as high-capacity routing nodes. Built as a cloud-native/edge-native function, it becomes the infrastructure for the incoming edge-native services that are set to flood network edges with demand for in-network compute resources. This setup eliminates the overhead involved in building micro data centers across numerous network PoPs.
As Spellicy mentioned in the interview at The Great Telco Debate, more than 80 million workloads are headed from on-premises environments to the cloud. Given DriveNets’ interactions and experience with service providers around the world, we agree that the evolution to cloud-native networks is here.
Is your network ready to meet these workloads on the edge of town?
Or, in other words, is your network a DriveNets Network Cloud?
Download White Paper
NCNF: From Network on a Cloud to Network Cloud