The more networks grow, the more complex they become. Over the past 30 years, networks have grown ever more reliant on hardware equipment, nodes and trunks, making them difficult to expand, manage and maintain. The old rule of a “single service router” still typifies much of today’s network architecture: every service – whether mobile backhaul, provider edge, business services, internet peering or the core network – is tightly coupled to its own batch of dedicated chassis-based routers.
So bundle up, for the nightmares below don’t just take place in networks on Halloween but continue all year long, making this the perfect opportunity to highlight and expose the shortcomings lurking in today’s network infrastructure.
#1 The Terror Tale of Monolithic Infrastructure
Serving as the foundation for legacy networks, the traditional monolithic chassis adds extra operational complexity. Monolithic chassis-based devices come from the same vendor that built the whole network, sold as a single black box – lacking granular visibility and control. Monolithic software is a potential source of compatibility issues and bugs, especially during upgrades, and legacy management interfaces (CLI, SNMP, proprietary models) are poorly suited to day-to-day operational tasks. So, when there is a problem in a monolithic chassis-based network infrastructure, the whole system can be brought to a standstill. Network complexity blinds operators when they need to increase capacity, launch a new service or fix a network issue. This blindness is costly, with a direct impact on the network’s OpEx.
#2 Scary Scalability and Chilling Costs
The limitations of monolithic infrastructure go further than just greater complexity. Monolithic networks do not scale well. While traditional networking equipment vendors have made many improvements over the years to boost network capacity and scale, they never changed the underlying architecture, continuing to roll out larger chassis and new line cards that support the latest network interfaces. This approach has limitations, however. The backplane of an old chassis can only support a few of the new modules – not a full rack. And once the chassis, small or large, fills up, scaling requires a costly rip-and-replace. Other limitations involve power consumption, operational complexity and growing deployment costs.
While the monolithic chassis was designed to accommodate a variety of networking scenarios, in reality each chassis is deployed for one specific scenario, resulting in inherent inefficiency. Its resources are built as a superset meant to handle any network scenario on any platform, so a chassis used for a single role goes under-utilized: most networking functions lean heavily on only one or two resources and make little use of the rest.
With capacity demand surging ever higher, this becomes an expensive and scary way to keep the network growing.
#3 Trapped in an Endless Vendor Lock
Service provider infrastructures are still driven by monolithic IP routers from just a handful of vendors selling and supporting proprietary, closed systems. In most cases, these systems are defined in isolation, addressing specific issues or technological fixes and adding complexity to the physical network structure. Each vendor brings its own proprietary protocols and specific configurations, resulting in complex situations with terrifying network constraints.
With these monolithic appliances bundled with proprietary software into a vendor-locked device, and integrated only with other offerings from that same vendor, vendor lock-in feels endless and inescapable.
#4 Supply Chain Disruptions of Doom
Vendor lock-in also exposes service providers to supply chain disruptions. Service providers end up paying a substantial premium (up to 300%) for closed, branded network devices. This premium hurts their ability to hit target prices in the short run and to keep network expansion and upgrade costs down in the long run.
Cisco recently announced that it was expecting significant supply chain delays due to the ongoing chip shortage. As a result, many service providers made premature, price-inflated purchases to stockpile network equipment inventory rather than wait long periods for future chip deliveries.
#5 Frighteningly Fast Capacity Demand
While demand continues to skyrocket, the network architecture deployed by service providers has barely changed in the past two decades, squeezing service margins.
During the pandemic, service providers experienced as much as a 60% increase in Internet traffic and struggled to keep pace with demand. Powered by nearly unprecedented work-at-home bandwidth consumption and a huge volume of OTT (over-the-top) streaming media and other business services, demand only stands to increase – especially with the move to 4K/8K, linear OTT TV, IP-into-broadcast substitution, and the transition to 5G. The need to quickly meet this mega-scale demand for capacity is truly frightening and can keep any network operations manager awake at night.
#6 The Case of the Incredible Shrinking Profitability
Declining revenues and lower margins have put a squeeze on network service provider profitability. As capital expenditure demands increase, service providers must invest in existing infrastructure to meet the rapid growth in data traffic while also investing in new areas of innovation and growth. The pressure to substantially grow network scale while controlling costs has created new horrors for keeping networks profitable.
Ending the Halloween Networking Nightmare: The Distributed Disaggregated Network Model!
DriveNets was established to address these scary problems and align network scale with service provider profitability, in the same way that hyperscalers solved compute and storage scaling in the cloud. DriveNets Network Cloud changes the operational and economic model of the network, allowing it to scale capacity and services much faster while increasing service provider profitability. DriveNets does this with cloud-native networking software, standard networking white boxes, and new operational and commercial models.
DriveNets Network Cloud is based on a distributed disaggregated router architecture that can reach extreme capacity by scaling linearly from a single white box router of 4 Tb/s to a cluster of hundreds of white boxes (up to 691 Tb/s) – all acting as a single router entity. Scaling one router entity over any number of white boxes allows the solution to serve any location in the network – core, edge and peering – with a single software stack built on only two white box building blocks. This model allows hardware to be mixed and matched, bringing prices down as you scale out.
This disaggregated router approach, using one software over just two types of white boxes, lowers both power consumption and the operational cost of scaling the network, making the traditional router chassis look like just a bad dream.
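To make the scale-out arithmetic concrete, here is a minimal, illustrative sketch (not DriveNets code) that estimates how many white boxes a cluster would need for a target capacity. It assumes the figures quoted above – roughly 4 Tb/s per white box and a cluster ceiling of about 691 Tb/s – purely for illustration.

```python
import math

# Assumed figures taken from the text above; illustrative only, not vendor specifications.
BOX_CAPACITY_TBPS = 4.0        # capacity of a single white box router
CLUSTER_CEILING_TBPS = 691.0   # maximum capacity of a single-router cluster

def boxes_needed(target_tbps: float) -> int:
    """Estimate how many white boxes a linearly scaled cluster needs for a target capacity."""
    if target_tbps > CLUSTER_CEILING_TBPS:
        raise ValueError(f"Target exceeds the assumed cluster ceiling of {CLUSTER_CEILING_TBPS} Tb/s")
    # Linear scale-out: capacity grows box by box, with no chassis rip-and-replace step.
    return math.ceil(target_tbps / BOX_CAPACITY_TBPS)

if __name__ == "__main__":
    for target in (4, 48, 400):
        print(f"{target} Tb/s -> {boxes_needed(target)} white boxes")
```

The point of the sketch is simply that capacity grows one white box at a time up to the cluster ceiling, rather than in the large, disruptive steps a chassis upgrade imposes.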
Download White Paper
DriveNets Network Cloud: Transforming Service Provider Networks