May 10, 2023

VP of Product Marketing

The Next Network Bottleneck – Part 1

The network as a bottleneck – one that slows down applications and prevents new use cases (and business) from becoming feasible – seems like a very old notion.


Dial-up Network Bottlenecks

That was certainly the case back in the dial-up modem days, and then in the 2G and early 3G days. Any multimedia application – be it video conferencing, streaming or online gaming with advanced graphics – was limited to a local or campus network. That's because remote connectivity (specifically WAN and access) either made those apps impossible or created a very poor quality of experience.

To learn more, download the white paper:
NCNF: From Network on a Cloud to Network Cloud

But that's ancient history. Since then, xDSL, FTTx, 4G, 5G and other technologies have removed the main networking bottlenecks. This has led to huge advancements in the application world, as well as the rise of over-the-top (OTT) and cloud-based applications that essentially erased the line between LAN and WAN.

Today’s Network Bottlenecks

These days, most application usage is agnostic to the location of compute and content. So, when you work on a document, watch a movie or play a game, the user experience is the same – whether the file is stored locally or on a shared drive, and whether you watch a streamed 4K movie or one stored on your laptop.

So, other than nostalgia, why are we talking about networks in the context of bottlenecks?

It turns out that networking's progress has created use cases in which such advancement is simply not enough, making the network a bottleneck once again.

It sounds counterintuitive, but think about the parallel computing use case. 

High-performance computing (HPC) 

Parallel, high-performance computing is a field in which the differentiation between internal I/O mechanisms (e.g., PCIe) and external I/O mechanisms (e.g., Ethernet) is blurred, much like the blurring line between WAN and LAN. In parallel computing, the connectivity between multiple compute devices (e.g., CPUs, GPUs) runs over an intra-server bus or an intra-cluster network. This is possible only because networking infrastructure can now reach the same rates and performance as the internal I/O bus.
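To see why the bus-vs-network distinction is blurring, a rough back-of-envelope comparison helps. The figures below are illustrative public line rates (PCIe 5.0 at 32 GT/s per lane, 400 Gigabit Ethernet), not any specific vendor's numbers, and encoding/protocol overheads are ignored:

```python
# Back-of-envelope bandwidth comparison (illustrative line rates, overheads ignored)

# PCIe 5.0: 32 GT/s per lane; a x16 slot has 16 lanes -> ~512 Gb/s per direction
pcie5_x16_gbps = 32 * 16

# 400GbE line rate
eth_400g_gbps = 400

# Convert to gigabytes per second (8 bits per byte)
pcie_gBps = pcie5_x16_gbps / 8   # ~64 GB/s
eth_gBps = eth_400g_gbps / 8     # 50 GB/s

ratio = eth_gBps / pcie_gBps
print(f"PCIe 5.0 x16: ~{pcie_gBps:.0f} GB/s, 400GbE: {eth_gBps:.0f} GB/s "
      f"({ratio:.0%} of the internal bus)")
```

Even this crude comparison shows a cluster network link landing in the same order of magnitude as the server's internal I/O bus, which is what makes scale-out parallel computing over a network practical at all.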

This is great news for anyone who wishes to run large-scale compute tasks.  

There is an issue, however. And we’ll discuss it next week.  

Stay tuned!
