May 26, 2021

VP of Product Marketing

400Gbps is the New 100Gbps

While 400Gbps is not new, in 2021 it has become a mainstream connectivity building block, much like 100Gbps was not so long ago. There are several reasons for this, the most obvious being the growth in capacity requirements in all parts of the network and in all types of networks – from fixed and mobile networks to data center connectivity. But what has made this move feasible is the fact that 400Gbps connectivity has lately become much denser, cheaper, simpler, and longer in reach.


400Gbps connectivity gets denser

One of the previous drawbacks of 400Gbps connectivity was its footprint, namely the number of 400Gbps ports that could be accommodated in a given rack space. This number has increased dramatically of late. Take, for example, the recently announced NCP (Network Cloud Packet Forwarder) supported by DriveNets, which more than tripled port density to 16x400Gbps ports per rack unit. This translates to a total of 32x400Gbps ports in a 2RU white box, with a total capacity of 12.8Tbps.
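As a quick sanity check, here is a minimal sketch of the arithmetic behind those figures (the port and rack-unit values are taken from the paragraph above; the rest is simple multiplication):

```python
# Port density and capacity math for the 2RU white box described above.
ports_per_ru = 16        # 400Gbps ports per rack unit
rack_units = 2           # 2RU white box
port_rate_gbps = 400     # per-port rate

total_ports = ports_per_ru * rack_units                 # 32 ports
total_capacity_tbps = total_ports * port_rate_gbps / 1000

print(f"{total_ports} x {port_rate_gbps}Gbps = {total_capacity_tbps}Tbps")
# 32 x 400Gbps = 12.8Tbps
```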

This leads not only to reduced total cost of ownership (TCO) with regard to real estate-related costs, but also to increased feasibility of pushing 400Gbps connectivity to space-limited sites. These could be edge sites or micro data centers (MDCs), where high-capacity connectivity is required but rack space is scarce.

400Gbps connectivity gets cheaper

Beyond the aforementioned effect on TCO, the direct costs of 400Gbps are continuously dropping. This is most noticeable in the open-architecture solution domain, in which hardware and optics selection is left to the end customer rather than dictated by the provider of a monolithic solution. In this domain, the marginal cost of 400Gbps connectivity hardware is dropping rapidly, with more than a 30% drop in the last year alone.

400Gbps connectivity gets simpler

400Gbps interfaces started as a rather cumbersome aggregation of 50-100Gbps signals sent over separate fiber pairs, resulting in an expensive optical cable with 4-8 pairs (SR8, DR4, etc.). These lanes (of 50 or 100Gbps) use 4-level pulse amplitude modulation (PAM-4); this modulation carries 2 bits per optical symbol, as opposed to the non-return-to-zero (NRZ) modulation of lower-rate interfaces, which carries a single bit per symbol.
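As a rough illustration (ignoring FEC and framing overhead, so the symbol rates below are approximate), here is how the lane rate and the bits-per-symbol of the modulation determine the lane count of a 400Gbps interface:

```python
import math

def lanes_needed(total_gbps, lane_gbps):
    """Number of parallel lanes needed to reach the total rate."""
    return math.ceil(total_gbps / lane_gbps)

def symbol_rate_gbd(lane_gbps, bits_per_symbol):
    """Approximate per-lane symbol rate (FEC overhead ignored)."""
    return lane_gbps / bits_per_symbol

# NRZ carries 1 bit per symbol; PAM-4 carries 2 bits per symbol.
for lane_gbps in (50, 100):
    lanes = lanes_needed(400, lane_gbps)
    baud = symbol_rate_gbd(lane_gbps, bits_per_symbol=2)  # PAM-4
    print(f"400G = {lanes} x {lane_gbps}G PAM-4 lanes at ~{baud:.0f}GBd each")
# 400G = 8 x 50G PAM-4 lanes at ~25GBd each
# 400G = 4 x 100G PAM-4 lanes at ~50GBd each
```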

Both parameters – the number of fibers required and the lane rate (derived from the bits-per-symbol ratio) – have improved significantly since the introduction of 400Gbps connectivity. The need for separate fibers was eliminated by the introduction of wavelength division multiplexing (CWDM in FR and LR, DWDM in ZR and ZR+); xWDM allows multiple lanes to be transmitted over the same fiber on different wavelengths (e.g., FR4 runs 4x100Gbps over a single duplex fiber, utilizing 4 wavelengths).
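A minimal sketch of the fiber savings, using the 4-lane FR4 example above (the parallel-fiber figure for comparison is a DR4-style assumption):

```python
# Fiber pairs needed for a 4-lane 400G interface, with and without WDM.
lanes = 4                    # 4 x 100Gbps PAM-4 lanes
wavelengths_per_fiber = 4    # CWDM: one wavelength per lane

pairs_parallel = lanes                        # DR4-style: 4 fiber pairs
pairs_wdm = lanes // wavelengths_per_fiber    # FR4-style: 1 duplex pair
print(f"parallel fiber: {pairs_parallel} pairs, with CWDM: {pairs_wdm} pair")
```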

Moreover, ZR and ZR+ moved to coherent optical modulation. Specifically, these standards use DP-16QAM (dual-polarization 16-quadrature amplitude modulation), accommodating 4 bits per symbol on each polarization, for a ratio of 8 between the bit rate (over 400Gbps) and the baud rate (approximately 60Gbaud) – 4 times the ratio of other 400Gbps standards. This eliminates the need for xWDM to multiplex different lanes within the signal and frees up the WDM domain to achieve further simplicity and longer reach.
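A back-of-envelope check of those numbers (the ~60Gbaud symbol rate is the approximation quoted above; the gap between the resulting ~480Gbps line rate and the 400Gbps payload is FEC and framing overhead):

```python
from math import log2

bits_per_pol = int(log2(16))   # 16QAM: 4 bits per symbol per polarization
polarizations = 2              # dual polarization (DP)
baud_rate_gbd = 60             # approximate coherent symbol rate

bits_per_symbol = bits_per_pol * polarizations     # 8 bits per symbol
line_rate_gbps = baud_rate_gbd * bits_per_symbol   # ~480Gbps on the wire

pam4_bits_per_symbol = 2       # PAM-4 lanes carry 2 bits per symbol
print(bits_per_symbol, line_rate_gbps, bits_per_symbol // pam4_bits_per_symbol)
# 8 bits/symbol, ~480Gbps line rate, 4x the PAM-4 bits-per-symbol ratio
```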

400Gbps connectivity gets longer reach

The reach of 400Gbps connectivity has also received a boost lately, with the maturing of the ZR and, even more so, ZR+ standards. These standards not only introduced coherent optics, allowing the move to single-lane transmission, but also added inherent DWDM capabilities (with 100GHz and, in the future, 75GHz channel spacing). This eliminates the need for an external DWDM transponder and, more importantly, allows native support by optical amplification systems such as EDFA (erbium-doped fiber amplifier) and hybrid EDFA-Raman, enabling a reach of over 1,000km.
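To illustrate what the channel spacing means in practice, here is a hedged estimate of DWDM channel counts, assuming a typical ~4.8THz usable C-band (an assumed figure; actual line systems vary):

```python
# Approximate DWDM channel counts for the grid spacings mentioned above.
usable_band_ghz = 4800            # assumed usable C-band width
for spacing_ghz in (100, 75):
    channels = usable_band_ghz // spacing_ghz
    print(f"{spacing_ghz}GHz spacing -> ~{channels} channels")
# 100GHz spacing -> ~48 channels
# 75GHz spacing  -> ~64 channels
```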
