In collaboration with AT&T, UfiSpace, HPE, and Intel, DriveNets has had its contribution to the Open Compute Project (OCP) DDC specification version 3 (v3) accepted. This latest version of the standard is based on DriveNets' Network Cloud architecture and specifically addresses mobile backhaul infrastructure, as well as Ethernet networking for high-performance AI workloads in large-scale clusters.
Driven by contributions from DriveNets, other software vendors, hardware suppliers, and network operators, the ongoing evolution of the OCP DDC specification continues to meet new market demands. The specification equips network operators and AI infrastructure builders with the tools they need to enhance the flexibility and cost-effectiveness of their networking operations.
What is Distributed Disaggregated Chassis (DDC)?
DDC is a framework designed to create a high-capacity forwarding system that is functionally equivalent to traditional monolithic modular chassis systems. The key innovation lies in the separation (aka disaggregation) of software and hardware components. Hardware components within DDC consist of standard white boxes for line cards, fabric, compute, and management connectivity. The network operating system (NOS) software enables these white boxes to operate as a unified forwarding system.
The first version (v1) of the DDC specification, contributed by AT&T, focused on the distributed disaggregated model and on software and hardware interoperability, while DDC v2 introduced high-capacity 400G white boxes. In this latest iteration, v3, support has been added for new devices based on Broadcom’s Jericho3 (J3) and Ramon3 (R3) ASICs. Moreover, v3 further extends the OCP DDC’s capabilities into mobile networks. Additionally, v3 streamlines network management and addresses higher scale requirements, particularly suited for AI networking.
At present, the DDC specifications leverage Broadcom's DNX fabric technology. In the future, however, the DDC concept could be extended to other merchant silicon technologies that offer equivalent fabric capabilities for scale-out networking.
What are the highlights of v3 DDC routing system specification?
DDC v3 introduces Jericho3/Ramon3 architecture for 800Gbps and 922Tbps systems.
Highlights of DDC v3 routing system specification include:
- Encouraging increased collaboration and vendor adoption of the base specification for the development of 800Gbps DDC platforms utilizing Broadcom’s J3 and R3 ASICs
- Significantly boosting capacity from 192Tbps to 922Tbps (equivalent to 64 * 14.4Tbps)
- Enhancing port density on line cards and fabric, so that port capacity, rather than bandwidth, becomes the limiting factor
- Ensuring interoperability between existing 192Tbps clusters and new 922Tbps clusters
- Providing support for two distinct stock-keeping units (SKUs) to accommodate industry-standard form factors, such as QSFP-DD and OSFP
- Introducing network timing features, including new PTP (Precision Time Protocol) and IEEE 1588v2 support, with timing enabled through the Ramon3 fabric
- Outlining the network operating system (NOS) roadmap and associated enhancements
- Presenting the network cloud controller (NCC) roadmap, detailing the transition from Generation 10 to Generation 11 as well as the shift from a 2U to a 1U form factor
- Improving MultiUpdater (MU) functionality with solid-state drive (SSD) firmware enhancements
- Planning to continue using air cooling (no liquid cooling)
- Highlighting the development of Intel Virtual RAID on CPU (Intel VROC) functions for the NCP5 (Jericho3 AI white box) and NCC products
Distributed Disaggregated Chassis (DDC) v3 cluster topology and capacity, composed purely of NCP5 and NCF3 devices (Source: OCP Distributed Disaggregated Chassis Routing System Evolution (V3))
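The headline capacity figure follows directly from the cluster arithmetic in the highlights above. The sketch below (illustrative only, not part of the OCP specification) reproduces that math, assuming a full v3 cluster of 64 line-card white boxes at 14.4Tbps each:

```python
# Back-of-the-envelope DDC v3 cluster capacity, per the spec highlights:
# 64 line-card white boxes x 14.4 Tbps each ~= 922 Tbps total.

LINE_CARD_TBPS = 14.4   # per-box forwarding capacity (Tbps)
NUM_LINE_CARDS = 64     # maximum line-card white boxes in a v3 cluster

def cluster_capacity_tbps(num_line_cards: int, per_card_tbps: float) -> float:
    """Total forwarding capacity of a DDC cluster, in Tbps."""
    return num_line_cards * per_card_tbps

v3_total = cluster_capacity_tbps(NUM_LINE_CARDS, LINE_CARD_TBPS)
print(f"v3 cluster: {v3_total:.1f} Tbps")          # 921.6 Tbps, ~922 Tbps
print(f"growth over v2: {v3_total / 192:.1f}x")    # vs. the 192 Tbps v2 figure
```

The 921.6Tbps result rounds to the 922Tbps quoted in the specification, roughly a 4.8x capacity increase over the 192Tbps v2 generation.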
What does the DDC architecture offer service providers?
DDC architecture empowers service providers and hyperscalers to optimize the scale and performance of their networks using standard networking white boxes. By eliminating the interdependencies inherent in complex chassis-based architectures, the DDC architecture shortens development cycles and allows new hardware to be introduced quickly, facilitating rapid infrastructure deployment and operational cost optimization for service providers.
We would be happy to meet with you and discuss these issues during the 2023 OCP Global Summit in San Jose, California, on October 17–19, 2023.
Download White Paper
Utilizing Distributed Disaggregated Chassis (DDC) for Back-End AI Networking Fabric