For any AI/HPC application
DriveNets AI Fabric is a full-stack networking solution for any GPU, cluster size, and application. It provides InfiniBand-level performance, making it the highest-performance Ethernet solution for AI back-end fabrics (scale up, scale out, and scale across), front-end, and storage networks - all with a single networking platform.

Full-stack networking for AI infrastructures

The DriveNets AI Fabric portfolio provides Ethernet-based, high-performance open networking solutions for multiple AI networking use cases:

  • Scale out – Fabric Scheduled Ethernet (FSE): a cell-based architecture providing a lossless, scalable fabric with low tail latency that efficiently interconnects any number of AI accelerators.
  • Scale out – Endpoint Scheduled Ethernet (ESE): an endpoint-based (NIC) scheduling architecture, in line with the Ultra Ethernet Consortium (UEC) standard, providing an ultra-scalable, high-performance back-end fabric that supports multiple NIC vendors.
  • Scale across – DriveNets AI Fabric can support a single AI cluster with GPUs deployed across multiple data centers more than 50 miles apart. This solution delivers the highest-performing multi-site connectivity by combining shallow- and deep-buffer ASICs within a single infrastructure.
  • Front-end and storage networks – an integrated front-end and storage network delivering high-performance networking at any scale.
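
The cell-spraying idea behind Fabric Scheduled Ethernet can be sketched in a few lines of Python (an illustrative toy, not DriveNets code; the cell size and function names are assumptions): packets are sliced into fixed-size cells that are sprayed evenly across all fabric links, so no single link becomes a hotspot regardless of flow sizes.

```python
from itertools import cycle

CELL_SIZE = 256  # bytes per cell; an assumed value for illustration


def spray(packet: bytes, num_links: int) -> dict[int, list[bytes]]:
    """Slice a packet into fixed-size cells and round-robin them across fabric links."""
    cells = [packet[i:i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]
    links = cycle(range(num_links))
    per_link: dict[int, list[bytes]] = {link: [] for link in range(num_links)}
    for cell in cells:
        per_link[next(links)].append(cell)
    return per_link


# A 4 KB packet sprayed over 8 fabric links lands as 2 cells per link.
distribution = spray(bytes(4096), num_links=8)
print([len(cells) for cells in distribution.values()])  # → [2, 2, 2, 2, 2, 2, 2, 2]
```

Because traffic is balanced per cell rather than per flow, the fabric avoids the hash-collision hotspots that per-flow ECMP load balancing can create on standard Ethernet.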

All solutions provide:

  • Best job completion time (JCT) performance
  • Fastest AI networking deployment time
  • No vendor lock-in – any optics, any NIC, any GPU
  • Best-performing multi-tenancy solution
  • The only solution providing lossless multi-site connectivity for widely distributed GPU clusters

Large-Scale Network Installations

NeoCloud

NeoCloud/GPUaaS infrastructure, whether purpose-built or repurposed from crypto-mining, requires exceptional scalability, flexibility, and multi-site and multi-tenant support to accommodate diverse cloud-native AI workloads. DriveNets AI Fabric delivers these capabilities through its innovative fabric-scheduled and endpoint-scheduled architectures, optimized for AI-driven environments. By hosting multiple tenants without fine-tuning, isolating resources between tenants to avoid the noisy-neighbor effect, and supporting remote, multi-site deployments, it simplifies operations while providing consistent performance. DriveNets empowers NeoCloud providers to scale effortlessly and innovate without the limitations of traditional network infrastructures.

  • Best AI fabric: better performance than InfiniBand
  • Fastest time to revenue
  • Native support for multi-tenancy
  • Multi-site cluster without performance compromise
  • Ethernet-based solution for both back-end and storage networks

Hyperscaler

AI hyperscaler networks demand unparalleled scalability, performance, and simplicity to support the needs of very-large-scale GPU clusters. DriveNets AI Fabric rises to this challenge with a software-driven architecture that supports extremely large GPU cluster sizes, leveraging Ethernet technology to ensure seamless integration and operation. Unlike traditional solutions, DriveNets eliminates the need for specialized knowledge among technical staff, simplifying deployment and management. Its easy-to-scale, distributed, disaggregated design enables hyperscalers to expand capacity effortlessly while maintaining optimal performance and efficiency, empowering innovation without the limitations of proprietary or complex networking systems.

Best-performing AI-fabric solution for hyperscalers

  • Better performance than InfiniBand
  • Native support for multi-tenancy
  • Multi-site cluster without performance compromise
  • Unified, Ethernet-based fabric for storage and compute
  • Field proven – in large hyperscaler production

Enterprise

AI enterprise back-end networks demand exceptional performance and scalability to support the growing reliance on AI-driven workloads and data-intensive applications. DriveNets AI Fabric addresses these needs with a distributed, scheduled fabric architecture: scalable, cost-effective, optimized for AI workloads, and suited to all enterprise use cases, including financial research, life science and pharmaceutical research, automotive, energy and utilities, and higher education. Its Fabric Scheduled Ethernet and Endpoint Scheduled Ethernet technologies enable enterprises to seamlessly adapt to evolving AI demands, ensuring high performance, no vendor lock-in, operational simplicity, and the ability to innovate without the constraints of proprietary network designs.

  • Best Ethernet-based AI fabric
  • Better performance than InfiniBand
  • Fastest time to deploy AI clusters
  • Any optics, any NIC, any GPU

Backend Networking

Backend networking in AI clusters refers to the interconnect infrastructure that facilitates internal communication between AI accelerators (such as GPUs) within a data center (scale-out) or between data centers (scale-across). This is a critical piece of the AI cluster infrastructure, as it carries the latency-sensitive traffic that enables efficient parallel processing and data sharing.

DriveNets AI Fabric transforms scale-out and scale-across networking by combining the flexibility of Ethernet with the high-performance characteristics required for AI workloads. Unlike traditional Ethernet solutions, DriveNets AI Fabric leverages scheduled Ethernet technologies: advanced architectures that ensure lossless, predictable network performance while optimizing load balancing and latency. By eliminating packet loss and minimizing GPU idle time, DriveNets AI Fabric optimizes job completion time (JCT), outperforming both standard Ethernet and proprietary InfiniBand technologies. This next-generation approach enables seamless scalability, cost efficiency, and superior AI workload acceleration, making it the ideal choice for AI-driven data centers.
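
The JCT claim above rests on a simple fact about synchronous AI workloads: a collective operation finishes only when its slowest flow lands, so job time is governed by tail latency rather than average latency. A toy simulation (illustrative only; all parameters and names are assumptions, not measurements of any real fabric) makes this concrete:

```python
import random


def step_time(latencies):
    """A synchronous collective finishes only when the slowest flow lands."""
    return max(latencies)


def simulate_job(steps, links, p_slow, slow_ms, base_ms=1.0, seed=42):
    """Toy model: each step, each link is either fast (base_ms) or hits
    a latency tail (slow_ms), e.g. due to packet loss and retransmission."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(steps):
        latencies = [slow_ms if rng.random() < p_slow else base_ms
                     for _ in range(links)]
        total += step_time(latencies)
    return total


# Lossy fabric: 2% of flows hit a 10x latency tail.
lossy = simulate_job(steps=1000, links=256, p_slow=0.02, slow_ms=10.0)
# Scheduled (lossless) fabric: the tail is gone.
lossless = simulate_job(steps=1000, links=256, p_slow=0.0, slow_ms=10.0)
print(f"JCT ratio (lossy / lossless): {lossy / lossless:.1f}x")
```

Even a 2% chance of a slow flow means that, with 256 links per step, almost every step hits the tail, which is why eliminating packet loss improves JCT far more than shaving average latency.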

Front-end/Storage Networking

Frontend networking in AI/HPC clusters refers to the network infrastructure that manages external data traffic between AI workloads and users, applications, or other services. It connects the AI cluster to the broader data center, cloud services, or enterprise systems. Frontend networking must provide high bandwidth, low latency, and secure connectivity to ensure seamless interaction between AI models and end-users or business applications.

Storage networking in AI clusters is responsible for handling the massive data transfer between the AI compute nodes and external storage systems. For AI workloads, unlike typical HPC implementations, this is critical infrastructure: storage traffic is intense, and insufficient storage-fabric performance results in poor overall workload performance.

DriveNets AI Fabric provides a unified solution for both networking fabrics, sharing the same Ethernet-based technology, architecture, and implementation as the back-end fabric.

AI Networking Concepts

Network Cloud-AI white paper

What happens when you use AI to explain AI?
We used NotebookLM to turn our technical concepts into quick, 60-second deep dives.
Check out the series and explore AI networking concepts, from high-speed fabrics to low-latency clusters.
