ISC 2025 – InsideHPC interviews DriveNets on Ethernet-based AI Networking
At ISC 2025, DriveNets’ Head of Product – AI Networking, Yossi Kikozashvili, met with Doug Black, Editor-in-Chief at InsideHPC, to discuss the technology shifts in AI networking. In this candid conversation, they cover the unique value of DriveNets’ AI Ethernet solution, which is used as a back-end network fabric for large GPU clusters and for storage networking, and how it provides a high-performance Ethernet alternative to InfiniBand. DriveNets’ AI networking solutions are used by hyperscalers, NeoClouds, and enterprise customers. Watch the full interview below.
Chapters:
- 0:00 Intro and About DriveNets
- 2:17 Latest News about DriveNets
- 3:59 What makes DriveNets different?
- 6:19 Ethernet Scheduled Fabric Architecture
Key Takeaways from the interview
- DriveNets powers the world’s largest AI and Telco networks
DriveNets now powers over 70% of AT&T’s core network and plays a major role in Comcast’s Janus project, both proof points of its leadership in high-scale routing. More recently, DriveNets extended its technology to AI infrastructure, enabling hyperscalers to build massive GPU clusters with its Ethernet fabric solution.
- DriveNets brings an AI networking solution with peak performance out of the box
Unlike traditional networking vendors, DriveNets delivers an end-to-end AI infrastructure stack, not just the networking layer. Its Fabric-Scheduled Ethernet uniquely provides out-of-the-box peak performance with no fine-tuning needed. This approach eliminates the complex configuration typically required, giving customers faster, simpler deployments.
- DriveNets’ Fabric-Scheduled Ethernet solves industry-wide pain points in AI networking
With groundbreaking architecture and new products co-developed with leading infrastructure players, DriveNets’ innovations address urgent challenges in AI networking that no one else has solved yet.
Read the full transcript
Hi everyone, I’m Doug Black at insideHPC. We’re at the DriveNets booth at the ISC 2025 conference in Hamburg. And with me now is Yossi Kikozashvili. He is Head of Product, AI Infrastructure.
What is DriveNets
And Yossi, for those who are unfamiliar, give us an overview of what DriveNets does?
Sure, and thanks, Doug, for having me. So basically DriveNets is a high-scale networking company. We were founded just short of a decade ago. We started our journey with an attempt to disrupt Internet service provider networks, where we basically pioneered the disaggregation concept for routing use cases. High-scale routing use cases. So what we offered Internet service providers was the ability to replace their legacy black-box chassis with a disaggregated, distributed, white-box-based routing component. And it has been pretty successful, to be honest. Fast forward to today: we are running more than 70% of AT&T’s core network, which is pretty dramatic. We are actually a big part of Comcast’s Janus project that they just recently announced, transforming their entire network. And basically we are running some of the largest Tier 1 Internet service provider networks on the globe. Now, that’s one business unit that we have in the company. That’s what we started with, and it went pretty successfully. And approximately two years ago we realized that the technology we already had for Internet service providers might be a great fit for the AI infrastructure use case. So we took the same technology, adapted it, enhanced it, and now we have a whole business unit that is targeted at those folks who are building huge GPU clusters.
What is the latest at DriveNets
Okay, so DriveNets recently ran an article on insideHPC that got a lot of interest. You also recently put out some news. Can you talk to us a little bit about that?
Sure, yeah. So the last few months were very exciting in terms of production deployments.
We just recently announced a huge production deployment with a tier one hyperscaler utilizing many, many GPUs. Thousands of GPUs with our fabric solution.
We also announced just recently the first deployment, production deployment, of our fabric with Nvidia’s GB200 NVL72.
So we did that and then we also announced just recently the first production deployment of DriveNets networking inside a NeoCloud, a GPU as a service infrastructure.
And that is directly attached to one of the major announcements we just made, which was around our product support for multi-tenancy use cases. That’s also something we announced.
We also did a large press release about our ability to provide our customers with the ability to unify front end fabric and back end fabric, which was also something big for us.
And then, last but not least, we just announced our DIS group, which basically stands for DriveNets Infrastructure Services. This is a group of people inside DriveNets that helps our customers build end-to-end AI infrastructure. So yeah, a lot of announcements, strong momentum, and we’re going on.
What is the difference between DriveNets and the competition?
Okay, great. Now we all know the AI networks market is exploding. How does DriveNets distinguish itself from other networking vendors?
So that’s a great question, Doug. Basically, I think we are distinguished by this perception that we have that networking is only one piece of the puzzle when it comes to AI infrastructure. Right. So unlike any other networking vendor that is out there, we are not just looking at the networking part of the AI infrastructure, we’re looking end to end, at how our networking solution fits into the entire AI infrastructure stack. So take for example everything that we do around our flagship product, which is Fabric-Scheduled Ethernet. There are a lot of good things about it, right? But the bottom line, or maybe the most influential thing about it, is that it allows our customers to deploy it and simply get peak performance out of the box. Right. So we don’t want to create new problems for our customers. Right? Problems like, you know, fine-tuning the network, trying to find the right sweet spot with buffer tuning and ECN thresholds and PFC thresholds. No, we look at it end to end, and everything we do, from products to services, is designed to give our customers a turnkey solution, something that will be easy and fast. Right. I mentioned our Fabric-Scheduled Ethernet, which is targeting this end-to-end approach. Right.
But in addition to that, we also have the group of people I mentioned, called DriveNets Infrastructure Services (DIS). Now, you might imagine that as a networking vendor, we just bring in the networking part and that’s it. Right. But we’re doing a bit more than that. The DIS group is designed to help our customers take their entire stack, GPUs, compute, storage, networking mix, you name it, and create a cluster out of it. Right. So to summarize a very long monologue in just a few simple words, we’re looking at the end-to-end stack. That’s what differentiates us.
How does Fabric-Scheduled Ethernet compare to Ethernet Clos and other architectures?
I see. So now you mentioned the scheduled fabric architecture. How does that compare in performance to the Ethernet Clos and other architectures?
So I think Fabric-Scheduled Ethernet has five main characteristics that make it outperform any other solution in the market.
First off is the end-to-end scheduling nature of it. So basically we have an Ethernet fabric with a bit of magic in it. We schedule packets from leaf to leaf. Right. That’s one.
Second is the fact that we do very unique load balancing across the fabric. As you know, the entire industry is based on hashing, or ECMP, or packet spraying. We’re doing something called cell spraying, which allows us to utilize 100% of the fabric. Right. So that’s second.
Third is our use of VOQs. We use virtual output queues (VOQs) to provide better multi-tenancy support and better multi-tenancy isolation. Right.
Fourth is the fact that everything related to fault tolerance is based on hardware. Right. So any time a link goes down or a switch goes down, the system reacts in hardware and not in software, which is dramatic, because then you can cut recovery times from seconds or milliseconds down to microseconds.
And fifth, and I already mentioned this, and it’s pretty much the biggest story here, is the fact that the system provides peak performance just out of the box. No need to fine-tune buffers, no need to fine-tune DCQCN stuff. And, you know, no rocket science. You just get it out of the box and it simply performs.
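The load-balancing difference Yossi describes can be illustrated with a toy simulation (a sketch for intuition only, not DriveNets code; the cell size, flow mix, and link count are made-up assumptions). ECMP-style hashing pins each flow to a single uplink, so a few elephant flows can collide on one link while others sit idle; spraying fixed-size cells round-robin across all links keeps the load nearly even regardless of flow sizes.

```python
import random

def ecmp_hash(flows, n_links):
    # ECMP-style: each flow is hashed to exactly one link,
    # so two elephant flows can land on the same link.
    load = [0] * n_links
    for flow_id, flow_bytes in flows:
        load[hash(flow_id) % n_links] += flow_bytes
    return load

def cell_spray(flows, n_links, cell_bytes=256):
    # Cell spraying: every flow is chopped into fixed-size cells
    # that are sprayed round-robin across all fabric links.
    load = [0] * n_links
    i = 0
    for _, flow_bytes in flows:
        while flow_bytes > 0:
            chunk = min(cell_bytes, flow_bytes)
            load[i % n_links] += chunk
            flow_bytes -= chunk
            i += 1
    return load

random.seed(0)
# A few elephant flows mixed with mice, typical of AI training traffic
flows = [(f"flow{i}", random.choice([10_000_000, 1_000])) for i in range(8)]
n_links = 4
print("ECMP load per link: ", ecmp_hash(flows, n_links))
print("Spray load per link:", cell_spray(flows, n_links))
```

Running this, the hashed loads are typically badly skewed (and vary run to run, since the placement depends on the hash), while the sprayed loads differ by at most a cell or two per link, which is the intuition behind utilizing close to 100% of the fabric.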
Okay, so we’ve been visiting with DriveNets here at ISC. And Yossi, thanks so much for your time.
Thank you. Thank you, Doug.
Want to learn how DriveNets is reshaping AI networking infrastructure?
Explore DriveNets AI Networking Solution