MWC 2025 – Zeus Kerravala interviews Ido Susan

How is DriveNets revolutionizing network infrastructure for AI and service providers?

In this episode of ZKast, Zeus Kerravala of ZK Research interviews DriveNets CEO & Co-founder Ido Susan at MWC25 in Barcelona. Watch the full video and read the transcript below to learn how DriveNets is revolutionizing network infrastructure for AI and service providers through innovative, cloud-based solutions.

Key Takeaways

  • DriveNets is transforming networking by virtualizing infrastructure traditionally reliant on hardware, enabling service providers, hyperscalers and neocloud providers to benefit from cloud-like scalability, flexibility, and efficiency.
  • DriveNets' new AI Ethernet solution offers a cost-effective, high-performance alternative to InfiniBand and traditional Ethernet architectures by shifting intelligence into the network fabric, reducing the need for expensive NICs and optimizing multi-datacenter AI clusters.
  • DriveNets delivers turnkey AI networking solutions with integrated compute and storage, reference designs, and automation, achieving 20–30% TCO savings while helping customers maximize GPU utilization and performance.

Full Transcript

Welcome to ZKast, everyone. I’m Zeus Kerravala from ZK Research and I’m here inside the DriveNets stand at MWC25 in Barcelona. I’m with Ido Susan, the CEO of DriveNets. Ido, how you doing?

I’m doing well. Zeus, how about you?

Good, good, yeah.

Transforming Networking: From Hardware to Virtualization

So for people who don’t know who DriveNets is, who are you guys? What do you do?

Basically, we help service providers and AI customers build their networks like a cloud. Instead of basing the network on hardware, we help them build it on virtualization, share resources, and gain all the benefits of the cloud.

Yeah. And that's really a fundamental shift in networking, right? Historically, everything has been hardware based, which leads to a lot of brittleness in the network. I was at a little event you did here yesterday where all the service providers talked about how hard it is to adapt their infrastructure for new things. And so this is the mission you're trying to solve?

Definitely. I think everybody is familiar with compute and storage moving to the cloud, and we have been able to do the same with networking.

Yeah, it’s the last bastion of IT that is yet to move to the cloud. You’re right. And so what’s been going on with the company lately? How’s the business been?

At the company level, to be honest, we're doing very, very well. We were able to become cash flow positive this year with more than $200 million in revenue. We have a backlog of over $1.2 billion in signed contracts. We keep winning more tier-one service providers and AI clouds, from hyperscalers through neoclouds building AI clusters to enterprises. So no complaints; we're definitely on track, and we still have a lot of work to do.

Well, that last point is an interesting one, because you launched as a telco play, but you've picked up more hyperscalers, and then enterprises have started using you as well. So I think even that audience has started to see the importance of shifting to a software model.

Definitely, on the AI side. As you know better than me, in order to build a supercomputer for training or inference, you need to connect a lot of GPUs together.

Yes. And we are on a mission to help our customers achieve that. I think we have the technology; we have proven it with many of them. They definitely see us as a leader in this technology, and we see it moving forward.

Business Momentum and Market Expansion

Yeah, it's been interesting watching the way the capital markets have reacted to AI. When you look at Nvidia's market cap, and AMD's, everyone is aware of the importance the GPU plays, but more and more the network plays an important role too. To me, that still is not as well understood as I thought it would be by now.

Let me try to explain it.

Yeah.

The challenge with AI supercomputers is that you spend a lot of money on the GPUs, and then you need to connect all of those GPUs together to achieve the best performance, because you want to monetize your investment. The problem is that you are limited by power and rack space, which means you now need to distribute your supercomputer over multiple data centers. And then the network starts to be a very key player in AI, because the network starts to be the supercomputer itself. All the AI clusters we are talking about are based on the network; customers want Ethernet, they want openness, and this is what we are able to deliver.
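The power-and-space constraint described above can be made concrete with a rough back-of-the-envelope sketch. All of the numbers below (per-GPU draw, overhead multiplier, per-site power budget) are illustrative assumptions, not DriveNets or vendor figures:

```python
import math

# Back-of-the-envelope: why large GPU clusters end up spanning
# multiple data centers. Every number here is an assumption.
GPU_POWER_KW = 1.2    # per-GPU draw, incl. NIC/cooling share (assumed)
PUE = 1.5             # facility overhead multiplier (assumed)
DC_BUDGET_MW = 25.0   # power available at one site (assumed)

def data_centers_needed(num_gpus: int) -> int:
    """Minimum number of data centers under the per-site power budget."""
    total_mw = num_gpus * GPU_POWER_KW * PUE / 1000.0
    return math.ceil(total_mw / DC_BUDGET_MW)

print(data_centers_needed(8_000))    # 1: fits in a single site
print(data_centers_needed(32_000))   # 3: must be distributed
```

Under these assumed numbers, a 32,000-GPU cluster draws roughly 58 MW and cannot fit in a single 25 MW facility, which is exactly when the inter-site network fabric becomes part of the supercomputer.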

DriveNets’ AI Ethernet Solution

Yeah. And so you recently launched an AI Ethernet product, right? Can you talk a little bit about the product, what it is and what audience it's addressing?

So as you know, in the market you have three types of networking solutions for building your AI cluster. In the past, everybody used InfiniBand. The problem with InfiniBand is that it's proprietary, very expensive, and you are not able to converge the two clusters, the GPU compute and the storage, so you lose a lot of performance. The second option is an Ethernet Clos, like you use for compute and storage: a leaf-and-spine architecture with ECMP in between. The AI players then try to introduce NICs with DPUs in order to spread traffic over the cluster. The problem is that this is very expensive, because now each GPU needs a very expensive NIC with HBM and a lot of memory, and you need to tune the network parameters to the specific model you want to run. The third solution, which is what we offer, puts all the logic, the brain, on the fabric side. You use very cheap NICs, and all the congestion handling happens in the fabric, so you don't need to tune your network parameters; it's always optimal for any model you want to run. More important, it's designed from the start to be distributed over multiple data centers.
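The difference between the second and third options can be illustrated with a toy simulation (my sketch, not DriveNets code): hash-based ECMP pins each whole flow to one link, so a few elephant flows can collide on the same link, while a scheduled fabric sprays traffic evenly across every fabric link.

```python
import random

LINKS = 4

def ecmp_load(flows, links, seed=1):
    """Each flow hashes to a single link (5-tuple ECMP), so elephants collide."""
    rng = random.Random(seed)   # stand-in for the switch's 5-tuple hash
    load = [0] * links
    for size in flows:
        load[rng.randrange(links)] += size
    return load

def sprayed_load(flows, links):
    """Spray each flow cell-by-cell over all links, as a scheduled fabric does."""
    load = [0] * links
    for size in flows:
        for cell in range(size):
            load[cell % links] += 1
    return load

flows = [100] * 8                     # eight equal "elephant" flows
print(ecmp_load(flows, LINKS))        # typically uneven: some links run hot
print(sprayed_load(flows, LINKS))     # [200, 200, 200, 200]
```

With spraying, every link carries exactly the same load regardless of flow sizes, which is why per-flow tuning stops being necessary; the trade-off handled by the fabric (and glossed over in this toy) is reordering cells back into flows at the egress.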

DriveNets Competitive Differentiation

Okay, yeah. And so there's only one InfiniBand player, obviously, but there are many, many AI Ethernet players. Do you feel that last point is really your core differentiation, or are there other points as well?

From a technology perspective, I think this is our killer app. Every month we have new installations for new customers. Whether it's hyperscalers, neoclouds, or enterprises building clusters of thousands of GPUs, we are again and again able to demonstrate much better performance compared to Spectrum-X or any other Ethernet solution on the market. So customers who want the best performance from the supercomputer, from the AI cluster they are building, from the GPUs and all the investment they're making, come to us. That's the technology side. On the company side, we deliver a turnkey solution. Today, building an AI cluster is not only installing the GPUs and connecting the fibers to the network; you need to optimize the entire AI cluster to be able to run. You have NCCL tests that you need to run, and when we do this we also help our customers automate it with open-source LLM models like Llama 2, which we run to show them the performance. So we are not only delivering the networking; we are delivering a turnkey solution with our reference design that converges compute and storage. And we are able to show our customers: this is what you get when you buy the reference design from Nvidia, this is what you get when you buy from us, and these are the performance levels.
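The NCCL tests mentioned above report a "bus bandwidth" number: the measured algorithm bandwidth scaled by the ring all-reduce factor 2(n-1)/n, per the nccl-tests performance documentation. The example sizes and timings below are assumed, purely for illustration:

```python
def allreduce_bus_bw_gbs(size_bytes: float, time_s: float, n_ranks: int) -> float:
    """busBW (GB/s) for an all-reduce, per the nccl-tests convention:
    busBW = (size / time) * 2 * (n - 1) / n."""
    alg_bw = size_bytes / time_s / 1e9
    return alg_bw * 2 * (n_ranks - 1) / n_ranks

# An 8 GB all-reduce across 8 GPUs completing in 50 ms (assumed figures):
print(round(allreduce_bus_bw_gbs(8e9, 0.05, 8), 1))  # 280.0
```

The bus-bandwidth convention makes results comparable across cluster sizes, which is what lets a vendor put two reference designs side by side and quote a single performance number per fabric.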

Okay. And are there any customers you can talk about? Maybe a quick little case study on some of these deployments?

I think we are going to have multiple press releases. You know, we are very...

So stay tuned.

Yeah, stay tuned. We always work side by side with our customers. In the next three months you will see many press releases coming out, and they're impressive. I'm proud of DriveNets and my employees.

Partnerships Driving Innovation

Yeah. And you’ve also taken a partnership strategy here, right? You have some partners on the side?

Yes.

Yeah. And can you talk about who those might be and what they bring to your ecosystem?

So of course our best partnership starts with Broadcom; we work side by side with them to help design chipsets that better fit AI use cases. We took the J3 and made the J3AI, which basically comes without the HBM; it's very cheap and very optimized for training and inference. So that's the first partnership we have. The second is with the hardware ecosystem, with Accton, whom we work together with. As you can see here, we already have the 800G white boxes that we deliver to our customers for 8K, up to 32,000-GPU, clusters. And the last one: we have many small integrators working with us who help us do the rack-and-stack and order the GPUs. Together we really create one plus one equals three.

TCO: 20 to 30% Cost Reduction and Better Monetization

Yeah, I saw a couple here at the booth yesterday, in fact, a couple of big integrators too, as well as small ones. And so this is really, I think, fulfilling the promise that white box has had for a while; obviously pure white box is too complicated for a lot of enterprises. Can you give me a bit of an idea, from a TCO perspective, of what your solution would look like versus one of the more established vendors?

Yeah, definitely. From a TCO perspective, we see between 20 and 30% cost reduction, plus better monetization. Let me explain where that comes from. Today, when you go and build the cluster with the Spectrum-X solution, you basically have two clusters: you have the GPUs and you have the storage, and you have something connecting them. The reference design is not optimal, because a big incumbent like Nvidia wants to sell more. In our case, what we want is to provide the best efficiency from the GPUs that customers bought. So we have one cluster that connects compute and storage on the same network fabric, and we have all the automation on top to optimize the GPUs to get the maximum. And as the models being trained get bigger, and the supercomputers, the AI clusters, get bigger, the savings get bigger and bigger.
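The shape of that claim can be sketched with toy arithmetic. Every unit cost below is an assumption I chose for illustration, deliberately picked to land in the quoted 20–30% range, not a DriveNets or Nvidia price:

```python
# Toy per-GPU networking TCO comparison, all unit costs assumed ($):
# baseline  = two separate fabrics (GPU + storage) with DPU-class NICs
# converged = one scheduled fabric with a standard NIC
BASELINE = {"dpu_nic": 1200, "two_fabrics_ports": 1000}
CONVERGED = {"standard_nic": 900, "one_fabric_ports": 750}

def per_gpu_cost(parts: dict) -> int:
    """Total per-GPU networking cost for one design."""
    return sum(parts.values())

savings = 1 - per_gpu_cost(CONVERGED) / per_gpu_cost(BASELINE)
print(f"per-GPU networking TCO savings = {savings:.0%}")  # 25%
```

The structure of the saving is the point: one NIC tier removed and one fabric's worth of switch ports folded away, multiplied across every GPU in the cluster.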

Yeah, well, that's significant, right? Considering how much companies are going to be spending on their AI infrastructure. Well, congratulations on the launch. Is there anything else you want to add?

Just thank you very much. Looking forward to meeting you next year and to continuing to deliver on our mission to provide the best technology and value to our customers.

All right, Ido, well, I appreciate your time. On behalf of Ido Susan from DriveNets, I'm Zeus Kerravala from ZK Research. Thanks for watching; hit the like button and give us a follow too. See you next time on the next episode of ZKast. Thanks, Ido.
