AI Infrastructure for Service Providers with DriveNets’ David Watson
DriveNets’ Senior Director System Engineering, David Watson, shares how DriveNets empowers service providers to monetize AI with automated, hyperscaler-inspired network operations and a high-performance, multi-tenant AI fabric enabling scalable GPU-as-a-Service.
Chapters:
- 0:00 – Intro
- 0:20 – What is DriveNets talking about with attendees at MWC Barcelona?
- 1:35 – What are the AI infrastructure opportunities?
- 2:47 – What is DriveNets’ role in the AI infrastructure journey?
Read the full transcript
Introduction and Network Automation
All right, welcome back everyone to FMTV’s coverage of MWC Barcelona, and now here I am at DriveNets in Hall 2.
Thanks for having us, thanks for having us in your—
No, thanks for coming out and seeing us.
Tell me a little bit about what you’re talking to folks here in Barcelona this week before we get into deeper discussion.
Yeah, no, it’s great. This week at DriveNets there are really two things that we’re talking about and demonstrating to the customers who come by. The first is network automation, which is a really big pain point for operators. It’s one thing to build resilient infrastructure, but the biggest pain point they run into is how to operate it at scale, right?
How do they push software code, how do they make changes to the network and leverage AI to make the intelligent decisions they need on the network, and react to the anomalies that occur?
So our first demonstration is centered around that, leveraging the AI practice we have and the lessons we’ve learned from the hyperscalers. We leverage agentic AI capabilities, pulling anomalies from the network and making intelligent decisions, so that right now we can have some human intervention before we make those changes, but hopefully lean further on AI in the future, and then also bring a real-time view of the network back. It’s one thing to be able to pull information from the network, make decisions, and implement them on the network; the next thing is having the visibility to look at it from there. So the first demonstration is centered around that and the savings we can bring on the operations side. The second one we’re looking at is a value-added service around GPU as a service.
AI infrastructure opportunities
Great, and so talking about AI—and you know we’re all hearing about it here this week—what do you see as the opportunity or the main role that the service providers will be able to play when it comes to AI infrastructure broadly?
It’s interesting, because I think the SPs over the last couple of years have continued to drive relevance in their networks, and I think AI allows them to continue to do that. They’re very well positioned as they look at these AI workloads: they have infrastructure today, they have space, they have locations where they can leverage the resources they already have and start building out this AI infrastructure, and that gives them the credibility they’re looking for.
Secondly, they can start looking at GPU as a service and offer a value-added service to their enterprise customers, because most enterprise customers are working on their own AI strategy and may not have the capital to build it themselves. They might not have the staff talent to actually operate it, and they may want to move very quickly. Going to a managed service provider partner like their service provider may be easier for them. So the service providers are uniquely positioned right now to both solve a problem their customers have and open another revenue stream on the business services side.
DriveNets’ role in the AI infrastructure journey
So tell us about your role in that AI infrastructure journey.
Sure, yeah, it comes down to a couple of things.
First, it’s the highest-performing AI fabric on the market today. When we look at where it came from, this technology—our AI fabric—is built on the proven DriveNets operating system, which has been deployed at many customers globally. That allows them not only to deploy at scale and generate revenue, but also provides multi-tenancy, which is critical for GPU as a service, letting them scale a common infrastructure across all their customers rather than scaling per customer.
We enable this through end-to-end virtual output queue (VOQ) scheduling within the fabric itself, without relying on any external technologies to accomplish it, once again simplifying the deployment for our customers.
The last thing is that it’s deployed. The solution is running across many neoclouds at this point, it’s in test with some LLM providers, and it’s based on the same technology deployed at some of the largest tier ones in the world.
Wonderful, well thank you so much again for your time and congratulations.
Wonderful, thank you, have a good afternoon.
Explore DriveNets AI Networking Solution