Resources

CloudNets Videos
September 30, 2021

Season 1 CloudNets

Episode 1 Network Cloud

Everyone is talking about Network Cloud. So, what is Network Cloud, actually?

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi and welcome to Cloud Nets, where networks meet cloud.

But what does it mean for networks to meet cloud? What is this Network Cloud that everyone is talking about? So, there’s a short answer, and there’s a long answer. The long answer is that Network Cloud is a cloud-native, software-based, high-scale networking infrastructure that can run multiple network functions over disaggregated and distributed hardware. Got it? No?

Let’s try the short version. For the short version, you only need to remember three things in order to understand what Network Cloud is. One is distribution and disaggregation, D&D if you may. Two, cloud-native, software-based infrastructure. And three, orchestration.

Now, let’s talk about each of those items for a minute. Distribution and disaggregation, item number one. It means that you disaggregate software from hardware, but on the hardware side, the Network Cloud white boxes, you group multiple boxes into one cluster. That means that you look at all those boxes, multiple white boxes, as a shared, unified infrastructure pool. A pool of resources that serves the upper-layer application.

Why should you care? Because disaggregating software from hardware is simply not enough; you also need the ability to use just a few types of white boxes across your entire network. You have a couple of types of white boxes in your repository, and from them you build any type and any scale of site in your network, and this is very important.

The second part is the cloud-native software architecture. That means that those network functions, those applications, those service instances, run on the hardware as microservices in containers. That means that you can run multiple network functions over the same architecture, over the same hardware. And this is important, you should care, because running a single software instance or a single network function over the hardware is simply a poor use of those hardware resources, which are very expensive.

And three, orchestration. Orchestration is the automation of the processes and tasks that have to do with service management, with infrastructure management, with basically managing the entire Network Cloud. Why should you care? Because otherwise, if you do those tasks manually, this will simply not scale to the size of the network you want.
So, the three things you need to remember in order to know what Network Cloud is: One, distributed and disaggregated infrastructure. Two, cloud-native software architecture. And three, that’s right, orchestration.

Now you know what Network Cloud is, and I will see you next time for more. Thank you very much for watching.

Episode 2 Distributed Disaggregated Networks

Why is the disaggregation we’ve known for years not enough?

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi and welcome back to Cloud Nets, where networks meet cloud.
Today we’re going to talk about disaggregation, but even more than disaggregation, we’re going to talk about distributed disaggregation. Why do we need it? Why is the disaggregation we’ve known for years not enough?
The long answer is that distributed disaggregation takes open networks to the next level. It allows you to have flexibility, scalability and agility in your network with very simple, basic building blocks. Not clear? Let’s talk about the short answer.
In the short answer you need to remember only three things in order to understand the essence of distributed disaggregation. One, basic building blocks. Two, multi-layer disaggregation, and three, clusters.
Okay, one: basic building blocks. We take just two building blocks, the NCP and the NCF. That’s the Network Cloud Packet Forwarder and the Network Cloud Fabric. Those two building blocks come in a very compact form, just two rack units high, and that means that from those two building blocks you can build any size and any scale of network.
This is interesting for you because you have fewer part numbers in your network, and the entire plan-to-deploy cycle is simpler and easier in terms of engineering, deployment, maintenance and spare parts. Avoiding vendor lock-in was never this easy.
Number two: multi-layer disaggregation. That means that we do not only disaggregate hardware from software, we also disaggregate the control plane from the user plane, and in the user plane we also disaggregate the fabric from the packet forwarding, the ports. So, if you need more ports, you upgrade only that part of the network. If you need more control processing power, you upgrade only the control part. If you need a higher-capacity fabric, you upgrade only the fabric. This is interesting for you because it gives you the granularity to upgrade just the parts you need, and this means much more efficiency in the way you use your hardware in the network.
And number three, clusters. Taking multiple white boxes, grouping them together and then abstracting them towards the application layer means that you can have any size of network or any size of site, with multiple boxes that look like a single hardware instance. Those multiple boxes are grouped together into a shared, unified pool of resources: compute (CPU) resources, network forwarding resources, TCAM memory resources, etc. Everything is blended together into a single pool of resources, and this is important for you, because using a cluster and a shared pool of resources is the leanest way to scale your network. You can upgrade your network and still have the best utilization of your hardware resources.
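To make the cluster idea a bit more concrete, here is a minimal sketch in Python (hypothetical names and numbers, not the DriveNets API) of how multiple white boxes can be abstracted into one shared, unified resource pool:

```python
from dataclasses import dataclass

@dataclass
class WhiteBox:
    """One white box and the resources it contributes to the cluster."""
    name: str
    cpu_cores: int       # compute resources
    ports_400g: int      # network forwarding resources
    tcam_entries: int    # TCAM memory resources

class Cluster:
    """Abstracts many boxes into a single shared pool of resources."""
    def __init__(self, boxes: list[WhiteBox]):
        self.boxes = boxes

    def pooled(self) -> dict:
        # The application layer sees only the totals, never individual boxes.
        return {
            "cpu_cores": sum(b.cpu_cores for b in self.boxes),
            "ports_400g": sum(b.ports_400g for b in self.boxes),
            "tcam_entries": sum(b.tcam_entries for b in self.boxes),
        }

# Two small building blocks appear as one large hardware instance.
cluster = Cluster([
    WhiteBox("ncp-1", cpu_cores=8, ports_400g=10, tcam_entries=250_000),
    WhiteBox("ncp-2", cpu_cores=8, ports_400g=10, tcam_entries=250_000),
])
print(cluster.pooled())
# {'cpu_cores': 16, 'ports_400g': 20, 'tcam_entries': 500000}
```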
So, three things in order to understand distributed disaggregation: basic building blocks, multi-layer disaggregation, and clusters.
Thank you for watching, see you next time.

Episode 3 Multiservices over a Shared Virtualized Infrastructure

This is about running multiple network functions over the same shared virtualized network infrastructure.

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi and welcome back to Cloud Nets, where networks meet cloud.
Today we’re going to talk about cloud-native or multiservice infrastructure and for that we brought our cloud-native or multiservice infrastructure expert, Run Almog.
That’s me.
Run. Thank you for joining.
Thank you.
Okay. So, what’s multiservice? We keep talking about it, but let’s understand what it is. The long answer is that, just like in cloud, multiservice is a system in which you run multiple services or multiple network functions over the same shared virtualized infrastructure, and with that you can achieve hyperscale economics and a lot of efficiency and cost-performance benefits.
Run, what’s the short answer?
I’ll break it down into three pieces. First off, it’s multiservice, right? The multiplicity of the services. Second is the infrastructure or the architecture underneath, the microservice architecture. And third is the resource sharing, which is a result of it.
So, I’ll explain each and every one of them as a stand-alone. Multiservice is the ability to run a different service on top of an existing infrastructure, so you don’t have to have dedicated hardware to run each and every one of your services in the network.
Second is the microservice architecture; this is how we built it. In order to have multiple services, which is something that hyperscalers are doing all the time, you need an efficient infrastructure that supports this, so that one service will not be interrupted and will run independently, without interrupting other services which exist on the same infrastructure. This is what the microservice architecture grants us.
And we put those in containers in order to separate between the different parts.
Exactly, exactly. In order to operate them in a simple manner, we containerize each and every one of these, and we make it possible to launch each such containerized service in a simple manner.
The outcome of this is better resource sharing. You have multiple network-oriented resources in your infrastructure. In order to best utilize all of these, you have different services that utilize different resources in different ways at different times. You want a mechanism that gives each and every service the required resources when and where they’re needed. This is the outcome of microservice architecture and multiservice as we are implementing it.
So, practically, when you put very different services on top of the same infrastructure, each utilizes different parts and different resources, and therefore you get very good utilization of this infrastructure, which we paid for.
Exactly, exactly. Because the services vary between them, if you use a dedicated box to run each and every service, then there are a lot of wasted resources in it.
Okay. Now we understand what cloud-native, multiservice architecture is.
You need to remember only three things. One is multiservice itself: running multiple services over one infrastructure. The second is the microservice architecture, the fact that you run those in silos, in containers separated from each other. And the third is the outcome, the resource sharing, the great resource utilization you get when you run cloud-native, multiservice applications. Now we know.
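As a rough illustration of that resource-sharing point, here is a small Python sketch (the services and utilization figures are invented for illustration) of why one shared pool needs fewer boxes than one dedicated box per service:

```python
import math

# Peak resource demand per service, as a fraction of one box's capacity.
services = {
    "core_router":    {"forwarding": 0.70, "cpu": 0.10},
    "bng":            {"forwarding": 0.30, "cpu": 0.50},
    "peering_router": {"forwarding": 0.40, "cpu": 0.20},
}

# Dedicated model: one box per service.
dedicated_boxes = len(services)  # 3

# Shared model: services blend into one pool sized for the summed demand.
total_demand = {
    "forwarding": sum(s["forwarding"] for s in services.values()),  # 1.4
    "cpu":        sum(s["cpu"] for s in services.values()),         # 0.8
}
shared_boxes = math.ceil(max(total_demand.values()))  # 2, set by the busiest dimension

print(f"dedicated: {dedicated_boxes} boxes, shared pool: {shared_boxes} boxes")
# dedicated: 3 boxes, shared pool: 2 boxes
```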
Thank you very much, Run, for explaining, and thank you for watching and understanding.

Episode 4 Network Orchestration

Orchestration is the secret sauce that puts everything together in the distributed disaggregated system.

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi, you’re back with us.
Good for you and good for us.
Welcome back to Cloud Nets, where networks meet cloud.
And today in Cloud Nets, we’re going to talk about Orchestration.
Orchestration is a very important thing and for that
I brought our Orchestration Expert, Run.
Thank you for joining.
Thank you for inviting me.
What is Orchestration and why do we need to talk about it?

First, the long answer.
The long answer is that Orchestration is the secret sauce that puts everything together in the Distributed Disaggregated system. You need something to act like a virtual distributed chassis and connect all the bits and pieces into one harmonized infrastructure.

What is the short answer?
Now, what are the three things you need to remember in order to understand what Orchestration is? Run?

Three things.
First is Automation.
Second is Service Introduction,
and the third is Simplified Operations.
I’ll explain.
First off, Automation.
You need to look at the cluster not as if it’s one huge problem that you need to solve, but as a collection of many small problems which repeat themselves all the time. So enabling an automatic process that solves those multiple smaller problems once, and then time and time again, makes it a very simple element to maneuver or to operate.
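A minimal sketch of that idea in Python (the task and box names are hypothetical, not DNOR’s actual interface): solve one small, repetitive problem once, then let automation repeat it across every box in the cluster:

```python
def upgrade_box(box: str) -> str:
    """One small, well-understood task, written and tested once."""
    return f"{box}: image staged, traffic drained, upgraded, verified"

def run_across_cluster(boxes: list[str], task) -> None:
    # The cluster is not one huge problem; it is the same small problem
    # repeated per box, so the automation scales with the box count.
    for box in boxes:
        print(task(box))

run_across_cluster(["ncp-1", "ncp-2", "ncf-1"], upgrade_box)
```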

Cool. Number 2?

Two is about service introduction.
Launching a new service into action is as simple as a click of a button, because it’s all based on software. Once the infrastructure is already in place, launching or turning on a new software, a new service, a new pilot in a new geography is a click of a button, and analyzing these services as they come into play becomes very, very simple. Innovation becomes enabled as a result of this. Some call it CI/CD. It’s just like CI/CD, only, in this case, a more practical approach to CI/CD, not the theoretical kind of obstacle that CI/CD brought into the game. And third, simplified operations.

The third item is about the fact that the same cluster is in place, and it serves multiple different groups within the service provider. These groups are used to looking at the device as if it’s their own, so they are granted the same look and feel as what they had until now, a similar networking look and feel. Even though that same infrastructure is serving other groups, this will go undetected. So if I’m in networking, I will still see a router. If I’m a security guy, I will still see a firewall, even though they use the same infrastructure. Precisely that, precisely that.

Ok, that’s great.
And those are the 3 things we learned today that explain what Orchestration is.
So the first was Automation.
The second was Service Introduction in a very fast and efficient manner,
and the third was Simplified Operations.

Thank you for watching.
Thank you for joining.
See you next time on
Cloud Nets. Bye bye.

Episode 5 Networking Optimized Infrastructure

It is taking networking functions and running them on networking-optimized infrastructure.

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi, you’re back.
You couldn’t resist it.
Welcome to Cloud Nets, where networks meet cloud.
And today we’re going to talk about Networking Optimized.
And for that we brought our Networking Optimized Expert: Run.
Thank you for joining. -Thank you for inviting me.

What’s Networking Optimized? If you look at the long answer, Networking Optimized is taking network functions, usually data plane functions, and running them on networking-optimized infrastructure. Infrastructure that includes ASICs and NPUs that optimize the way we handle networking functions.

So let’s talk about the short answer.
Ok. So I will break it down into 3 pillars.
First is NPU offload, then compute and networking, and then footprint at the edge, right?
And I’ll explain each and every one of them.

First off, NPU offload.
Network functions need to run on network-oriented devices.
I mean, you can do anything you want on a CPU in terms of its flexibility, but it’s not really efficient. So you want network-oriented operations done by a network-oriented device. And this is where the NPU comes into play. In terms of efficiency, it’s a one-to-ten more efficient process than running it on a CPU.
So if you try to run it on a server, you’ll end up with 10 times the size and footprint?
Exactly. 10 times the footprint. More or less 10 times the footprint.
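Here is the back-of-the-envelope math behind that exchange, with illustrative throughput figures only (the real ratio depends on the workload and the silicon):

```python
required_tbps = 10        # capacity the site needs
npu_tbps_per_ru = 2.0     # assumed throughput per rack unit, NPU-based box
cpu_tbps_per_ru = 0.2     # assumed throughput per rack unit, CPU-based server

npu_rus = required_tbps / npu_tbps_per_ru   # 5 rack units
cpu_rus = required_tbps / cpu_tbps_per_ru   # 50 rack units

print(f"NPU-based: {npu_rus:.0f} RU, CPU-based: {cpu_rus:.0f} RU "
      f"({cpu_rus / npu_rus:.0f}x the footprint)")
# NPU-based: 5 RU, CPU-based: 50 RU (10x the footprint)
```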

Second is about compute and networking.
Wherever you have an instance where compute cloud meets the network, you have an operational hazard. Right? And there are a lot of elements. There is a lot of orchestration that needs to be put into place. There are a lot of management elements that need to be orchestrated and handled at this interface. This is cumbersome. What we are doing, or what we are trying to do, is interleave the network into the compute instance.
So this interface, this painful interface, will cease to exist.

Number 3 is about the footprint at the edge.
You do need compute at the edge. It makes a lot of sense. Edge compute is all over the place. But adding more equipment into this location, which is already very tightly packed and very scarce in terms of real estate, is difficult. So taking the existing equipment that you already have in place, which has some compute in it, and utilizing it to maximum effect makes a lot more sense than just pouring more and more servers into that same location.

One, two, three.
Ok, so now we know what networking optimized means.
Basically, you need to remember: it’s about NPU offload, it’s about compute and networking infrastructure, and it’s about the footprint at your edge sites.

Thank you very much for watching. Thank you for explaining.
See you next time.

Episode 6 Network Cloud for Multi-vendor

So third parties, multi-vendor: what are we talking about? Because DriveNets Network Cloud infrastructure can run not only the DriveNets applications or network functions, but also third party applications.

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi and welcome back to Cloud Nets, where networks meet cloud.
Today, we’re going to talk about Multi-vendor Network Cloud, because DriveNets Network Cloud infrastructure can run not only the DriveNets applications or network functions, but also third party applications. And in order to understand this, we need to talk to an expert
and this is Run. -Expert.

Run, so, third parties, multi-vendor: what are we talking about?

It breaks down to three things.
First off, it’s the architecture.
What kind of architecture are you running? Second, acceleration. And third, placement. I’ll explain each and every one of them. Architecture means that you want the ability to manage both the network and the services that are running on top of the network from the same platform. That saves you the trouble of managing two different platforms at the same time.

Second is acceleration.
The application running on top has some functions which are more network related and some which are more compute related. You want the right ASIC to do the right task. That accelerates how the application is running.

And third is placement.
You want the ability to position this application anywhere you want in the network. That gives you the ability to operate the function where it matters the most.

Three things. – Excellent.

So to conclude, the Network Cloud multi-vendor approach means that we have the architecture to host third parties on top of the networking-optimized infrastructure, the acceleration that makes them more efficient, and the ability to place them anywhere in the network.

Exactly. -Thank you, Run, for explaining and thank you for joining. See you next time.
-Thank you.

Episode 7 High Availability

Today, we’re going to talk about high availability in the Network Cloud, because in spite of what you may think, the Network Cloud Infrastructure presents higher availability than the traditional chassis.

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi, welcome back to Cloud Nets, where networks meet cloud.
Today, we’re going to talk about high availability in the Network Cloud, because in spite of what you may think, the Network Cloud infrastructure presents higher availability than the traditional chassis.
And to understand that we need an expert!
Let’s call the expert, Run.
Hi! So, how come? How come?

Three things.
First off, it’s the failure rate.
Second, it’s the failure radius.
And third, it’s the network maintenance.
I’ll explain each and every one.

Failure rate means how often we actually have a box that fails. When you have simpler boxes, you have a lower failure rate. Network Cloud uses standard white boxes, ODM pizza boxes, which are much simpler equipment, and therefore have a lower failure rate and a higher MTBF.

Second is the failure radius.
The bigger they are, the harder they fall.
Right? When you have a big chassis and it collapses, it damages the complete network. When one small box collapses, it’s small damage. – Blast radius.
Easy.

Third is the network maintenance.
You want the ability to perform certain operations on the network, like an upgrade or a change, without actually impacting the availability of the network at all. This is what Network Cloud is able to do, because it is a distributed network. So it’s like surgical accuracy. In a way, it’s kind of breaking the problem into many small problems, and solving small problems is much easier.

Great. So, the Network Cloud infrastructure presents higher reliability, which is important for, well, everyone, in these days of the internet serving every aspect of our lives.
And this is because of the lower failure rate, or the longer MTBF, the smaller blast radius or failure radius, and network maintenance procedures that are isolated from the rest of the network.
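To put rough numbers on the failure-radius point (all figures invented for illustration), compare what one failure costs in a single big chassis versus a cluster of small boxes with the same total capacity:

```python
chassis_capacity_tbps = 20.0    # one monolithic chassis
boxes_in_cluster = 10           # or ten small white boxes instead
box_capacity_tbps = chassis_capacity_tbps / boxes_in_cluster  # 2 Tbps each

# A chassis-wide failure (e.g., its fabric) puts the whole site at risk.
chassis_blast_radius = chassis_capacity_tbps          # 20 Tbps
# A single white box failing takes down only its own slice of the pool.
cluster_blast_radius = box_capacity_tbps              # 2 Tbps

share = 100 * cluster_blast_radius / chassis_capacity_tbps
print(f"chassis failure impact: {chassis_blast_radius:.0f} Tbps")
print(f"single-box failure impact: {cluster_blast_radius:.0f} Tbps ({share:.0f}% of the site)")
```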

Thank you, Run, for this.
Thank you for joining. And see you next time on Cloud Nets.

-Thank you.

Episode 8 Network TCO

Disaggregation of hardware and software, software-based cloud-native instances, etc., all have a dramatic effect on your total cost of ownership. So, when you go to disaggregation, to the Network Cloud, you save a lot of money.

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi and welcome back to Cloud Nets, where networks meet cloud.
And today we’re going to talk about TCO – total cost of ownership, and for that we brought our TCO analysis specialist – Run.
Yes. Hi, Run.
Ok, so, TCO and disaggregation, TCO and Network Cloud. We need to understand that Network Cloud means building the network like you build cloud infrastructure. Disaggregation of hardware and software, software-based, cloud-native instances, etc., etc.
These all have a dramatic effect on your total cost of ownership.
The fact that you use white boxes, the fact that you can scale services easily, the fact that you can deploy new services at a much faster pace.
Run, what is the short answer?

Three points.

First off, it’s a lower cost hardware.
When you’re getting a white box, the one thing that you’re not paying for is the logo on the box. Hence why it’s called a white box. Exactly. So that’s one thing. The hardware is the same hardware, the same quality, but it just costs less.

The second item is about resource utilization. It’s not just about how much capacity you have, but what you make of it, right? And when you can run several applications on top of the same infrastructure, then you better utilize that resource and use it as a kind of pool, just like it’s being done in the cloud. So not only do you use lower-cost boxes, you need less boxes.
You need fewer boxes? Correct.

And the third item is about automating your operations. This is exactly what has been done in the cloud. When you have a lot of small problems, applying automation to them is easier than trying to use automation to solve one big problem. And this is exactly what disaggregation gives you as you build a network.

Ok, so basically, what we’re saying, and what you need to remember, is that when you go to disaggregation, when you go to the Network Cloud, you save a lot of money. Your total cost of ownership is much lower because you use white boxes, which are, you know, less costly than logoed boxes. You need fewer boxes because you use them more efficiently. And your entire operation, your OPEX, is reduced because you use automation, and you have better flexibility to accommodate what you need with what you have and to better use your infrastructure.
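A toy five-year TCO comparison in Python, combining the three points above; every price here is invented purely for illustration:

```python
years = 5

# Traditional model: logoed boxes, one per network function, manual operations.
logoed_box_price = 100_000
dedicated_boxes = 6
manual_opex_per_year = 120_000

# Network Cloud model: white boxes, shared across functions, automated operations.
white_box_price = 60_000
shared_boxes = 4               # fewer boxes thanks to resource sharing
automated_opex_per_year = 60_000

traditional = logoed_box_price * dedicated_boxes + manual_opex_per_year * years
network_cloud = white_box_price * shared_boxes + automated_opex_per_year * years

print(f"traditional 5-year TCO:   ${traditional:,}")    # $1,200,000
print(f"network cloud 5-year TCO: ${network_cloud:,}")  # $540,000
```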
That’s our TCO story for today. Thank you for watching.
Thank you for telling us. See you next time. Bye bye.

Episode 9 Edge In – Core Out

Edge compute, edge networking, Telco Cloud. All those buzzwords mean one thing: the content, the essence of the network, the cloud instances are moving to the edge, to the edge of the network, to the edge of the cloud. That means they want to be closer to the subscribers.

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi and welcome back to Cloud Nets, where networks meet cloud.
Today, we’re going to talk about edge and core.
Practically, edge is in, core is out.
And for that, we brought our expert for edge-core in and out – Run! -And beyond.
-And beyond.
Thank you for joining, Run.
Edge compute, edge networking, Telco Cloud. All those buzzwords mean one thing: the content, the essence of the network, the cloud instances are moving to the edge,
to the edge of the network, to the edge of the cloud. That means they want to be closer to the subscribers.
They need to be closer to you who consume the content and all the good stuff that there is
in the cloud. But there is a right way and there’s an efficient way to do it. And for that, Run, what do we need to know?
Basically, three points.
Surprise.
Yeah, it’s a surprising three points. Yeah.

First off is that you want to utilize your network in an efficient way. Taking your service from where it’s being consumed all the way to the core and back consumes a lot of resources of the network in between. You want to avoid that, so you want to get the services closer to the subscriber.

Point number two: you want as many service points as possible to be available. You want them distributed. If you have fewer points, then you need to cover a lot more distance in order to reach one, so you want them distributed.

And thirdly, you want to have this connection point, where the network meets the cloud, reduced. Wherever the network meets the cloud, you have a failure point of the network. You need to apply redundancy over redundancy. There is a lot of overhead and there is a lot of cost embedded in this connection point. What you want to do is minimize this connection point and, in fact, embed the service directly onto the network.

Ok, so that’s interesting.
So first, and these are the things we need to remember in order to follow this edge trend efficiently and the right way: we need to take the cloud, the instances of the compute and the workloads, as close as possible to the customer. This reduces all the traffic going to the core and back, which burns a lot of resources.

Second, you want to distribute it as much as possible, because the more points of service or workloads you have across the entire infrastructure, the greater the chance you will be close to one of them.

And the third, and this is, I think, the most important one, is that you want to blend, to merge the infrastructure of the cloud, of the workloads, of the compute, with the network infrastructure, because this is more efficient, and this is how you avoid those cloud-network interfaces, which cost a lot of money and cause a lot of trouble.

Did I get it right?
-You got it right.

The bottom line is that you want to have the service as close as possible to the subscribers.
You can’t get it closer than being within the network itself.
-Absolutely.
Thank you very much, Run.
Thank you for joining.
See you next time on Cloud Nets.
Bye bye.

Episode 10: Disaggregated Networks Standardization

Standardization is especially important for disaggregated networks: since software and hardware come from different vendors, the interconnect between them needs to be standardized. But how does it reflect in the Network Cloud world?

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi and welcome back to Cloud Nets, where networks meet cloud.
Today, we have a very, very boring topic: standardization, and because we talk about standardization, we brought our not so standard expert –
Run. Expert on standardization.
Yeah, so standardization is important because, especially in the disaggregated world, when you take software and hardware from different vendors, you need to talk about the interconnect between those, the interfaces, and you need to standardize them, just like you do in the mobile world, where 3GPP defines standard open interfaces and you can take different parts of the network from different vendors.
But how does it reflect in the network cloud world, Run?

Well, surprisingly, it breaks down to three pillars.
All right? As opposed to what, you know, the common thinking would be, that disaggregation kind of leads into proprietary. It’s not. It’s actually open. First off, the standards, the definitions of the boxes, are based on OCP certification. OCP is the Open Compute Project. So when OCP certifies a device, there is a committee, coming from the industry, that looks into the specs, into the definitions, and approves them. Whenever these items are changed, the committee needs to reconvene and reapprove them.
The conclusion is that the resulting spec is also open. Therefore, any hardware manufacturer that would like to step in, provide these devices and build them already has a written spec; just download the spec and implement it.

So that’s item number one. And OCP created DDC, right?
Distributed Disaggregated Chassis. Exactly. And the example there was a spec coming from AT&T. They made the contribution. Obviously, it was reviewed by the entire committee, then published, and now it’s out and open. You can just Google “DDC, AT&T, OCP” and you will find the spec as is. And you can build your own DDC.
-Yeah, yeah.

Number two: TIP, the Telecom Infra Project, has kind of stepped into this domain as well, and they made definitions as to what a… not a DDC, now it’s called DDBR, but essentially it’s the same thing. What does it need to do in terms of routing functionality, so that that set of functionality is well defined, as well as what the interfaces of this eventually resulting device are: how does it interface to the management level, to the management plane or management devices, the software that orchestrates the complete network?
So that’s item number two.

Item number three, and this is what’s interesting: both TIP and OCP have taken more of a practical approach to making these standards. Let’s put it all in place, let’s get the inputs from the vendors as to what they have already implemented, and let’s practically build something that works, and then wrap a standard around something which is already working. This is versus the previous method, which was kind of “let’s define everything and then allow the industry to implement according to what is already fully detailed”, which is a lot more cumbersome.
You could say that’s a waterfall versus an agile approach, applied to standardization.
So the bottom line, in a nutshell, is that those industry bodies, TIP and OCP, take a practical approach and define the architecture in order to avoid or overcome vendor-controlled, proprietary implementations of disaggregated systems.
-Exactly. That’s the logic.
Thank you very much, Run.
Thank you for watching.
See you next time on Cloud Nets.

Episode 11: Scale Smart for Networks

Networks today need to scale, requiring more and more capacity. They also need to introduce new services in order to stay competitive and to satisfy the needs of the hungry subscribers. How do we deal with a data tornado?

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi and welcome back to Cloud Nets, where networks meet cloud.
And today we’re going to talk about scaling.
And we have our scale expert… Run.
So we’re going to talk about scale out, but we’re going to talk about how to scale out smartly.
And, you know, networks today need to scale; they need more and more capacity. And they also need to introduce new services, many more services, lucrative services, in order to stay competitive and to satisfy the needs of us, the hungry subscribers.

Run, how do we do it?
How do we deal with a data tornado? It’s not just a data tornado, it’s also a services tornado. So this is spiking and this is spiking.

So, three points here.
First off, you want to increase overall capacity. The model of a disaggregated router or network infrastructure enables you to add more and more boxes onto what is essentially the same device. The same network element can build itself out within this CLOS topology, adding more and more boxes, sort of like line cards, and capacity is increased in terms of port count and in terms of throughput. So, no forklift. Just add granular boxes? No. That “beep-beep-beep” of the forklift… no. We don’t have that. You can carry it. Yeah, it’s cool, cool, cool. And even with my back, I can pick it up.

Number two. Number two is about the services. Services are applied onto the existing infrastructure. Now, when you have a dedicated device which serves only one purpose, you need multiple devices, and they are poorly utilized. Because, whenever I want a new service, I need a new device or appliance to be introduced. -Exactly. So when you introduce a new service, you want to introduce it onto your existing infrastructure. And when that service grows, you want to grow it within that infrastructure too, so you don’t need that same forklift from before to take something out and bring something new in.

Number three is: how do you do all of this in a smart, fast, easily orchestrated manner?
And this is where DNOR, our DriveNets Network Orchestrator, comes into play, allocating the right resources for the right service and enabling you an easier way to take that service and launch it onto the network. This is where the orchestration of all of these services onto that infrastructure comes into play. Ok, and I think that when we talk about scaling smart, we talk about scaling, first of all, with lower TCO, because we better utilize the boxes, we don’t throw away old boxes, we put multiple instances over the same infrastructure, and we do it in an orchestrated manner. And of course, our ability to introduce new services is much, much faster, because you don’t need new boxes; you just introduce a new Docker container, put the service in it, on the already existing infrastructure. So this is how you scale smart.
Don’t forget.
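As a tiny sketch of that scale-out idea (hypothetical numbers, reusing the cluster notion from episode 2): capacity grows by adding a white box to the existing cluster, and nothing already installed gets forklifted out:

```python
cluster_ports = [10, 10]   # two NCPs today, 10 x 400G ports each

def scale_out(ports_per_new_box: int) -> None:
    # The new box joins the same logical device; its ports join the pool.
    cluster_ports.append(ports_per_new_box)

print(f"before: {sum(cluster_ports)} x 400G ports")   # before: 20 x 400G ports
scale_out(10)   # one more white box is cabled in, and that is it
print(f"after:  {sum(cluster_ports)} x 400G ports")   # after:  30 x 400G ports
```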

Scale like cloud. -Yeah.

Thank you very much, Run.
Thank you for watching.
See you next time on Cloud Nets.
Bye bye. -Bye Bye.

Episode 12: Network Sustainability

With regulators talking more and more about zero carbon emissions, going green and sustainability are high priorities. Service providers are feeling the push and need to put things in place in their operations. Why is sustainability so important for network operators?

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi, and welcome back to Cloud Nets, where networks meet cloud.
And today we save the Earth.
Today we’re going to talk about sustainability and our Green Expert – Run – has joined us.
I don’t have green on my shirt…
Ok, but it’s all black…

Ok, so, sustainability. Why is it important, other than the obvious, you know, that we want our children to live on a planet which survives?
Regulators are talking about zero emissions, zero carbon emissions. Operators need to comply.
We see it in a lot of RFPs. We need to understand how to make our network zero emission, green, sustainable.
What are the tools we have? This zero carbon thing becomes a real issue, and CSPs are kind of forced, or pushed, into it.
The timeline could be 2024, 2025, maybe even later, but you need to put things in place already at this point.
So first off… There are 3 things, right?

Item number one is about the measurement of power per port. This is the common practice that all the hardware vendors are pushing, kind of to promote how they build their hardware: essentially, how much wattage is consumed by one specific port. That’s not really relevant. What’s really relevant is how effective the power utilization of that port is. So what matters is how much you make out of the power that you spend, not how many ports you can light up.

Item number two is the ability to apply multiple use cases onto that same infrastructure. This is what pushes up your resource utilization. So again, how much wattage you are investing into your infrastructure is one thing; how much you are pulling out of it is the real item of interest.

And item number three is talking about what’s known as circular economy. Circular economy is looking at that same network equipment which is being used in the network and serving its purpose, and asking to what extent you can repurpose that same equipment, either while it’s active or later on, when it’s kind of being forked out of the network. So use it in another place in the network, or for another application. Exactly. When you build your network from small bricks, small building blocks, then you can repurpose these building blocks to build a different type of network, a different size of network node, a standalone equipment versus a cluster, devices which are more oriented towards the access of the network versus those which are more in the aggregation and core. It’s a matter of size, and when you have one building block, you can reposition it essentially anywhere you want in the network, and that kind of extends the life expectancy of each and every box. So even if you need less capacity in one location and more in another, you can take one white box from this site to the other and avoid having an unused box. Exactly. Exactly. Repurposing of equipment.
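A quick illustration of the power-per-service idea discussed above (wattages invented for illustration): the same port burns the same watts, but the more services it carries, the less power each service costs:

```python
port_power_watts = 25    # what vendors usually quote: watts per port

services_on_dedicated_port = 1   # dedicated appliance: one service per port
services_on_shared_port = 4      # shared Network Cloud port: several services

dedicated = port_power_watts / services_on_dedicated_port
shared = port_power_watts / services_on_shared_port

print(f"dedicated: {dedicated:.2f} W per service")  # 25.00 W per service
print(f"shared:    {shared:.2f} W per service")     # 6.25 W per service
```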

Ok. So, again, this is a very important topic, and I think the bottom line we need to remember is that, you know, better ASICs are great, and better power-per-port performance is good. But what is more important is how much you utilize the port. How many services? What essence do you pull out of this port? And then the measurement is power per service, not power per bit or per port. Exactly. Save money, save the planet. Ok. And on that note, thank you very much, Run.

Thank you for watching. See you
next time on Cloud Nets.

Episode 13: Network Cloud, Our Real-World Experience

Why is it important?
As a newcomer to the networking space, DriveNets faced doubts about its disaggregated approach.
Explore how service providers have experienced the benefits of network disaggregation.

Listen on your favorite podcast platform

Listen on Apple Podcasts
Listen on Spotify

Full Transcript

Hi and welcome to a very special episode of Cloud Nets, where networks meet cloud.
Because the value of Network Cloud is not theoretical anymore.
We have stories from the field, and for that, we brought our man in the field – Run.
That’s me.
Sitting by the fireside and talking about experience with the Network Cloud.
So, Run, tell us stories about the Network Cloud in the field.
Yeah, a few stories.
As a newcomer, as an entrant into the networking space, we had some doubts placed upon us. And in a way, a few stories really tell what the details are. So, obviously, there are issues with protocols and how we interoperate with other devices. That’s well known. That’s common. We have our experts in the field and they know how to fix network protocol and connectivity issues. Easy stuff.
That’s not really the issue. But there are some items which really boil down to what the benefit of disaggregation is.
And I will give a couple of experiences that we had.

We had one case where we needed to enlarge a network node.
It grew faster than what was expected, so it was…
We talk about a live network.
Yeah, yeah. This is a live network scenario, a real deployment running live traffic, and this point of presence actually needed to grow faster than what was planned. So there were no devices. So orders were put in place and, because it’s a standard, off-the-shelf white box, it was easy to obtain these devices, even though they were not preordered a year in advance. So we could get these devices to the site. But you needed the right people to do the installation. Now, this device was located in a data center, and because essentially it’s just adding more standard white boxes and applying the connectivity, the new hardware automatically joins the existing infrastructure, and everything is done automatically.
So the skill set required from the one who…
It was essentially a zero skill set. We used the guys on site, the guys, you know, operating the data center, doing all the day-to-day connectivity work. They were the ones that took in the order and physically connected the box into the network, and from that point onwards, everything was done remotely via software.
So essentially, we doubled the capacity of that site within 48 hours from the order.
Forty-eight hours, and the site was doubled in capacity.
Zero downtime during the process. Everybody is happy and you could not do
that with the traditional method. This is where disaggregation comes into play.
Wow. Ok.
Really exciting.
That’s a good story. Do you have another one?

There was another case which was, you know, on the down side of things.
You know, networks break, devices break, there are always issues of failure, and we also experienced such a case…
And it could be hardware related, software related…
In this case, it was hardware related, which is kind of…
Again, kind of put upon us, because we are not the hardware provider and sometimes the hardware fails. It was an NCF, one of the fabric devices, and again, a live production site, and one of the NCFs failed. We needed to put a new one in place. So, you know, accessing the spare repository and bringing in a new NCF.
NCF is the Network Cloud Fabric.
Exactly. The fabric element, the fabric element of the cluster.
Taking out the faulty device, putting in the new one, rearranging the connectivity accordingly. Everything was done, again, with local personnel. But the interesting thing is that all of this happened with zero outage to the network.
Usually when the fabric fails, you have one heck of a blast radius.
Not only that, it’s the blast radius when you have a fabric element in a chassis. It’s the amount of risk that you’re taking because one fabric device impacts the entire chassis.
Accessing the chassis – sometimes it’s from the rear, pulling it out.
It impacts the power distribution within the chassis. I had cases, you know, in my history, where you pull out a card and another card resets as a result.
You don’t have that when you have a disaggregated model.
So, you took, like, the most stringent scenario of your fabric device failing, and essentially it resulted in zero downtime to the network.
Wow, this is amazing. And again, these are real stories.

You know, it’s not the script we wrote. It’s not a marketing pitch.
We could not invent that script.
Ok, so thank you very much, Run. Off you go back to the field…
Back to the field. -Thank you for watching.
Can I finish my tea? -Yeah, but do it fast.
Ok, thank you very much. Thank you for watching.
See you next time on Cloud Nets. Thanks. Bye bye.