When it comes to the networking infrastructure, though, network slicing usually translates to the different types of VPNs (VLAN, MPLS, SR and others). Such implementations require (in the case of hard network slicing) an allocation of infrastructure resources (compute, networking, TCAM, etc.) to each slice.
The Challenges of Network Slicing
But network slicing can be a dynamic beast. Yes, you have a network slice per service type (typically a slice for real-time applications, a slice for best-effort ones, and a slice for control and management plane traffic), but you can also have a slice per customer (if you provide a “Slice-as-a-Service” VPN, or even a virtual network). And when you introduce a new type of service, you might need to create a new slice from scratch. A new slice means reallocating resources, creating a new set of network functions/application workloads and, in some cases, creating peering interfaces between the new slice and the existing network slices.
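To make the cost of creating a slice concrete, here is a minimal sketch of hard slicing as resource carving. The data model (`SliceResources`, `NetworkSlice`, `create_slice`) is entirely hypothetical and not any vendor's API; it only illustrates that a new slice consumes a dedicated share of a finite pool and may need explicit peering records:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of a hard network slice: each slice is
# pinned to a dedicated share of infrastructure resources, and may peer
# with other slices through explicit peering interfaces.
@dataclass
class SliceResources:
    compute_cores: int
    bandwidth_gbps: float
    tcam_entries: int

@dataclass
class NetworkSlice:
    name: str
    service_type: str              # e.g. "real-time", "best-effort", "control"
    resources: SliceResources
    peers: list = field(default_factory=list)  # names of peered slices

def create_slice(inventory: SliceResources, spec: NetworkSlice) -> SliceResources:
    """Carve the slice's allocation out of the shared inventory.

    Raises ValueError if the infrastructure cannot satisfy the request --
    the case where a new service type cannot be onboarded without
    re-planning the existing slices.
    """
    req = spec.resources
    if (req.compute_cores > inventory.compute_cores
            or req.bandwidth_gbps > inventory.bandwidth_gbps
            or req.tcam_entries > inventory.tcam_entries):
        raise ValueError(f"insufficient resources for slice {spec.name!r}")
    return SliceResources(
        compute_cores=inventory.compute_cores - req.compute_cores,
        bandwidth_gbps=inventory.bandwidth_gbps - req.bandwidth_gbps,
        tcam_entries=inventory.tcam_entries - req.tcam_entries,
    )
```

For example, carving a 16-core, 100 Gbps slice out of a 64-core, 400 Gbps pool leaves 48 cores and 300 Gbps for the remaining slices; a second oversized request would fail and force re-planning.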
When it comes to time- and/or latency-sensitive applications (such as 5G URLLC use cases), the tasks mentioned above become much more critical. Service and workload placement, resource allocation, and peering point creation should all take the current status of the networking infrastructure into account. What are the available capacity and current latency at each part or node of the network? What resources (compute and networking) are available at the different sites the network slice requires? These are just some of the questions to answer.
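Those questions amount to a placement decision driven by live infrastructure status. A minimal sketch, with made-up site names and figures: filter sites by available capacity, then pick the lowest-latency survivor.

```python
# Illustrative only: choose a site for a latency-sensitive workload by
# filtering on available capacity and then minimizing measured latency.
def place_workload(sites, required_gbps):
    """Return the name of the lowest-latency site that still has capacity.

    `sites` maps site name -> (available_gbps, latency_ms).
    """
    candidates = {
        name: latency_ms
        for name, (avail, latency_ms) in sites.items()
        if avail >= required_gbps
    }
    if not candidates:
        return None  # no site can host the slice right now
    return min(candidates, key=candidates.get)

sites = {
    "edge-a": (40.0, 2.5),
    "edge-b": (10.0, 1.0),   # lowest latency, but not enough capacity
    "core-1": (200.0, 8.0),
}
print(place_workload(sites, required_gbps=25.0))  # → edge-a
```

Note that the "best" answer changes whenever the telemetry does, which is exactly why a static, one-time placement is not enough.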
Enter Network Cloud – The Significant Value of DriveNets’ Network Cloud Architecture
This is where the Network Cloud architecture brings significant value. The different parameters you need to take into consideration when building a new network slice are dynamic. The available capacity, average latency, and required resources can change within seconds. The only way to build a network slice that truly reflects all of the above is to use a dynamic architecture – available only through the Network Cloud.
Because Network Cloud uses a unified pool of resources and can allocate each resource to a different workload (i.e., a service instance or network function), you can create a network slice that optimizes a set of parameters (e.g., capacity, latency and resources) and keeps them optimized by dynamically reallocating the required resources to the slice whenever network and infrastructure conditions change.
An example of such a change is a sports or cultural event in which resources need to be redirected toward the event’s venue. Not only can the time-sensitive slice (e.g., a public-safety virtual network slice) be allocated additional resources to serve this specific event, but the application-layer workloads of this slice can also be duplicated and/or ported to an available compute resource that is closer to the venue (reducing latency).
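The event scenario above can be sketched as a rebalancing step: when telemetry shows spare, lower-latency capacity near the venue, an orchestrator duplicates the workload there. The function, site names, and numbers are all illustrative assumptions, not a real orchestration API.

```python
# Hedged sketch of event-driven reallocation: duplicate a workload to a
# site near the venue when that site has spare capacity and lower latency.
def rebalance(placements, telemetry, workload, venue_site):
    """Duplicate `workload` to `venue_site` if it is a better location.

    `placements` maps workload -> site (or list of sites after duplication);
    `telemetry` maps site -> (available_cores, latency_ms).
    """
    avail_cores, venue_latency = telemetry[venue_site]
    current_site = placements[workload]
    _, current_latency = telemetry[current_site]
    if avail_cores > 0 and venue_latency < current_latency:
        # Keep the original instance and add one near the venue.
        placements[workload] = [current_site, venue_site]
    return placements

placements = {"public-safety-vnf": "core-1"}
telemetry = {"core-1": (32, 9.0), "edge-venue": (8, 1.5)}
print(rebalance(placements, telemetry, "public-safety-vnf", "edge-venue"))
# → {'public-safety-vnf': ['core-1', 'edge-venue']}
```

Running the same step again after the event, with the venue site drained, would reverse the duplication in a fuller implementation.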
And it’s not just resources. The architecture itself can be dynamic. DriveNets’ Network Cloud architecture allows you to “rewire” your “physical” connectivity in a virtual manner. Connecting different network functions within a slice or creating peering points between network slices does not require any physical change. Since all network slices and workloads/network functions reside on the same hardware infrastructure, moving resources from one slice to another, or duplicating and placing a network function, is just a matter of a software change (managed by the orchestration system).
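As a final sketch of that point: when all slices share the same hardware, a peering point is a change to topology data that the orchestrator pushes, not a re-cabling job. The dict-based topology below is a made-up stand-in for a real orchestration interface.

```python
# Sketch: "rewiring" as a pure software operation. Creating a peering
# point between two slices is just an update to the slice topology data.
def add_peering(topology, slice_a, slice_b):
    """Record a bidirectional peering between two slices."""
    topology.setdefault(slice_a, set()).add(slice_b)
    topology.setdefault(slice_b, set()).add(slice_a)
    return topology

topology = {"real-time": set(), "best-effort": set()}
add_peering(topology, "real-time", "control")
print(sorted(topology["real-time"]))  # → ['control']
```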
Those benefits are relevant and valuable in any network, but when it comes to a sliced network, the Network Cloud architecture becomes a must.