
The Operational Impacts of Supporting a Disaggregated, Distributed, Cloud-based Network Architecture

At NANOG 87 (2023)

Aliraza Bhimani, a principal network engineer at Comcast Cable, gave an insightful presentation, "The Operational Impacts of Supporting a Disaggregated, Distributed, Cloud-based Network Architecture," at NANOG 87 (2023). He shared some of Comcast's insights from working with the DDC/DDBR (Distributed Disaggregated Chassis / Distributed Disaggregated Backbone Router) architecture, sometimes referred to as a "virtual chassis," and discussed how network disaggregation helps simplify Comcast's operational complexities.


Full Transcript

For our next session I'd like to introduce Aliraza Bhimani.

Over there.

Okay, cool. He's on his way up. He will be presenting the operational impacts of supporting a disaggregated, distributed, cloud-based network architecture. Ali is a principal network engineer and team lead at Comcast Cable and is coming to us today from New Jersey. This is Ali's first time presenting at NANOG. It's a pleasure to have him speaking with us today. Welcome, Ali.

Good morning, everyone. I hope you can hear me. Here we go. Good morning, everyone, and welcome to NANOG 87. My name is Ali Bhimani. I just want to take a quick opportunity to thank NANOG and the NANOG Program Committee for giving me this opportunity today to talk to you about the operational impacts of supporting a disaggregated, distributed, cloud-based network architecture. So let's dive in and dissect that long title.

Managing and maintaining today's highly scalable networks is very complex. Today's IP network traffic demands are driven by video, gaming, and, as we've seen recently with the pandemic, working from home. Future growth trends are based on 5G and 6G cellular traffic volumes, which increase port and bandwidth demands, along with commercial services port growth and DOCSIS 4.0 multiple symmetrical gigabit speeds. And as Elad Nafshi just went over with all the growth we're seeing with 10G, port capacities keep increasing: we start with 100 gig, 400 gig, 800 gig, and we're even seeing up to 1.2 terabits per port. Internet usage has increased by an astounding 1,355% over the last 22 years. As my esteemed colleague Manan Venkatesen, Distinguished Engineer at Comcast Cable, noted in his NANOG 81 (2021) presentation, there's a significant amount of port growth that we need to support. We all know Internet traffic has exploded within the last decade. The ports you need to support the traffic are proportional to the traffic volume, so we need to make sure we stay on top of the port capacity that we support on the core and aggregation routers.

Cable operators now have to rethink and re-architect their existing IP networks and operations to maximize performance and efficiency. You also have to consider the total cost of ownership of various hardware and software options; there's always a trade-off between feature-rich software and bare-bones hardware, end to end, not just that of an individual component or a software option. It's critical for network operators to understand what's needed for end-to-end delivery rather than just the individual component costs and specs, which may not paint the whole picture. Time to market of new services, and keeping your network operations simple so that your operations team can troubleshoot it much more easily, are very key points. Reducing the blast radius and impact of a component failure is also key: if you lose one router on the primary side and then lose the router on the redundant side, you have the potential to impact millions of customers, put the network in a hazardous condition, and even isolate a network, either the one managed by you or that of a peer. So reducing that blast radius is really important. And then all of these components combine to keep the network available. Whether your management needs four nines, five nines, eight nines, however many nines your management needs, all of these key components play into the availability of the network as a whole.

So in this presentation we'll dive into the disaggregated distributed chassis, or DDC, and the disaggregated distributed backbone router, or DDBR, and how these open source specs can solve operational challenges when building and scaling your IP network. Cable operators can leverage these building blocks and implement real-time concepts of the DDC and DDBR architecture. One of the most important points is that you'll have to utilize orchestration, automation, and analytics.
That's going to be table stakes, as we'll discuss a little further, along with how to shift the IP backbone architecture from a single chassis to a disaggregated, distributed, scalable solution. We'll also touch upon how we can overcome operational challenges and some of the pitfalls and considerations that we ran into while proactively positioning ourselves for future disaggregated network solutions.

So, disaggregation as a success criterion for ISPs. Some of the key points are that disaggregation is driving competition. We're seeing a lot of opportunities now for new players and new vendors to enter the market, which is driving costs down and creating a little more competition. This is breeding innovation: there's open software and open hardware that improves flexibility and breeds innovation while reducing time to market. This model also allows operators to purchase capacity incrementally as it's needed. Operational efficiency: taking advantage of the centralized control and monitoring tools that we'll talk about will increase reliability, right? We're always targeting higher availability and multi-level redundancy while minimizing blast radius impact to decrease customer impact and outages.

So this is what DDC/DDBR looks like in the network. The system can be placed in the IP/MPLS backbone or in the access layer, and it can also act as an Internet gateway router, all while being hardware agnostic for its network operating system.

What are some of the differences between traditional routers and this new concept, and what's common? Just like regular routers, you have line cards, fabric modules, route processor modules, power supplies, and fans. What's unique: modular chassis routers use pluggable line cards, whereas the new system uses standalone pizza boxes. A modular chassis cannot span multiple racks if your power or cooling is limited, so if you have any space or cooling restrictions at your facility, this is where the disaggregated router comes into play, as we'll see. A modular chassis can only run proprietary software, right? So if you have a modular chassis, that vendor is responsible for support of both the hardware and the software, and you're dependent on that router vendor for both, whereas multiple vendors are involved in the DDC and DDBR solution. A modular chassis scales up, whereas the DDC scales out. If a service provider needs more ports, more line cards can be bought and inserted into the traditional system, but with that system the growth ceiling depends on how many available slots there are on the modular chassis. One option to scale up in that system is to purchase a totally new chassis with higher port density or denser line cards and replace the old one if you don't have enough slots; an example would be replacing a 10-slot chassis with a 20-slot chassis. But in that case, you're kind of doing surgery on your existing network. DDC/DDBR has a lot of external wiring, as we'll see. Traditional routers, the refrigerator-sized routers, mainly scale up in hardware, whereas DDC/DDBR allows for best-in-breed in both hardware and software. So ISPs and CSPs are now making a significant effort to take these concepts and apply them to the routed core and network infrastructure, or the wide area network. And Figure 2, as most of you have probably recognized, shows the front and back view of the modular chassis that we're used to, a generic modular chassis routing system.
So now you'll be like, all right, Ali, enough with the acronyms and the mumbo jumbo, let's get into the meat and potatoes, right? What exactly is a disaggregated routing system? Basically, you take everything that you know of a traditional router, the route processor, fabric modules, and line cards, and they all get disaggregated onto either white box 1RU routers or commercial off-the-shelf x86 servers. If you have a fabric module, which represents the spine or backplane, you put it onto a fabric forwarder pizza box. If you have a line card, you take that concept and put it onto a packet forwarder, which represents the leaf or the cluster of line cards, on a pizza box. You take the brains of the system, the routing engine concept, and the routing stacks run on redundant servers in a cloud-native fashion, on a pizza box. So when in doubt, pizza box it up. External Ethernet switches provide the distributed communications channel on the back end and connect all of these components together.

This routing system can be implemented across multiple physical racks, because now we're connecting them all via portable pizza boxes. So if you have any proximity issues at your head end or data center, this concept makes it portable and flexible. Now you have the ability to scale vertically by adding more fabric forwarders if you need to increase the fabric throughput or redundancy, and likewise, to accommodate future port demands and growth, you can add more packet forwarders, the line cards. This combination of spine-and-leaf-based architecture and disaggregation can lead to phenomenal advantages and innovations. Now ISPs can easily change the routing network operating system vendor while using the same hardware. You have the ability to freely mix and match different white box server hardware and have custom vendor software applications or containers, which we'll talk about. This combination also promotes more open source concepts, so you have APIs now available to hook into this custom routing platform. And then some vendors take it a step further and make all of these different components seem like one virtual chassis.

Figure 3 shows a three-stage non-blocking Clos topology. The Clos topology is a data center architecture that's been around for quite a while, where you have the leaf-and-spine architecture, which is relevant to modern scalable carrier networks. It's three stages, ingress, middle, and egress, where the ingress and egress fold on top of each other. Now we're basically just taking this concept and bringing it into the core. It's now possible to divert traffic away from a segment of spines and leaves and upgrade the software or the configuration of a specific component while traffic runs across the rest of the spine-and-leaf cluster. This allows potential for seamless integration onto the existing network while maintaining a small failure domain. You can even take it a step further with an N+M design, which allows hitless maintenances and updates and increases your availability. Now, a single failure of a component that you had in your regular modular chassis router won't affect traffic in multiple directions, which reduces the impact on services. So if you had a fabric module that was having errors and affecting the line cards on your whole router, now, with this system, it's compartmentalized.
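To make the scale-out and non-blocking ideas a bit more concrete, here is a minimal, hypothetical Python sketch. The port counts and speeds are illustrative assumptions, not the parameters of any real DDC/DDBR deployment; it simply wires every packet forwarder (leaf) to every fabric forwarder (spine) and checks that each leaf's fabric-facing bandwidth covers its client-facing bandwidth.

# Illustrative sketch only: port counts and speeds are made-up assumptions,
# not the specification of any particular DDC/DDBR cluster.

def wire_cluster(num_leaves, num_spines):
    """Return the full leaf-to-spine cabling list for a folded three-stage Clos."""
    return [(f"leaf{leaf}:fabric{spine}", f"spine{spine}:port{leaf}")
            for leaf in range(num_leaves) for spine in range(num_spines)]

def is_nonblocking(num_spines, uplink_gbps, client_ports_per_leaf, client_gbps):
    """Non-blocking if each leaf's uplink bandwidth >= its client-facing bandwidth."""
    return num_spines * uplink_gbps >= client_ports_per_leaf * client_gbps

# "Scale out" by adding packet forwarders (leaves); the spine count stays fixed.
for leaves in (4, 8, 12):
    cables = wire_cluster(num_leaves=leaves, num_spines=4)
    ok = is_nonblocking(num_spines=4, uplink_gbps=400,
                        client_ports_per_leaf=16, client_gbps=100)
    print(f"{leaves} leaves x 4 spines -> {len(cables)} fabric cables, non-blocking={ok}")

Note how the fabric cable count grows with leaves times spines, which is the "spaghetti" back-end cabling trade-off discussed later, while client capacity grows simply by racking another pizza box.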
What are some of the challenges? First, the responsibility domain: you have hardware and software, so this two-vendor solution requires an understanding of which domain a problem belongs to. Is it a hardware problem or a software problem? There's a need for cooperative collaboration between your hardware and software vendors during troubleshooting of such systems so that the root cause of an issue can be reached quickly and amicably. You want to avoid the finger pointing and just have some collaboration and find the root cause. Future launches of virtual chassis in the network will be done by an orchestration system, so automation is, again, key and table stakes. A lot of the support that we need is for a comprehensive validation method: as we'll see, there's a lot of spaghetti fabric cabling on the back end, plus management of the code upgrades. If you have so many different components and you have to upgrade the code, how are you going to do that? If you have so many different components and each of them has its own device configuration, how are you going to manage that? So another important activity of the engineering and operations team is auditing the provisioned components and how they can be maintained or incorporated into the orchestration system, alleviating the need for engineers to do it manually.

So this is what the network cloud looks like, where, as we talked about, the network function is offloaded onto a dedicated NPU; a similar offload is made by SmartNIC providers, who offload that function to a data processing unit and accelerate it. As you can see here, we have the packet forwarders, the line cards that we're used to, and they connect to the fabric white boxes, which in turn connect to the x86 servers, the routing engine. And all of this is cloud based, so you'll have all of your containers in the cloud. This is a network cloud figure of the DDC/DDBR architecture that we talked about, where basically you take each of the components and put it into a distributed disaggregated chassis. This allows operators to scale out this re-architected chassis by simply adding more white boxes that perform either function.

Figure 6 shows a spine-and-leaf cluster versus what a virtual chassis within a backbone core site looks like. As you can see on the first side, in a typical spine and leaf you'll have an IS-IS and BGP mesh that needs to be maintained within the cluster and with the external core network. Each of the components in the spine and leaf is a separately managed object that has its own loopback address, its own routing protocols, its own config file. The one on the right is the virtual chassis, which eliminates the requirement for an IGP-based intra-cluster control plane, and now you only have one instance of IS-IS or BGP. So now network architects can focus on mission-critical services rather than assuring the SLAs within the cluster.

So, real-life deployment of the DDC/DDBR model: what did we see? This architecture is a field-proven concept deployed in the core and aggregation layer of the largest telcos and cable companies in the world. There are several tools and dashboards that are needed to monitor this production-grade cloud network. On the one hand you have the virtual cluster, which is easier for the NOC to manage and view the network and respond accordingly to the events within the system.
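As one small example of the kind of tooling and component auditing mentioned above, here is a hypothetical sketch that checks component software versions against an approved baseline. The inventory, roles, and version strings are invented for illustration; a real system would pull them from each node's management API or from the orchestrator itself.

# Hypothetical audit sketch: flag packet/fabric forwarders and route-engine servers
# whose software doesn't match the approved baseline. Inventory is hard-coded here;
# in practice it would come from the orchestrator or each node's management API.

BASELINE = {"packet_forwarder": "4.2.1", "fabric_forwarder": "4.2.1", "route_engine": "4.2.0"}

inventory = [
    {"name": "pf-01", "role": "packet_forwarder", "version": "4.2.1"},
    {"name": "pf-02", "role": "packet_forwarder", "version": "4.1.9"},   # drifted
    {"name": "ff-01", "role": "fabric_forwarder", "version": "4.2.1"},
    {"name": "re-01", "role": "route_engine",     "version": "4.2.0"},
]

def audit(nodes, baseline):
    """Return the nodes whose running version differs from the baseline for their role."""
    return [n for n in nodes if n["version"] != baseline[n["role"]]]

for node in audit(inventory, BASELINE):
    print(f"{node['name']} ({node['role']}) runs {node['version']}, "
          f"expected {BASELINE[node['role']]}")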
When you have this reduction in the number of nodes in the IGP domain, it reduces complexity and the load on the routers, and with that virtual chassis capability you don't need the BGP enhancements that would otherwise be required to improve recovery time during convergence. Now an abstraction layer is introduced which hides all of the complexity in the Clos network and appears as a single router, which is so familiar to network engineers.

So what is the future of routing looking like? A lot of sysadmin skills are needed now for network engineers who were used to the CLI commands of the past. With the introduction of this new DDC architecture, you have a lot more server-based components, so now you have to marry up your old UNIX server skills with your networking skills, and then see what the future looks like with more DevOps, programmability, scripting, and leveraging automation tools that utilize all of this and make the future DevOps network engineer. That's what the future is going to look like: a model-driven approach with YANG configuration and telemetry, and API-driven rather than CLI-driven configuration and telemetry. This eliminates human errors in repetitive tasks such as configuring each component. Network engineers and architects now need to learn how to operate and maintain container orchestration platforms like Kubernetes and Docker. There's the complexity of the code and scale limitations, along with reliability, among many other concerns; if you have a software code bug, it could affect both your software and your hardware. So how do you get around this? Microservices allow a relatively large application to be divided into smaller parts, each having its own autonomy. With microservices in containers, it's simpler to take advantage of the hardware, and now you can easily orchestrate services. The majority are based on a cloud-native distributed architecture.

So, some of the challenges with traditional router architecture. Just like everything, you have pros and cons for each, and you always have challenges and trade-offs. Some of the notable ones: you have a very limited number of suppliers in a market which is extremely difficult to enter and compete in, and you may have interoperability concerns across different hardware components. Like I said, as an example, if you have a fabric module that's going bad, it can affect the line cards on your whole router in a modular chassis. There are also availability concerns when replacing or upgrading a component inside the chassis: if one component or a line card is going bad on your router, you might have to take traffic off that router, and you're in a hazardous condition. The data and control plane being closely tied together leads to significant dependency on the vendor's roadmap. If you need a denser line card or you want a new software feature, you're dependent on your router vendor to provide it, and one thing may affect the other. And the best hardware vendor isn't necessarily the best software vendor, and vice versa.

So now, traditional router architecture: chassis limitations and trade-offs. You need to have resiliency to maintain uninterrupted service for all your customers. You also need significant computing and storage capability to store all your v4 and v6 routing tables. This requires a large TCAM and strong computing capability, with deep buffers and enough port density and capacity to support the growth of your customers.
So again, like we said, the growth ceiling that we have for large port density is the number of slots for your line cards and the backplane switching capacity, which is limited to a fixed number of slots on a single chassis. All of this needs to be maintained with high availability at all levels: control, processing, switching fabric, cooling, and power. A lot of sophisticated control plane features would be needed, such as non-stop routing or in-service software upgrade.

On the other hand, you have challenges and trade-offs with the spine-and-leaf architecture. Now you have packet forwarders, which enable operators to utilize any port on the white box for the service, regardless of the implementation area. You have multiple network operating system vendors in the ecosystem that can install their software onto the open networking hardware. The clusters should be built in a way where your fabric bandwidth is equal to or over-provisioned relative to the leaf ports, right? So the installation roadmap should be long enough that you don't have to go back and re-plumb any of that. Since there's a lot of back-end fabric fiber or Ethernet connections, which we fondly refer to as spaghetti wiring, that must be installed, that's a lot of upfront cost and effort you need to put in for this architecture. So you perform the physical install work up front for a longer roadmap: when you forecast growth and install with the mindset of room to grow, this eliminates the need to come back, install additional fabric forwarders, and disrupt the system later. Also, unlike scaling up in a single-chassis router, where, when the system grows beyond its size, all the client ports have to be re-plumbed to another chassis, the disaggregated model scales out and consists of only adding another packet forwarder or two. This positions the network to handle any unforecasted growth surges, such as a lot of networks experienced during the COVID-19 pandemic. This basically follows the old-school rotisserie commercial, if you remember it: you just set it and forget it.

Figure 7 shows a multi-service network cloud architecture, which basically combines the networking and computing resources over a shared cloud infrastructure and allows operators to put greater functionality at the network edge even with space and power limitations. So now any port on the cluster designated for a specific function can be used to enable the service. In the past, we had core, edge, and backbone nodes that were separate units; now they can be aggregated onto a unified cloud-native infrastructure.

So now, with these DDC and virtual chassis solutions, and with the proliferation of the Internet of Things, it's become apparent that it is beyond the bounds of possibility for humans to manage all of this manually. You must automate configuration management and maintenance as much as possible to streamline deployment. Traditionally, each network device is closed or locked from installing third-party software and, as an example, only has a command line interface. Although CLI is still well known and may even be the preferred method of access for some network professionals, it's clear now that it doesn't offer the flexibility that's required to truly manage and operate this brave new Internet. You need to enable the system to avoid human error, since configuration files can be prone to typos and errors.
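As a toy illustration of taking the human out of that loop, here is a minimal, hypothetical Python sketch that stamps out per-node base configs from a single template and then pushes one of them over NETCONF (one of the APIs discussed next) using the open-source ncclient library. The node names, addressing plan, credentials, and YANG payload are invented for this example and would differ for any real network operating system.

# Toy sketch only: node names, addressing, credentials, and the YANG payload are
# invented for illustration; a real deployment would use the orchestration system
# and the vendor's supported models.
import ipaddress
from ncclient import manager   # open-source NETCONF client

TEMPLATE = """hostname {name}
interface Loopback0
 ip address {loopback}/32
 description {name} cluster management loopback
"""

def generate_configs(num_packet_forwarders, num_fabric_forwarders,
                     loopback_pool="192.0.2.0/24"):
    """Render one config per node from a shared template and a loopback pool."""
    addrs = ipaddress.ip_network(loopback_pool).hosts()
    names = [f"pf-{i:02d}" for i in range(num_packet_forwarders)] + \
            [f"ff-{i:02d}" for i in range(num_fabric_forwarders)]
    return {name: TEMPLATE.format(name=name, loopback=next(addrs)) for name in names}

def push_loopback_description(name, mgmt_addr):
    """Push a small ietf-interfaces snippet over NETCONF (assumes a candidate datastore)."""
    payload = f"""
    <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
        <interface>
          <name>Loopback0</name>
          <description>{name} cluster management loopback</description>
        </interface>
      </interfaces>
    </config>"""
    with manager.connect(host=mgmt_addr, port=830, username="admin",
                         password="admin", hostkey_verify=False) as m:
        m.edit_config(target="candidate", config=payload)   # stage the change
        m.commit()                                           # then commit atomically

configs = generate_configs(40, 10)        # roughly a 50-node cluster
print(f"Generated {len(configs)} config files; sample:\n{configs['pf-00']}")
# push_loopback_description("pf-00", "198.51.100.10")   # uncomment with real targets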
Imagine operating a disaggregated cluster comprised of, say, 50 or so components or nodes. Each node needs its own loopback for management and its own configuration file. In the past, with traditional routers, the network engineer had to create the 50 separate config files and load them onto each component in the cluster. But now engineers should get familiar with network automation tools and methodologies that offer an API and start automating the configuration of each of these nodes in the spine-and-leaf cluster. One of the most optimal ways to push and retrieve the configuration from a device is via HTTP-based APIs, both RESTful and non-RESTful. Another is NETCONF, which is a network management protocol conceived specifically for configuration management and retrieving operational state data. With an orchestration, automation, and analytics system, you can now treat all of the nodes like a virtual cluster. This kind of gets you the best of both worlds, where you have the benefits of a traditional system with the flexibility of spine and leaf. In order to leverage these tools, network professionals now need to understand data formats like XML, JSON, and YAML, and a data modeling language like YANG.

With all of these components, how are you going to monitor it? Now that you have so many back-end connections and so many different components, your orchestration tool needs to automate event and KPI monitoring for the cluster: topology node states, whether all of your connections are up, any dirty links, any ports that are bouncing, and formation and connectivity across these clusters. Now that you have the server aspect in your networking infrastructure, what does each component look like as far as CPU and memory? Do you have enough of each? What do the environmentals of each box look like? Are there any power issues? Ports and interfaces, like we talked about. And on the software side, the base components: what does your base OS look like? Are there any firmware issues, any process issues, any issues with your containers or microservices? These are all things to be aware of in this new network architecture. And as Elad also noted, how are you going to monitor all this? You've now moved from SNMP-based polling to streaming telemetry: you want to know as soon as possible whenever there's an issue with your components or your infrastructure. And you'll need to scale; the collector should not need to poll each router individually. You can do that a lot smarter and define different sets of counters for collection from specific routers, as sketched below.

So what does the future look like? The future of disaggregated solutions. The only constant in life is change, if you've ever heard that from Heraclitus. The pace of innovation must keep up with demand and new emergent use cases. My family and I recently moved to a new home and had to get all new appliances and furnish it, and I was so surprised: everything has an IP, right? You have laundry machines, smart appliances, refrigerators, stoves, microwaves, thermostats, garage openers. Even the front door keypad of your home has an IP now. So with the growth and innovation of the Internet of Things, this kind of comes into play too.
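Circling back to the monitoring point above, here is a minimal sketch of what "different sets of counters for specific routers" could look like as data. The sensor paths and sample intervals are illustrative assumptions, since the exact paths depend on the YANG models the network operating system supports.

# Illustrative sketch: per-role streaming-telemetry subscription plans instead of
# uniform SNMP polling. Sensor paths and intervals are assumptions for illustration.

ROLE_SENSORS = {
    "packet_forwarder": [
        ("/interfaces/interface/state/counters", 10),      # client and fabric port counters
        ("/components/component/state/temperature", 30),
    ],
    "fabric_forwarder": [
        ("/interfaces/interface/state/counters", 10),
        ("/qos/interfaces/interface/output/queues", 10),   # watch for fabric drops
    ],
    "route_engine": [
        ("/system/processes/process/state", 30),           # container / microservice health
        ("/system/memory/state", 30),
    ],
}

NODES = {"pf-00": "packet_forwarder", "ff-00": "fabric_forwarder", "re-00": "route_engine"}

def subscription_plan(nodes, role_sensors):
    """Build a per-node list of (path, sample-interval-in-seconds) subscriptions."""
    return {name: role_sensors[role] for name, role in nodes.items()}

for node, subs in subscription_plan(NODES, ROLE_SENSORS).items():
    for path, interval in subs:
        print(f"{node}: subscribe {path} every {interval}s")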
Network operators are also continuously seeking technologies to reduce the carbon footprint. With this disaggregated solution, we were able to reduce power consumption by 48% compared to traditional routers. We also increased port capacity by two and a half times compared to traditional routers. What's another way? You can also decommission a disaggregated cluster and reuse its hardware in a developing country, where it can be practical for many years to come; that paves the way for more sustainability. Telecommunication companies, along with the vendors, will keep driving innovation in disaggregated network solutions, and essential future developments such as multi-service port functionality on top of shared pool resources, open offload, and data center sustainability will be key for the future.

So, in conclusion: traffic growth over the last two decades requires all of us to explore alternative routing solutions. The disaggregated approach brings cost reduction, removal of vendor lock-in, and service innovation; you also get service agility and faster innovation, and all of this will be supported by cloud orchestration for streamlined cluster management, device configuration, and config file retrieval. The future looks pretty good when you include open offload and think about sustainability. So it seems that we're getting closer to a more complete, automated, portable, and easily scalable network for the future.

I've attached some bibliography and references. I'd like to bring your attention to item number 33, which is the white paper for this discussion that we presented and created at last year's SCTE conference in September. So I urge you to please take a look at that; we go into a lot more detail there, and it may answer some of the questions that you may have. I want to give a big thank you to the below individuals for their contributions: Idris Offerov, delivery team leader at DriveNets and also the co-author of this white paper; Manan Venkatesen and Tony Tauber, distinguished engineers at Comcast Cable; and Bob Gaydos, Senior Fellow, and Larry Wolcott, Fellow, at Comcast Cable. And a big thank you to every one of you for taking the time and having the patience to listen. Thank you very much. Any questions? We'll take a few minutes for questions.

Hello, my name is Kemal Sanjta and I'm a principal Internet analyst at ThousandEyes. Thanks for your presentation. We've observed many benefits of using Clos-based fabrics over the years, and limiting blast radius is notably one of the most important ones, so thanks for calling that out. However, I have several questions regarding the architecture itself and the state of maturity of similar solutions. What's the current state of RIB and FIB scalability on the leaves? The thing is, when I used to work for some of the hyperscalers, we tended to use these topologies pretty extensively. However, the problem was always putting them on the edge, right? Because the RIB at the moment is about 1 million prefixes, and none of the leaf white boxes were capable of taking that many prefixes from the transit providers.

That’s right.

And now bear in mind that you usually want to have at least two of them on the edge. That would be the first question. So that begs the question: are you using a vendor for the leaf? Because only vendor solutions were capable of having pizza-sized boxes with that kind of FIB and RIB. And lastly, are you using this in the BGP-free core, in the MPLS environments where these solutions could drive benefit? And, you know, there are essentially challenges with this setup, part of which is, if you don't work with the vendor, how are you doing RMAs? What are the operational challenges from that perspective, and how do you replace them? Thank you.

Sure. So if I can answer the first part, and you can just help me with the rest. With the RIB and FIB scalability in this architecture, you can now offload that onto pizza boxes, right? So you use, for example, external route reflectors, whose only job is to take in the routes and reflect them to each of those leaves. Now the leaves have much less processing to do, and you basically just offload that RIB and FIB to the external route reflectors. So that'll get you to the part that you want. The other side of it, as you said, with the different vendors: with us, you do need to work with each vendor for any hardware RMAs. The good thing is, if you have a third-party provider that can facilitate that for you, great; if not, you do have to work with that vendor, but you kind of have that even in your current topology if you have a multi-vendor network, where you have to work with different vendors and make sure they collaborate.

And are you using x86 hardware for the route reflectors, or are you deciding to use chassis-based ones from the vendors?

We're using a combination of both, but we're kind of testing both solutions. So we do have the legacy vendor solution, but now we're also trialing the new server-based one as well.

Thank you so much.

Thank you.

Hi, thank you. Chris Grundemann with FullCtl. Great talk. I agree with a lot of what you said there.

Thank you.

One thing I'd like to understand: obviously it sounds like a lot of this is coming from practical experience, and I want to know if you could tell us roughly what the percentage split is between chassis-based appliances running your network today versus these new spine-leaf clusters. And then the follow-up question to that: I'm curious as to the migration path. Obviously, replacing a chassis with a bigger chassis is a fairly well-worn path; we all kind of know how to do that. Pulling out a chassis and replacing it with a spine-leaf cluster may be a little bit different. So I wonder if there are any complications or other aspects to think about in that kind of migration plan.

Sure. Yep. So I started in operations, so I always try to put my operations hat on and see how we'd do that. With our footprint of, say, about 28 networks, we have about three networks which are the spine and leaf with separate components, and then we're trialing one network with this virtual chassis cluster. The good part with ours was that all of these were net-new networks, where we just took the existing connections and migrated them over. So it was very methodical and surgical: we take one pair of rings at each side, plumb them onto the new network, make sure all the routing is good, all the prefixes are good, all the services, and then continue with that methodical approach. We had a field trial, I think, soaking for about three or four weeks, right? We just wanted to make sure we had a good proof of concept before we start mass-trialing; we even took different types of networks, service-based access networks, the core backbone. And I believe now we have close to 3 terabits of traffic on the virtual chassis type. And we tried to do it as methodically as possible. So you still take the old kind of migration strategy, just plumb it into the spine and leaf. It just takes a lot of validations.

Sure.

Thanks.

And then do you see this as kind of the future of networking for Comcast? Is this something that eventually 10, 20 years from now we’re going to see all virtual chassis or will it be a hybrid forever?

I think it'll take some time, right? Because everything has its challenges and trade-offs, so it's basically what you're willing to sustain. But personally, I see this as the future, where you'll have a lot more virtualization, especially like what Elad talked about in the keynote, doing that already on the CMTS side. You kind of see that going into the core as well.

Thanks.

Thank you. Hi, one from the outside. Sure. Matt P. Tech, unaffiliated. On slide 16, can you explain how your virtual chassis handles link failures and rerouting around failed pizza boxes within the virtual chassis without running some form of IGP? Or is it simply a separate IGP, isolated from the rest of the backbone but still present and active? Thank you.

Very easy question, they answer themselves. So it is the latter: there is a back-end connection with a separate IGP from the front-end IGP.

Thank you. This will probably be a simple one as well. I mean, this is SD-WAN on the LAN, really, when you get into the need for automation, since you've got so many more devices to manage, and that streamlines the implementation and reduces human error. But it also gives you a much larger footgun, because you can now automate so much more of your network in one shot. Have network simulation techniques kept up with the ability to automate the network, to essentially prove this out before you commit your network to it and load that ever-larger footgun?

Yeah, so I personally think the orchestrator, or your automation system, will come into play there, right? If you have a staging environment where you can stage those changes or config files, you can test that everything works, say on a small section of your network, make sure there are no issues, and then you can blast it out. So just like anything else, you have to reduce your blast radius. With this concept, it is kind of like an all-or-nothing if you're going to deploy it everywhere, so just like anything else, you take a staged approach with testing, confirm everything's working, and then deploy it to the rest of your network.

Thank you.

Hi, Steve Ulrich. I was wondering if you could comment on the state of silicon diversity in the fabric. You mentioned no vendor lock-in; how does that work at the fabric level? Do you have multiple silicon implementations of a fabric?

Oh, we do. So if I understand your question correctly, we do have multiple vendors that offer that, but in our solution we picked one vendor to try to keep it consistent.

I was specifically curious as to whether or not you had multiple silicon implementations from different vendors for the fabric. Fabrics tend to be fairly proprietary things, so I was just curious how you were achieving diversity at the fabric level.

So again, it'll just depend on that type of cluster, right? You kind of lock in with the hardware vendor, and then they work with the other vendors to assimilate that.

So are you taking steps to standardize a fabric protocol or something like that?

Again, it'll depend on that specific hardware. A lot of their stuff is proprietary, so if you're wanting, say, a Jericho chipset versus other chipsets, it just depends on your decision.

Thank you. Thanks.

Hi, I'm Bradley Shaw, VP of Network Engineering at NextLink. I had a question about how far out into the field you would push a virtual chassis type solution. Is that strictly in the DC, or do you go out to, like, your POP nodes or your access nodes where you're pulling in customer traffic, that kind of thing?

So, on the core: right now we're bringing it into the core.
We can even get it more to the access side with the multi-service capability. But, like Elad mentioned in his keynote, now we're taking that virtualization even into the nodes where the CMTSs at the head ends are, and even past that with the digital nodes. So a lot of that virtualization is getting closer and closer to the edge.

Okay. And then I had another question. Do you have, like, a list of examples of some vendors that are heavily into the DDC/DDBR type stuff?

Yeah, yeah. So UfiSpace is one, HP, Intel. You'll see a lot of the traditional server hardware vendors getting into that space.

Okay, cool. Thank you.

Thank you.

Good day. Scott Johnson. First, congratulations on moving to solutions that are not vendor locked in.

Thank you.

I noticed that you specify x86 commodity servers for these applications. Recently in the IETF there has been significant analysis of the power usage of our core infrastructure, and I might suggest that you may wish to explore other processor architectures which are more power-efficient for these jobs, such as ARM or the emerging RISC-V. Thank you.

Thank you. Well, great. All right. Thank you, everyone. Appreciate it.
