What Are Neocloud Providers?
The term “neocloud” refers to a cloud provider that primarily offers GPU-as-a-Service (GPUaaS). With the artificial intelligence (AI) gold rush, a new wave of cloud providers has emerged with a dedicated focus on GPUaaS. While the exact origin of the word “neocloud” and its initial usage are unclear, it has progressively become an industry term for cloud providers specializing exclusively in GPUaaS.
Traditional cloud providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud offer a comprehensive portfolio of cloud services. Unlike these giants, neoclouds concentrate on providing infrastructure specifically tailored to the demanding requirements of data-intensive workloads – particularly those related to AI, machine learning (ML), and analytics.
Initially, these neoclouds raced to build GPU clouds and stack hardware, hoping to deliver bare infrastructure and earn “easy money.” Many, however, underestimated the challenge of delivering consistent performance. The providers that remained understood that high-performance computing (HPC) and AI teams will not trust unoptimized infrastructure – and that they will pay a premium for reliable, stable, and high-performance AI infrastructure.
Key Characteristics of Neoclouds
- GPU infrastructure: Neoclouds focus heavily on providing access to powerful and often the latest generation of GPUs from Nvidia (like Hopper and Blackwell) and other vendors, which are essential for demanding AI and data-intensive tasks.
- Optimized for AI: Unlike hyperscalers, neoclouds build their infrastructure, networking, and software stacks specifically to meet the unique demands of AI-specific tasks, offering better performance and efficiency.
- Performance and innovation: Neoclouds are measured by their ability to deliver the most reliable, stable, and high-performance AI infrastructure. The intense competition in this space pushes these providers to utilize the most innovative solutions, whether the newest GPUs or most advanced software.
- Flexible business models: Neoclouds employ pay-as-you-go pricing models that eliminate the need for large capital expenditures on GPU hardware.
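The pay-as-you-go trade-off above can be made concrete with a simple break-even calculation. The sketch below is purely illustrative – the rental rate, capex, and opex figures are hypothetical assumptions, not quoted prices from any provider:

```python
# Illustrative break-even sketch: renting GPUs pay-as-you-go vs. buying hardware.
# All dollar figures are hypothetical assumptions for illustration only.

def rental_cost(hours: float, rate_per_gpu_hour: float, gpus: int) -> float:
    """Total pay-as-you-go cost for a given usage level."""
    return hours * rate_per_gpu_hour * gpus

def ownership_cost(hours: float, capex_per_gpu: float, gpus: int,
                   opex_per_gpu_hour: float) -> float:
    """Up-front hardware cost plus ongoing power/operations cost."""
    return capex_per_gpu * gpus + hours * opex_per_gpu_hour * gpus

def break_even_hours(rate: float, capex: float, opex_rate: float) -> float:
    """Usage (hours per GPU) at which owning becomes cheaper than renting."""
    return capex / (rate - opex_rate)

# Hypothetical: $2.50/GPU-hour rental, $30,000 capex per GPU, $0.50/GPU-hour opex
hours = break_even_hours(rate=2.50, capex=30_000, opex_rate=0.50)
print(f"Break-even at ~{hours:,.0f} GPU-hours per GPU "
      f"(~{hours / 24 / 365:.1f} years of 24/7 utilization)")
```

Under these assumed numbers, ownership only pays off after well over a year of near-continuous utilization – which is why bursty or experimental AI workloads tend to favor the rental model.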
Neocloud Market Leaders
- CoreWeave: the largest player in the space, specializing in high-performance GPU resources optimized for AI workloads and positioning itself as an alternative to traditional cloud providers
- Lambda Labs: one of the few offering both cloud and on-premises GPU solutions, providing GPU cloud services tailored specifically for deep learning and AI research, backed by substantial industry investments
- Crusoe: offering sustainable GPU computing powered by stranded and renewable energy sources, partnering with tech giants on AI-focused data centers
- WhiteFiber: GPU cloud provider by Bit Digital, optimizing performance across the entire stack – from data centers to compute, storage, networking, and backbone – rather than delivering infrastructure layer by layer
- Nebius: emerging from Yandex operations, a rapidly growing European-based provider leveraging Nvidia GPUs and aiming to challenge traditional cloud providers
- Together AI: focusing on open-source large language models (LLMs) and inference-optimized cloud infrastructure
Why Businesses Need Neoclouds
Neoclouds address a crucial gap in modern computing environments, where data-intensive applications require significant computational power. General-purpose infrastructure from traditional cloud providers struggles to efficiently process AI workloads, complex analytics, and real-time simulations, resulting in performance bottlenecks and inefficiencies. Neoclouds – by leveraging purpose-built infrastructure, networking, and software stacks – resolve these challenges, providing businesses with the infrastructure necessary to run their AI-intensive applications effectively.
Main Challenges of Neoclouds
- High GPU costs: Acquiring the industry’s most powerful GPUs – which also depreciate quickly with each new generation – requires significant capital, creating a high barrier to entry for new providers.
- High energy costs: Power-hungry AI workloads drive high electricity costs, demanding sustainable solutions.
- Multi-tenant environment: Neoclouds’ success depends on delivering peak performance, measured by optimal job completion times (JCTs). This requires predictable, lossless GPU connectivity, a major challenge in dynamic multi-tenant environments.
- Market competition: Price wars, triggered by a growing number of neoclouds and ongoing competition among hyperscalers, are squeezing margins and pushing neoclouds to be laser-focused on operational efficiency.
- Vendor lock-in and ecosystem limitations: Nvidia dominates GPU, networking, and software supply, creating vendor lock-in. Ongoing supply shortages, particularly impacting smaller neoclouds, not only delay new cluster deployments but also limit their negotiation power and hinder cost control.
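The multi-tenancy challenge above can be illustrated with a toy JCT model: a training job splits roughly into compute phases and communication phases, and congestion from a “noisy neighbor” on a shared fabric stretches only the communication phase. The numbers below are hypothetical, and real JCT depends on the model, parallelism strategy, and fabric:

```python
# Toy model of AI job completion time (JCT) under fabric congestion.
# Hypothetical figures; a real JCT breakdown is workload-specific.

def jct_hours(compute_hours: float, comm_hours: float,
              congestion_slowdown: float = 1.0) -> float:
    """JCT = compute time + communication time, where congestion
    (e.g. a noisy neighbor in a multi-tenant fabric) multiplies
    only the communication phase."""
    return compute_hours + comm_hours * congestion_slowdown

ideal = jct_hours(compute_hours=70.0, comm_hours=30.0)        # lossless fabric
congested = jct_hours(70.0, 30.0, congestion_slowdown=2.0)    # 2x slower collectives
print(f"ideal JCT: {ideal:.0f}h, congested JCT: {congested:.0f}h "
      f"({(congested / ideal - 1) * 100:.0f}% longer)")
```

Even though compute capacity is unchanged, doubling communication time in this sketch lengthens the whole job by 30% – which is why predictable, lossless connectivity is treated as a revenue-critical metric for neoclouds.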
DriveNets and Neoclouds
Neoclouds represent one of the primary focus areas for DriveNets Network Cloud-AI. DriveNets provides a high-performance, lossless Ethernet solution specifically designed for AI networking backend fabrics, which are critical for neocloud providers. With DriveNets’ solution, neoclouds can effectively mitigate many of their infrastructure challenges, leveraging the best of both worlds – combining maximum GPU performance with the advantages of an open-standard Ethernet-based solution.
DriveNets offers neoclouds the following main benefits:
- High performance: proven ability to improve JCT over alternative Ethernet solutions in both neocloud and hyperscaler networks
- Unified fabric: seamless integration of compute and storage under a single fabric, simplifying operations compared to traditional segmented AI infrastructures
- Rapid deployment: high performance from day one with minimal network fine-tuning, accelerating time-to-value
- Open architecture: built on the widely adopted Ethernet standard and compatible with any NIC, GPU, and optics hardware
- Multi-tenant environment: inherent traffic isolation, effectively mitigating common multi-tenancy issues like the “noisy neighbor” effect
- End-to-end support: DriveNets Infrastructure Services (DIS) team helping customers quickly build GPU clusters – from hardware selection and procurement to installation and fine-tuning
Conclusion
Neoclouds represent a specialized category of cloud providers focusing specifically on GPU-as-a-Service to meet the demanding requirements of AI and data-intensive applications. While they address performance limitations faced by traditional cloud providers, neoclouds encounter several practical challenges – including high GPU acquisition costs, significant energy consumption, complexities in multi-tenant environments, increased market competition, and dependency on a limited number of vendors (like Nvidia).
DriveNets Network Cloud-AI offers an Ethernet-based networking solution to mitigate these infrastructure challenges by providing predictable, high-performance GPU connectivity based on fabric-scheduled architecture.
Read more about DriveNets Network Cloud-AI
Additional AI Workload Resources
White Paper
- Utilizing Distributed Disaggregated Chassis (DDC) for Back-End AI Networking Fabric
- Resolving the AI Back-End Network Bottleneck with Network Cloud-AI
- Meeting the Challenges of the AI-Enabled Datacenter: Reduce Job Completion Time in AI Clusters
- Analysis of Data Traffic Distribution Solutions
Blog
- Why InfiniBand Falls Short of Ethernet for AI Networking
- Reduce AI Job Completion Time with DDC
- Can You Really Compare Clos to Chassis when running AI applications?
- Network Cloud-AI – From Theory to Practice
Video