Dell launches powerful AI supercomputing system with Nvidia’s most advanced superchips

CoreWeave’s new GB300-powered clusters enhance AI model training and inference, expanding performance for top clients like OpenAI.

American tech giant Dell recently announced that it has delivered the industry’s first systems built on the NVIDIA GB300 platform to CoreWeave, an AI cloud service provider (CSP). CoreWeave also confirmed that it has deployed the Blackwell Ultra-based cluster in partnership with its data center hosting provider, Switch.

The companies’ initial rollout includes Dell Integrated Racks with 72 Blackwell Ultra GPUs, 36 Arm-based 72-core Grace CPUs, and 36 BlueField DPUs per rack.

These systems are designed to maximize training and inference performance, and their high power consumption means they must be liquid cooled.

“Dell’s delivery of Nvidia GB300-powered solutions is more than a milestone,” a statement from the company said.

“It reflects the trust our customers and partners continue to place in our expertise. By seamlessly engineering the compute, the network and the storage under one roof and fine-tuning with integration and deployment services, we help our customers move at unprecedented speed and scale,” it continued.

How does CoreWeave benefit from the deployment?

Using the latest GB300 series chips allows CoreWeave to handle more large language model training, reasoning, and inference workloads. As the company deploys additional GB300 NVL72-based racks, total performance is expected to scale accordingly.

CoreWeave also counts leading companies such as OpenAI among its customers. They aim to ‘develop and deploy larger, more complex AI models that are exponentially faster than ever before.’

Meanwhile, the new chips introduced by NVIDIA improve on its existing Grace Blackwell systems.

Addressing the widening gap

The GB300 NVL72 deployment highlights the widening gap in AI infrastructure, where access to top-tier chips offers a major competitive advantage. Training compute used by leading AI models has reportedly doubled roughly every 3.4 months, while advanced processors have cut costs by as much as 99.5%.
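To put the 3.4-month doubling figure in perspective, a short sketch of the implied compound growth (illustrative arithmetic only; the function name and the 3.4-month default are taken from the figure cited above, not from any official source):

```python
# Illustrative only: growth factor implied by a fixed compute-doubling period.
def growth_over(months: float, doubling_period: float = 3.4) -> float:
    """Return the multiplicative growth factor over `months`,
    assuming capacity doubles every `doubling_period` months."""
    return 2 ** (months / doubling_period)

print(f"1 year:  {growth_over(12):.1f}x")   # roughly 11.5x
print(f"2 years: {growth_over(24):.1f}x")   # roughly 133x
```

At that pace, a cluster generation that leads by even a few months compounds into a large effective capacity advantage, which is one reason early access to GB300-class hardware matters.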

This shift comes amid tighter U.S. export controls, especially on high-end AI chips sent to certain markets, impacting global AI growth. Nvidia’s $4.5 billion inventory charge due to China trade restrictions underscores the stakes.

Early adopters of next-gen chips gain a crucial edge, as each generation supports larger, more complex models—essential as global data volume surges toward 175 zettabytes by 2025.

A few observations

It’s interesting to observe that Dell and CoreWeave are deploying the GB300 NVL72 racks merely seven months after they deployed the first GB200 NVL72 machines. This raises questions about how long the original GB200 platform will stay relevant, especially since it was slightly delayed.

For cloud service providers, it now makes more sense to invest in the more powerful Blackwell Ultra systems. Strong demand may be building for Blackwell Ultra, which could lead to even higher sales for NVIDIA in the second half of the year.

CoreWeave is also focused on renting cloud computing capacity to companies that need these powerful NVIDIA chips to train and run their AI software.

https://interestingengineering.com/innovation/dell-supercomputing-system-with-nvidias-superchips