Claims to be the first AI hyperscaler to do so
CoreWeave has made Nvidia RTX Pro 6000 Blackwell server-based instances generally available on its AI cloud platform.
Announced on July 9, the instances join the Nvidia GB200 NVL72 system and Nvidia HGX B200 platform already on offer; CoreWeave claims to be the first cloud provider to make them available.
According to CoreWeave, the RTX Pro 6000 instances deliver up to 5.6x faster LLM inference and 3.5x faster text-to-video generation than the previous generation, and are well suited to running inference on models of up to 70 billion parameters.
Each instance features eight RTX Pro 6000 GPUs, 128 Intel Emerald Rapids vCPUs, 1TB of system RAM, 100Gbps of networking throughput, and 7.68TB of local NVMe storage.
“CoreWeave is built to move at the speed of innovation, and with the new RTX PRO 6000-based instances, we’re once again first to bring advanced AI and graphics technology to the cloud,” said Peter Salanki, co-founder and CTO of CoreWeave.
“This is a major step forward for customers building the future of AI, as it gives them the ability to optimize and scale on GPU instances that are ideal for their applications, and a testament to the speed and reliability of our AI cloud platform.”
Cirrascale Cloud Services has previously announced plans for an inference-as-a-service platform that will use Nvidia B200 and Nvidia RTX Pro 6000 Blackwell Server Edition GPUs.
Earlier this month, Dell shipped the first Nvidia GB300 NVL72 rack-scale solution to CoreWeave. CoreWeave plans to bring GB300 servers online throughout 2025.
CoreWeave has had a busy July. After having an earlier bid rejected, the AI cloud provider acquired data center provider Core Scientific for $9bn.
https://www.datacenterdynamics.com/en/news/coreweave-now-offers-nvidia-rtx-pro-6000-blackwell-servers/

