Initial customers include IBM, Mistral AI and Cohere
LIVINGSTON, N.J., April 15, 2025 /PRNewswire/ — CoreWeave, the AI Hyperscaler™, today announced that Cohere, IBM and Mistral AI are the first customers to gain access to NVIDIA GB200 NVL72 rack-scale systems and CoreWeave’s full stack of cloud services — a combination designed to advance AI model development and deployment.
AI innovators across enterprises and other organizations now have access to advanced networking and NVIDIA Grace Blackwell Superchips purpose-built for reasoning and agentic AI, underscoring CoreWeave’s consistent record of being among the first to market with advanced AI cloud solutions.
“CoreWeave is built to move faster – and time and again, we’ve proven it by being first to operationalize the most advanced systems at scale,” said Michael Intrator, Co-Founder and Chief Executive Officer of CoreWeave. “Today is a testament to our engineering prowess and velocity, as well as our relentless focus on enabling the next generation of AI. We are thrilled to see visionary companies already achieving new breakthroughs on our platform. By delivering the most advanced compute resources at scale, CoreWeave empowers enterprise and AI lab customers to innovate faster and deploy AI solutions that were once out of reach.”
“Enterprises and organizations around the world are racing to turn reasoning models into agentic AI applications that will transform the way people work and play,” said Ian Buck, vice president of Hyperscale and HPC at NVIDIA. “CoreWeave’s rapid deployment of NVIDIA GB200 systems delivers the AI infrastructure and software that are making AI factories a reality.”
CoreWeave offers advanced AI cloud solutions while maximizing efficiency and breaking performance records. The company recently achieved a new industry record in AI inference with NVIDIA GB200 Grace Blackwell Superchips, reported in the latest MLPerf v5.0 results. MLPerf Inference is an industry-standard suite for measuring machine learning performance across realistic deployment scenarios.
Last year, the company was among the first to offer NVIDIA H100 and NVIDIA H200 GPUs, and was one of the first to demo NVIDIA GB200 NVL72 systems.
CoreWeave’s portfolio of cloud services is optimized for NVIDIA GB200 NVL72, offering customers performance and reliability through CoreWeave Kubernetes Service, Slurm on Kubernetes (SUNK), CoreWeave Mission Control, and more. CoreWeave’s NVIDIA Blackwell-accelerated instances scale up to 110,000 Blackwell GPUs interconnected with NVIDIA Quantum-2 InfiniBand networking.
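For context on how GPU capacity is typically consumed on a Kubernetes-based cloud such as this, the short sketch below requests NVIDIA GPUs for a single pod using the official Kubernetes Python client. It is a generic illustration rather than a CoreWeave-specific API: the namespace, container image, pod name, and GPU count are placeholder assumptions, and it presumes a cluster (for example, one provisioned through CoreWeave Kubernetes Service) where the NVIDIA device plugin exposes GPUs as the "nvidia.com/gpu" resource.

# Minimal sketch: requesting NVIDIA GPUs from a Kubernetes cluster via the
# official Python client. Generic Kubernetes usage, not a CoreWeave-specific
# API; namespace, image, pod name, and GPU count are illustrative placeholders.
from kubernetes import client, config

def launch_gpu_pod(namespace: str = "default", gpus: int = 8) -> None:
    # Load credentials from the local kubeconfig (e.g. one issued for a
    # managed Kubernetes cluster).
    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="cuda-check",
                    image="nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04",  # placeholder image
                    command=["nvidia-smi"],
                    resources=client.V1ResourceRequirements(
                        # The NVIDIA device plugin advertises GPUs as the
                        # extended resource "nvidia.com/gpu".
                        limits={"nvidia.com/gpu": str(gpus)}
                    ),
                )
            ],
        ),
    )
    # Submit the pod; the scheduler places it on a node with free GPUs.
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)

if __name__ == "__main__":
    launch_gpu_pod()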
In addition to IBM, Mistral AI, and Cohere, CoreWeave recently announced a multi-year contract with OpenAI, creator of ChatGPT.
About CoreWeave
CoreWeave, the AI Hyperscaler™, delivers a cloud platform of cutting-edge software powering the next wave of AI. The company’s technology provides enterprises and leading AI labs with cloud solutions for accelerated computing. Since 2017, CoreWeave has operated a growing footprint of data centers across the US and Europe. CoreWeave was ranked as one of the TIME100 most influential companies and featured on the Forbes Cloud 100 ranking in 2024. Learn more at www.coreweave.com.