- Mistral AI, Nebius, Nscale, Firebird, Fluidstack, Hydra Host, Scaleway and Together AI — Along With AWS and Microsoft Azure — Bring Compute Resources to DGX Cloud Lepton Marketplace to Meet AI Demand
- Hugging Face Integrates DGX Cloud Lepton Into Training Cluster as a Service, Expanding AI Researcher Access to Scalable Compute for Model Training
- NVIDIA and Leading European Venture Capitalists Offer Marketplace Credits to Portfolio Companies to Accelerate Startup Ecosystem
NVIDIA GTC Paris at VivaTech — NVIDIA today announced the expansion of NVIDIA DGX Cloud Lepton™ — an AI platform featuring a global compute marketplace that connects developers building agentic and physical AI applications — with GPUs now available from a growing network of cloud providers.
Mistral AI, Nebius, Nscale, Firebird, Fluidstack, Hydra Host, Scaleway and Together AI are now contributing NVIDIA Blackwell and other NVIDIA architecture GPUs to the marketplace, expanding regional access to high-performance compute. AWS and Microsoft Azure will be the first large-scale cloud providers to participate in DGX Cloud Lepton. These companies join CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda and Yotta Data Services in the marketplace.
To make accelerated computing more accessible to the global AI community, Hugging Face is introducing Training Cluster as a Service. This new offering integrates with DGX Cloud Lepton to seamlessly connect AI researchers and developers building foundation models with the NVIDIA compute ecosystem.
NVIDIA is also working with leading European venture capital firms Accel, Elaia, Partech and Sofinnova Partners to offer DGX Cloud Lepton marketplace credits to portfolio companies, enabling startups to access accelerated computing resources and scale regional development.
“DGX Cloud Lepton is connecting Europe’s developers to a global AI infrastructure,” said Jensen Huang, founder and CEO of NVIDIA. “With partners across the region, we’re building a network of AI factories that developers, researchers and enterprises can harness to scale local breakthroughs into global innovation.”
DGX Cloud Lepton simplifies the process of accessing reliable, high-performance GPU resources within specific regions by unifying cloud AI services and GPU capacity from across the NVIDIA compute ecosystem onto a single platform. This enables developers to keep their data local, supporting data governance and sovereign AI requirements.
In addition, by integrating with the NVIDIA software suite — including NVIDIA NIM™ and NeMo™ microservices and NVIDIA Cloud Functions — DGX Cloud Lepton streamlines and accelerates every stage of AI application development and deployment, at any scale. The marketplace works with a new NIM microservice container, which includes support for a broad range of large language models, including the most popular open LLM architectures and more than a million models hosted publicly and privately on Hugging Face.
For cloud providers, DGX Cloud Lepton includes management software that continuously monitors GPU health in real time and automates root-cause analysis, minimizing manual intervention and reducing downtime. This streamlines operations for providers and ensures more reliable access to high-performance computing for customers.
NVIDIA DGX Cloud Lepton Speeds Training and Deployment
Early-access DGX Cloud Lepton customers using the platform to accelerate their strategic AI initiatives include:
- Basecamp Research, which is speeding the discovery and design of new biological solutions for pharmaceuticals, food, and industrial and environmental biotechnology by harnessing its 9.8-billion-protein database to pretrain and deploy large biological foundation models.
- EY, which is standardizing multi-cloud access across the global organization to accelerate the development of AI agents for domain- and sector-specific solutions.
- Outerbounds, which enables customers to build differentiated, production-grade AI products powered by the proven reliability of open-source Metaflow.
- Prima Mente, which is advancing neurodegenerative disease research at scale by pretraining large brain foundation models to uncover new disease mechanisms and tools to stratify patient outcomes in clinical settings.
- Reflection, which is building superintelligent autonomous coding systems that handle the most complex enterprise engineering tasks.
Hugging Face Developers Get Access to Scalable AI Training Across Clouds
Integrating DGX Cloud Lepton with Hugging Face’s Training Cluster as a Service offering gives AI builders streamlined access to the GPU marketplace, making it easy to reserve, access and use NVIDIA compute resources in specific regions, close to their training data. Because DGX Cloud Lepton is connected to a global network of cloud providers, Hugging Face customers can quickly secure the GPU capacity they need for training runs. Mirror Physics, Project Numina and the Telethon Institute of Genetics and Medicine will be among the first Hugging Face customers to access Training Cluster as a Service, with compute resources provided through DGX Cloud Lepton. They will use the platform to advance state-of-the-art AI models in chemistry, materials science, mathematics and disease research.
“Access to large-scale, high-performance compute is essential for building the next generation of AI models across every domain and language,” said Clément Delangue, cofounder and CEO of Hugging Face. “The integration of DGX Cloud Lepton with Training Cluster as a Service will remove barriers for researchers and companies, unlocking the ability to train the most advanced models and push the boundaries of what’s possible in AI.”
DGX Cloud Lepton Boosts AI Startup Ecosystem
NVIDIA is working with Accel, Elaia, Partech and Sofinnova Partners to offer up to $100,000 in GPU capacity credits and support from NVIDIA experts to eligible portfolio companies through DGX Cloud Lepton.
BioCorteX, Bioptimus and Latent Labs will be among the first to access DGX Cloud Lepton, where they can discover and purchase compute capacity and use NVIDIA software, services and AI expertise to build, customize and deploy applications across a global network of cloud providers.
Availability
Developers can sign up for early access to NVIDIA DGX Cloud Lepton.
Watch the NVIDIA GTC Paris keynote from Huang at VivaTech and explore GTC Paris sessions.
https://nvidianews.nvidia.com/news/nvidia-dgx-cloud-lepton-connects-europes-developers-to-global-nvidia-compute-ecosystem