Nvidia And Google Cloud Unveil New Innovations: 5 Gemini, Blackwell And AI Things To Know

AI superstars Google Cloud and Nvidia have revamped their innovation partnership with new offerings around Gemini, Blackwell, Gemma, GPUs and more, aimed at boosting AI for customers.

AI titans Google Cloud and Nvidia are doubling down on their partnership by launching new integrated solutions around Gemini, Blackwell GPUs and much more with a focus on AI workloads.

From Gemini models that can now be deployed directly in customers' own data centers to new performance optimizations for Gemini workloads on Nvidia GPUs, here are five new enhancements to the Google Cloud and Nvidia partnership unveiled this week that every customer and channel partner should know.

Deploying Gemini Models On-Premises Via Nvidia Blackwell

Google Gemini models can now be deployed on-premises with Nvidia Blackwell GPUs through Google Distributed Cloud.

“Organizations will now be able to deploy Gemini models securely within their own data centers, unlocking agentic AI for customers,” said Uttara Kumar, senior product marketing manager for Nvidia, in a blog post this week.

Google’s Gemini family of models represents the cloud company’s most advanced and versatile AI models to date, designed for complex reasoning, coding and multimodal understanding. Google Distributed Cloud is the company’s fully managed solution for on-premises, air-gapped environments and edge computing.

[Related: Onix CEO: Google Cloud Has Agentic AI Lead Vs. AWS, Microsoft]

The new partnership will enable Nvidia and Google customers to innovate with Gemini while maintaining full control over their information and meeting privacy and compliance standards.

Google Gemini And Gemma Optimized For Nvidia GPUs

Nvidia and Google have worked on performance optimizations to make Gemini-based inference workloads run more efficiently on Nvidia GPUs, particularly within Google Cloud’s Vertex AI platform.

This revamped partnership lets Google serve a significant share of user queries for Gemini models on Nvidia-accelerated infrastructure across Vertex AI and Google Distributed Cloud.

“In addition, the Gemma family of lightweight, open models have been optimized for inference using the Nvidia TensorRT-LLM library and are expected to be offered as easy-to-deploy Nvidia NIM microservices,” said Nvidia’s Kumar.
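NIM microservices expose an OpenAI-compatible HTTP API, so a deployed Gemma endpoint can be called with an ordinary JSON request. Here is a minimal, hypothetical sketch: the endpoint address and model identifier below are assumptions for illustration, not details from the announcement.

```python
import json

# Assumption: a NIM container listening locally on its default port.
NIM_URL = "http://localhost:8000/v1/chat/completions"


def build_chat_request(prompt: str, model: str = "google/gemma-7b-it") -> bytes:
    """Build an OpenAI-style chat-completions payload for a NIM endpoint.

    The model id is illustrative; a real deployment would use whatever
    id the running microservice reports.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return json.dumps(payload).encode("utf-8")


req = build_chat_request("Summarize TensorRT-LLM in one sentence.")
```

POSTing `req` to `NIM_URL` with any HTTP client (for example `urllib.request`) would return a response following the familiar OpenAI chat-completions schema, which is what makes NIM microservices "easy to deploy" against existing client code.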

These optimizations maximize performance and make advanced AI more accessible, letting developers run their workloads on a range of architectures, from data centers to local Nvidia RTX-powered PCs and workstations.

Launch Of New Google Cloud And Nvidia Developer Community

Google Cloud and Nvidia launched a new joint developer community that brings experts and peers together to accelerate cross-skilling and innovation.

“By combining engineering excellence, open-source leadership and a vibrant developer ecosystem, the companies are making it easier than ever for developers to build, scale and deploy the next generation of AI applications,” said Nvidia’s Kumar.

The companies said they’re supporting the developer community by optimizing open-source frameworks, such as JAX, for seamless scaling on Blackwell GPUs and enabling AI workloads to run efficiently across tens of thousands of nodes.
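The scaling pattern described above, one JAX program sharded across many accelerators, can be sketched with JAX's sharding API. This is a minimal illustration, not the companies' actual code: it splits a batch across whatever devices are visible (Blackwell GPUs on an A4 node, or a single CPU elsewhere), assuming the batch size divides the device count.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Build a 1-D device mesh over all visible accelerators.
devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=("data",))
sharding = NamedSharding(mesh, PartitionSpec("data"))

# Place a batch so its leading axis is split across the mesh.
batch = jnp.arange(8.0).reshape(8, 1)
batch = jax.device_put(batch, sharding)


@jax.jit
def scale(x):
    # XLA compiles this once; each device runs it on its own shard.
    return x * 2.0


out = scale(batch)
```

The same program runs unchanged on one device or many; scaling it to the "tens of thousands of nodes" the companies describe is a matter of enlarging the mesh, which is the point of optimizing JAX for Blackwell.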

Confidential VMs And GKE Nodes With Nvidia H100 GPUs

Google Cloud’s Confidential Virtual Machines on the accelerator-optimized A3 machine series with Nvidia H100 GPUs are now available in preview, as are Confidential Google Kubernetes Engine (GKE) nodes.

Confidential VMs can help ensure the confidentiality and integrity of AI, machine learning and scientific simulation workloads using protected GPUs while the data is in use, said Daniel Rohrer, vice president of Nvidia software product security, in a recent blog post.

“Putting data and model owners in direct control of their data’s journey—Nvidia Confidential Computing brings advanced hardware-backed security for accelerated computing, providing more confidence when creating and adopting innovative AI solutions and services,” said Nvidia’s Rohrer.

Google’s New A4 VMs Now Generally Available On Nvidia Blackwell GPUs

In February, Google Cloud launched its new A4 virtual machines that feature eight Blackwell GPUs interconnected by Nvidia NVLink, offering a significant performance boost over the previous generation.

Google Cloud’s new A4 VMs on Nvidia HGX B200 are now generally available.

Google’s new VMs and AI Hypercomputer architecture are accessible via services like Vertex AI and GKE, enabling customers to choose a path to develop and deploy agentic AI applications at scale.

https://www.crn.com/news/ai/2025/nvidia-and-google-cloud-unveil-new-innovations-5-gemini-blackwell-and-ai-things-to-know