Google Cloud Next 2025 was a showcase of groundbreaking AI advancements. In this post, I’m excited to share some of my personal highlights and key takeaways from the conference.
Gemini 2.5 and the Live API
Google continues to push the boundaries of AI with its latest “thinking model”, Gemini 2.5. Thinking refers to an internal reasoning process carried out in the first output tokens, which allows the model to solve more complex tasks. Gemini 2.5 leads most benchmarks while keeping its focus on speed and a short Time To First Token (TTFT).
The unveiling of Gemini 2.5 truly stole the show. Its Flash version enables the new “Live API”, which allows streaming audio and video directly into Gemini. The possibilities are endless, but one demo particularly caught my attention: the Live API used as a voice assistant for a website, with screen-sharing capabilities that went beyond the website itself! The assistant could be interrupted simply by speaking, gave natural-sounding responses, and answered without an unnatural delay. It could actually help users complete tasks such as configuring DNS settings, which is a game-changer for user support and sparks my imagination about what else we can do with the Live API!
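The real Live API lives in Google’s GenAI SDK and streams actual audio and video. As a rough, stdlib-only sketch of the interaction pattern that made the demo feel natural — streamed playback that gets cancelled the instant the user starts speaking (barge-in) — something like this, where the session class and all names are my own stand-ins, not the real API:

```python
import asyncio

# Hypothetical stub standing in for a Live API session; the real API is in
# Google's GenAI SDK and streams audio chunks, not strings.
class FakeLiveSession:
    async def responses(self):
        for chunk in ["Sure,", " open your DNS settings,", " then add a record."]:
            yield chunk
            await asyncio.sleep(0)  # yield control, as awaiting the network would

async def play(session, transcript, first_chunk_played):
    # Play the assistant's reply chunk by chunk, as streamed audio would be.
    async for chunk in session.responses():
        transcript.append(chunk)
        first_chunk_played.set()

async def converse(interrupt=False):
    transcript = []
    first = asyncio.Event()
    playback = asyncio.create_task(play(FakeLiveSession(), transcript, first))
    if interrupt:
        # Barge-in: the user starts talking, so playback is cancelled mid-reply.
        await first.wait()
        playback.cancel()
    try:
        await playback
    except asyncio.CancelledError:
        pass
    return transcript

full = asyncio.run(converse())               # the reply plays to the end
cut = asyncio.run(converse(interrupt=True))  # the reply is cut off early
```

The point of the sketch is only the shape: playback is a cancellable task, and the user’s voice is a higher-priority signal that interrupts it.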
Agent Development Kit (ADK)
The Agent Development Kit (ADK) makes it far easier to build sophisticated multi-agent applications. It is an open-source framework designed to streamline the development of multi-agent systems while offering precise control over agent behavior and orchestration. It also supports the newly announced Agent2Agent (A2A) protocol, which Google is positioning as an open, secure standard for agent-to-agent collaboration, driven by a large community of technology, platform, and service partners.
Key Features of ADK:
- Flexible Orchestration: Define workflows using sequential, parallel, or loop agents, or use LLM-driven dynamic routing for adaptive behavior.
- Native Multi-Agent Architecture: Build scalable applications by composing specialized agents in a hierarchy.
- Rich Tool Ecosystem: Equip agents with pre-built tools (Search, Code Execution), custom functions, third-party libraries (LangChain, CrewAI), or even other agents as tools.
- Built-in Evaluation: Systematically assess agent performance.
- MCP support: Agents built with ADK act as MCP (Model Context Protocol) clients and thus integrate natively with any MCP server.
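To make the orchestration idea concrete without reproducing the real ADK API, here is a stdlib-only toy sketch of composing specialized agents into sequential and loop workflows; every name below is invented for illustration, and each “agent” is just a function:

```python
# Toy sketch of ADK-style workflow orchestration; names are invented, not
# the real ADK API.
def sequential(*agents):
    # Run agents one after another, piping each output state into the next.
    def run(state):
        for agent in agents:
            state = agent(state)
        return state
    return run

def loop(agent, until, max_iters=10):
    # Repeat an agent until a condition holds (or a safety limit is reached).
    def run(state):
        for _ in range(max_iters):
            state = agent(state)
            if until(state):
                break
        return state
    return run

# Specialized toy "agents" for a draft-and-refine pipeline.
draft = lambda s: {**s, "text": f"Draft about {s['topic']}"}
expand = lambda s: {**s, "text": s["text"] + "!"}
long_enough = lambda s: len(s["text"]) >= 20

pipeline = sequential(draft, loop(expand, until=long_enough))
result = pipeline({"topic": "DNS"})
```

In ADK itself these roles are played by LLM-backed agents and workflow agents such as `SequentialAgent` and `LoopAgent`, with parallel and LLM-driven routing variants alongside them.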
ADK powers the newly announced Agentspace, Google’s research agent, and Google’s customer support agents. Take a look at the Agent Garden for some examples! I will certainly be trying to build some internal multi-agent applications using ADK.
Agentspace
Agentspace aims to put AI tools in the hands of every employee through an easy setup process. Having worked with it at Xebia, I’ve seen how effectively it can centralize workflows. It connects to key tools such as Google Workspace, Atlassian, and Slack, enabling employees to search company knowledge and act on it immediately, for instance by creating emails or Jira tickets without leaving the Agentspace platform.
The announcements at Next ’25 included several enhancements:
- Unified Enterprise Search: Employees can access Agentspace’s search, analysis, and synthesis capabilities directly from Chrome’s search box.
- No-Code Agent Designer: A new no-code Agent Designer allows employees to create custom agents for their specific needs, regardless of their technical skills.
- Expert Google-Built Agents: Agentspace includes Deep Research and Idea Generation agents, in addition to NotebookLM for Enterprise.
Veo 2: High-Quality Video Generation
Veo 2 is now production-ready in the Gemini API and Vertex AI. It empowers developers to generate high-quality videos directly within their applications from both text and image prompts. Veo 2 can follow both simple and complex instructions and simulate real-world physics across a wide range of visual styles, and all generated videos are watermarked with SynthID.
There were some impressive demos on stage, and its prompt adherence and sequence stability stood out when I tried it myself.
BigFrames 2.0
BigFrames provides a Pythonic DataFrame and machine learning (ML) API powered by the BigQuery engine. bigframes.pandas provides a pandas-compatible API for analytics, and bigframes.ml offers a scikit-learn-like API for ML.
I saw its scalability in action on stage and was impressed by how easily you can adapt your pandas code, simply by changing the import, to let the BigQuery engine do the analysis. A compelling demo showcased automatic FAQ generation from a backlog of customer support calls. The process involved transcribing, embedding, clustering, and then sampling to generate the questions and answers, all through a familiar pandas interface executed on BigQuery.
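The shape of that FAQ pipeline is easy to sketch. Below is a stdlib-only toy version, where the “embedding” and “clustering” are deliberately trivial stand-ins; in the demo these stages ran as real ML models and BigQuery operations through the BigFrames API, and all helper names here are hypothetical:

```python
from collections import defaultdict

# Toy stand-ins for the demo's stages: transcribe -> embed -> cluster -> sample.
# The real pipeline used actual embeddings and clustering on BigQuery.
calls = [
    "how do i reset my password",
    "password reset is not working",
    "my invoice looks wrong",
    "question about an invoice charge",
]

def embed(text):
    # Toy "embedding": a keyword indicator vector instead of a real model.
    keywords = ["password", "invoice"]
    return tuple(int(k in text) for k in keywords)

def cluster(texts):
    # Group transcripts whose toy embeddings match exactly.
    groups = defaultdict(list)
    for t in texts:
        groups[embed(t)].append(t)
    return list(groups.values())

def sample_faq(groups):
    # Take one representative question per cluster, as the demo sampled
    # each cluster to generate an FAQ entry.
    return [g[0] for g in groups]

faq = sample_faq(cluster(calls))  # one candidate FAQ question per topic
```

Each stage maps onto a DataFrame operation, which is why the whole thing could stay inside a pandas-style interface.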
Conclusion
The pace of innovation in AI is truly accelerating, making it both demanding and thrilling to stay current. It’s a genuine privilege to witness and participate in these advancements, and I’m personally eager to explore practical use cases for these powerful new capabilities to ensure we benefit from this fast-moving landscape.